I recently installed wordpress and drupal on my two websites.
Wordpress on http://alrishachen.com
Drupal on http://alrisha.ca
Here I compare these two popular CMSes based on my own experience.
* installation. Both are easy to install and deploy on your hosting server: untar the tarball, set up the database (usually MySQL, per the documentation), then run the install page, enter the necessary data, and you are done. I'd say WordPress is easier than Drupal; its installation is faster, with less to worry about.
* administration. Once you have installed the system, you log in as the administrator to configure your website and add content. As WordPress is generally regarded as a blogging system, I used it for a blog. Its default administration interface is both easy and powerful. Drupal targets more general websites, so its administration is more complicated, with a lot of settings; it can be intimidating for a small-site admin to learn. But Drupal clearly provides more control, especially role-based user permission management. On this point I'd call them even, although the WordPress interface is more intuitive for beginners.
* theme. WordPress is the winner in terms of themes. The community provides a lot of themes, the previews look good, and applying a theme is easy. On the other hand, Drupal's themes seem to give designers more control over the layout and blocks of the site. For an admin like me, without much of a design background, I love the beautiful WordPress themes.
* module/plugin. Both have a huge number of modules/plugins available. WordPress modules are easy to install, but only with FTP permission.
* overall. I'd use WordPress for small, blog-style websites. Drupal seems more flexible to configure and provides more control. If you spend a lot of time on website design and are willing to learn Drupal, choose Drupal; if you just need a quick website for a specific purpose, I'd recommend WordPress.
Anyway, this comparison is based on my limited experience with these two CMSes and my personal preference. My opinion may change with more use and new requirements. I'd like to try other popular CMSes as well, such as Joomla. More reviews may come later.
A blog about systems and programming in the *nix environment. It covers:
* scripts I am familiar with: shell, Perl, Python, Tcl/Tk, and PHP
* utilities: sed, grep, awk, cat, tac, ...
* kernel development and optimization, device drivers
* networking
* toolchains and build environments
Saturday, April 24, 2010
Tuesday, March 16, 2010
linking with multiple definition of functions
I fixed a linking problem today, caused by multiple definitions of the same function name; the linker complains about such duplicates. The solution is to add "-z muldefs" to LDFLAGS, so that the linker accepts multiple definitions and links in the first definition it encounters for each function. This can be dangerous: the final image may differ if the link order of the object files changes, so be careful with this option. We used it because we defined some of our own stdio functions with the same names as the standard ones, which produces multiple-definition errors at link time. We just need to place our own stdio implementations ahead of the newlib stdio library in the link order.
A better solution would be to rename our functions so they don't collide with the stdio names; that would avoid further confusion.
I then realized I had fixed this same problem on the bcmring platform a year ago, and had to fix it again for the bcmhana platform. The engineer who ported the makefile to bcmhana didn't pay enough attention to this option.
Sunday, March 07, 2010
u-boot works now
I finally got u-boot working on our bcmring platform on Friday. On Thursday I had a hunch that the ATAG parameters were not set properly, so the kernel couldn't boot. I used JTAG to examine the memory where the ATAG parameters are stored. By comparing the memory contents set by the working boot2 loader and by u-boot, I found that the kernel command line wasn't set at all in u-boot. It turned out to be an error in the configuration header files. After setting the proper kernel command line, I can finally boot the kernel with u-boot. That's big progress, and I got u-boot working in two weeks.
While doing this project, I gained a better understanding of the kernel boot procedure, the ATAG parameters, and u-boot itself. After discussing with my lead, the upcoming u-boot work includes:
* deliver the changes into linux-c for u-boot support, with proper parameters and configurations.
* upgrade u-boot to the latest version and get it working on bcmring platform.
* port the ethernet driver to u-boot, so that network boot and a network rootfs are supported.
* if time permits, get u-boot or another bootloader working for bcmhana as well.
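For reference, in u-boot of this era the kernel command line and the choice of which ATAGs to emit live in the board configuration header, which is where a mistake like the one above would sit. A hypothetical sketch with invented values, not our actual board file:

```c
/* include/configs/bcmring.h -- hypothetical sketch, values invented */
#define CONFIG_SETUP_MEMORY_TAGS 1   /* emit ATAG_MEM for the kernel    */
#define CONFIG_CMDLINE_TAG       1   /* emit ATAG_CMDLINE               */
#define CONFIG_BOOTARGS \
        "console=ttyS0,115200 root=/dev/mtdblock2 rootfstype=jffs2"
```

If CONFIG_CMDLINE_TAG is missing, u-boot never writes the command line into the ATAG list, which matches the symptom seen over JTAG.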
Thursday, March 04, 2010
u-boot again
I am still working on the u-boot project. After I got the ARM RealView ICE debugger, my life got easier, but it is still tough. I traced the assembly code of the Linux kernel and compared the execution of the same kernel booted by boot2 and by u-boot. Since boot2 can successfully boot the kernel image, I just need to make sure u-boot brings the kernel to the same state that boot2 creates.
Today, by checking the kernel parameters, I figured out that the memory starting address was not set to the proper value in u-boot. However, even after I modified the u-boot code, I still can't get past the same point in the kernel image, so there is still a problem to be fixed. I'll keep working on it tomorrow by checking the memory set up by boot2/u-boot before the kernel boots.
The Linux kernel needs some settings passed via ATAGs, which it reads during the boot stage. There are several ATAG parameters, like the memory starting address and size, the command line, the architecture ID, etc.
Wednesday, February 24, 2010
u-boot
I have been working on u-boot since last week. I need to port u-boot to our bcmring platform.
We have an example from another Broadcom team that supports u-boot on the 2153 platform, which has an ARM core similar to our bcmring platform.
I got u-boot to build last week. This week, I learned to use JTAG to load the u-boot ELF file into the platform's memory after DDR initialization. I ported the UART driver and then the timer driver. Yesterday, I was able to boot to the u-boot shell via JTAG. That's a good milestone.
Today, I studied our first-stage boot loader (boot1) and the relation between boot1 and boot2, the second-stage boot loader. Since I am trying to replace boot2 with u-boot, after figuring out boot2's memory map, with the help of Darwin, I managed to load u-boot from boot1 and boot to the u-boot shell without using JTAG. That's a big success. I can now work on the NAND driver to get u-boot to boot our kernel.
The ethernet driver and SD/MMC support will follow the NAND driver.
Wednesday, January 13, 2010
udev 150
I've successfully updated udev on our target from version 142 to 150. I added three patches to fix compile errors. These patches are needed because the kernel headers in our toolchain are older than the latest version; for example, the inotify_init1 system call is not available in our toolchain, so I have to use inotify_init instead.
Anyway, it worked on my first attempt. The old 142 port was done by an intern, with too many modifications to the makefiles and configure scripts; those patches were 90% junk, I would say. The newer version is more maintainable now.
Tuesday, January 12, 2010
udev
I've got udev working on our platform. It was initially ported by an intern, but it didn't seem to work before he left. The problem was the kernel configuration: once I sorted out the proper options, udevd started properly and it works. The OTA also works with udev. The previous port was udev 142, which is rather old, so I am porting version 150 to our system.
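For the record, udevd depends on a handful of kernel options; a hedged, non-exhaustive sketch of the relevant .config fragment (check the udev README of your version for the authoritative list):

```
CONFIG_HOTPLUG=y
CONFIG_UNIX=y          # unix domain sockets, used for udevd's control socket
CONFIG_SYSFS=y         # udev builds /dev from sysfs uevents
CONFIG_TMPFS=y         # /dev is usually a tmpfs mount
CONFIG_INOTIFY_USER=y  # udevd watches its rules directories
CONFIG_PROC_FS=y
```

A missing entry here typically shows up exactly as "udevd starts but nothing works", which is consistent with the symptom above.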
Sunday, December 27, 2009
Toolchain study digest
Some notes I took recently about the GNU GCC toolchain and related topics, from the book "Professional Linux Programming". Although I am familiar with most of the content, I'll record it here for better recall.
* -fPIC (position independent code), used to generate code that can be loaded at any address; usually used to build shared libraries.
* -shared -o libfoo.so, generate a shared library directly, a convenience provided by GCC.
* LD_LIBRARY_PATH, specify additional shared-library search paths besides the regular /lib, /usr/lib ... directories.
GCC options:
* General Options:
-c (compilation only)
-S (generate assembly code)
* Language Options:
-ansi (ANSI C90 standard)
-std=c89, gnu89, c99, gnu99 (select ISO C standards or GNU C dialects)
-fno-builtin (do not use GCC's built-in versions of printf/memcpy/etc.; use the external library versions)
* Warning Levels:
-pedantic (strictly follow the C standard and issue warnings; -pedantic-errors turns them into errors)
-Wformat (one of many warnings to check the code)
-Wall (a wide range of warnings)
* Debugging:
-g
* Optimization:
-O0, -O3
-Os
* Hardware options:
-march
-msoft-float
-mbig-endian, -mlittle-endian
-mabi
GNU Binutils
Assembler:
as -o foo.o foo.s
Linker:
ELF: Executable and Linkable Format
crtbegin.o, crtend.o
Linking script: /usr/lib/ldscripts
Objcopy and objdump:
objdump -x -d -S hello_world
Kernel and toolchain:
* inline assembly
__asm__ __volatile__
* __attribute__, section, alignment, alias
* custom linker script, arch/<arch>/kernel/vmlinux.lds
Monday, November 30, 2009
Useful Mutt documents and tips
I have been using mutt as my main email client for a couple of months. It is powerful and convenient, especially for someone like me who loves terminal programs.
I'll collect relevant mutt documents here.
Training your mutt. http://www.linux.com/archive/articles/58760
Saturday, November 28, 2009
Vim tricks
I read another blog post today about Efficient Editing with Vim and learned some new tricks, as follows.
* gj, gk, to move by screen line instead of real line. This can be really useful when you are editing an email with long lines.
* fx, Fx, go forward/backward to the next occurrence of x.
* (, ), move to the next sentence or the previous sentence
* xG, go to line x, the same as :x, just faster
* H, M, L, move to the top, middle, and bottom of the screen
Wednesday, November 19, 2008
kernel upgrade to 2.6.27 for mips/arm platform
I encountered some other interesting issues when upgrading the kernel from 2.6.23 to 2.6.27 for our MIPS32 platform.
1. The timer. In 2.6.24 and later, the default MIPS timer has been separated from the old timer code. Since our architecture uses the default CP0 compare-based timer, we need to enable the r4k timer to get the timer working. I spent a couple of days understanding this change; start_kernel was able to proceed once the proper timer was enabled. However, we also have a timer block in our chip, which can be set to a certain frequency as an independent timer source. Later in debugging I enabled the timer block and used our own timer source, and that works too.
2. The cache is disabled when the kernel starts and must be re-enabled later. In 2.6.23 the cache was enabled by default; in 2.6.27 (or some version after 2.6.23) the cache policy became configurable by the kernel option "cca=" and is disabled by default. This change really hurt. With so many changes between the 23 and 27 kernels, it was almost impossible to notice at first. What I observed initially was the slowness of the system: BogoMIPS dropped from about 273 to 3, which is unbelievable. At first I doubted the correctness of the timer code, so I scrutinized it and studied the new timer implementation; I even implemented our own timer using the timer block in our chip. None of that helped. The system could boot to busybox but was really slow. Then I happened to run our performance-counter program, which reported a cache hit rate of 0, meaning no cache was enabled. I checked our private I/D cache registers and they seemed enabled, but I had forgotten to check the MIPS CP0 register, which has another setting for the cache policy. I used a very stupid, old method to pinpoint the problem: I added a NOP test to both the 23 and 27 kernels. In the 23 kernel, the NOP test gives a much lower CPI (clocks per instruction) once the cache is enabled, and a high CPI otherwise; in the 27 kernel the CPI never changed. I located the exact point where the CPI drops in the 23 kernel and checked the corresponding code in the 27 kernel, and finally found that the default cache policy is disabled in 27 while it is enabled in 23. After adding the "cca=3" kernel command-line option, everything went back to normal: BogoMIPS recovered and the kernel boots properly.
3. EXPORT_SYMBOL versus EXPORT_SYMBOL_GPL. If your driver or kernel module uses the latter symbols, it must be GPL'ed. This can be a problem for us, as we don't want to open-source all our kernel modules, especially the WLAN driver; we deliver our WLAN drivers as binary kernel modules.
Tuesday, November 18, 2008
kernel upgrade to 2.6.27
I just fixed a network driver bug while upgrading the kernel from 2.6.23 to 2.6.27 for our MIPS platform. It took a couple of weeks. The original problem appeared when NAPI was used in the network driver and I changed the net_poll function accordingly: I got a kernel panic with a memory access failure. After a long debugging session, I found there is a problem when the driver tries to derive the skb address from the skb->data pointer. This was weird, because the same code was used in both the 2.6.23 and 2.6.27 kernels. The original author of the network driver gave a hint: he had hit a similar problem when writing a network driver for our next-generation chip, based on the old driver. He mentioned that the original driver confused physical and virtual addresses when accessing DMA'ed memory. That was quite helpful. I spent a whole day digging into the issue and studying the new driver for the next-generation chip. After replacing the memory allocation function for the skb buffer, I finally got the proper way to access the memory, and learned about physical/virtual/bus addresses when accessing memory in the kernel. The principle is simple, as stated by Linus: use virtual addresses when accessing memory in the kernel, and bus addresses when the memory is handed to a device. On some architectures the bus address is identical to the physical address, but never use physical addresses directly. The functions phys_to_virt, virt_to_bus, bus_to_virt, and virt_to_phys are the helpers for these conversions.
It seems we still have a lot of bugs in our network drivers. Apparently our engineers don't yet have enough experience writing Linux drivers; most of their experience is in VxWorks, with its flat memory model.
Thursday, October 02, 2008
kernel upgrade
I have recently been working on a kernel upgrade from 2.6.23.17 to 2.6.27-rc5 for our VoIP SoCs.
The following are lessons I learned this time.
* use git to help migrate the patches. We manage patches in our own system, not in git, but git can still help migrate them:
* get a kernel git tree (git fetch)
* check out a branch for your current kernel version (git branch, git checkout)
* patch the git tree with your patches (patch, git commit)
* rebase the git tree to the newer kernel (git rebase)
* resolve conflicts; git can remember each resolution for reuse (git rerere)
* once all conflicts are resolved, generate new patches for the new kernel (git format-patch)
* get the new kernel building for the platform first
* use make oldconfig to migrate the kernel configurations
* keep as much of the old configuration as possible
* turn on most of the kernel debug options, like early_printk, printk timestamps, spinlock debugging, etc.
* if it doesn't boot, there may be many reasons
* check the kernel dump, especially the dumped code; disassemble the whole kernel to figure out where the dump happened
* there are usually big changes in each major kernel version, such as the timer changes in the MIPS architecture, path and include changes in the ARM architecture, and SD/MMC infrastructure changes; be very careful.
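The patch-migration steps above can be reproduced on a toy repo (tags, file names, and the patch are all invented; rerere only matters once real conflicts appear):

```shell
# An "upstream" repo with two release tags, and a local patch branch
# rebased from the old release onto the new one.
set -e
rm -rf demo && mkdir demo && cd demo
git init -q
git config user.email you@example.com && git config user.name you
echo v1 > kernel.c && git add . && git commit -qm 'release 1' && git tag r1
echo v2 > kernel.c && git commit -qam 'release 2' && git tag r2
git checkout -qb ours r1                # branch at the old kernel version
echo driver > driver.c && git add driver.c && git commit -qm 'our patch'
git config rerere.enabled true          # remember conflict resolutions
git rebase -q r2                        # replay our patch onto the new base
git format-patch -q r2                  # regenerate the patch series
cat kernel.c                            # now based on release 2
```

In the real workflow, r1/r2 would be kernel version tags such as v2.6.23.17 and v2.6.27-rc5, and the local commits would come from applying our managed patch files with patch + git commit (or git am).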
Sunday, September 21, 2008
vmlinux.lds
vmlinux.lds is the linker script for the Linux kernel. The main kernel script is generated from vmlinux.lds.S under arch/<arch>/kernel (arch/mips/kernel for MIPS, arch/x86/kernel for x86); the compressed boot wrapper has its own linker script under arch/<arch>/boot/compressed.
Wednesday, April 16, 2008
Tuesday, March 04, 2008
Linker script .lds
A script xxx.lds is used to instruct the linker on how to generate and link the final ELF target. I recently solved a bootloader problem by moving a section (.reginfo) into the data section. A bug in binutils let the reginfo section, which shouldn't be loaded, overwrite part of my target ELF file.
Some links about the linker.
http://www.gnu.org/software/binutils/manual/ld-2.9.1/html_mono/ld.html
http://www.embedded.com/2000/0002/0002feat2.htm
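For illustration, a minimal (hypothetical) SECTIONS sketch that folds a section such as .reginfo into the data output section instead of leaving it as a separate loadable segment; addresses and section set are invented:

```
SECTIONS
{
  . = 0x80000000;                    /* hypothetical load address        */
  .text : { *(.text) }
  .data : { *(.data) *(.reginfo) }   /* fold .reginfo into data          */
  .bss  : { *(.bss) }
}
```

Placing the problem section inside an output section you control determines where (and whether) it lands in the final image, rather than leaving that to the linker's defaults.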
Tuesday, January 29, 2008
CGI internal server error
I haven't worked on CGI/Perl for a while; recently I am reworking my Perl scripts for the star5.ca website. The following items should be checked when you get an "Internal Server Error" from Perl. GoDaddy has no server log enabled by default, so you have to be very careful when doing CGI/Perl programming.
1. File permission (your perl program should have 755 mode)
2. DOS line endings (your Perl program should use Unix line endings instead of DOS line endings; many systems will complain "file not found" because of the line-ending issue in the shebang)
3. Path to perl (usually #!/usr/bin/perl)
4. Library path (use lib "path to your local library"), to support local Perl modules.
Wednesday, January 16, 2008
gcc
I have been working on a toolchain upgrade recently, from gcc 3.4.6 to gcc 4.1.1.
Things I learned during the upgrade.
1. libgcc contains compiler support routines, usually for floating-point computation. For example, if 64-bit floating point is not supported natively on the target, libgcc implements the computation in terms of native instructions.
You can use "gcc -v" to see the default library paths ("-L xxx") used by gcc, besides the paths passed by your Makefile. "gcc -v" also shows the exact commands gcc invokes, as gcc itself is an umbrella program that calls "cc1", "collect2", etc.
2. -ffreestanding may be used for kernel compilation. It implies that the standard library may not exist and that program startup is not necessarily at "main". To compile Linux kernel 2.6.17 with gcc 4.1.1, we need to add -ffreestanding to our make rules.
3. Optimization: gcc optimizes code differently in each version. Some functions were optimized away in the kernel but were still called from the bootloader; I had to create a dummy function in the bootloader to avoid a link error.
4. -std=gnu99. I met several preprocessing errors when building gcc in buildroot; cpp complained about unknown labels in assembly code. It turns out that cpp in the GNU dialect is not 100% compatible with the ISO C standard. By removing the -std=gnu99 flag, I got gcc compiled.
5. --sysroot. This option lets you use a different set of include/lib directories under a different root. It is useful in a cross-compiling environment.
Wednesday, December 12, 2007
Kernel network tune-up
I closed two open tickets related to networking in the Linux kernel, so I'll write them down in order not to forget.
* broadcast ping problem. Since Linux kernel 2.6.14+, the kernel won't respond to broadcast ping by default; the default behavior changed. You have to "echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts" to enable the response to broadcast ping.
* loopback ping > 32K payload. On small systems with <= 16M of memory, we use loopback UDP for IPC, but we couldn't receive packets with payloads over 32K. I wrote a test program to verify the problem. After a day's hacking, I found it is due to a limit on the internal buffer size of the skbuff (the core data structure of the Linux network stack), controlled by "/proc/sys/net/core/rmem_max" and "/proc/sys/net/core/rmem_default". We have to raise those two limits to allow loopback pings with 32K+ payloads. There is no such problem over a real network, as the MTU will eventually split the data so no big skbuff is needed; the loopback has no MTU limitation, which is why it matters there.
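The knobs from both tickets, readable on any Linux box (the broadcast-ping entry is icmp_echo_ignore_broadcasts; writing requires root, and the values shown are illustrative):

```shell
cat /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts   # 1 = ignore bcast ping
cat /proc/sys/net/core/rmem_max /proc/sys/net/core/rmem_default
# raising the socket-buffer limits (as root):
# echo 262144 > /proc/sys/net/core/rmem_max
# echo 262144 > /proc/sys/net/core/rmem_default
```

These settings can also be made persistent via /etc/sysctl.conf on systems that support it.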
Friday, November 09, 2007
Kernel programming and driver debugging
Don't forget to kill the klogd and syslogd daemons before you start debugging your driver or kernel code. Otherwise, your printk output will be buffered and you won't know the exact point of the crash or hang.
"
killall klogd
killall syslogd
"
For an interrupt handler, the first thing your ISR should do is disable or mask the interrupt; otherwise, the interrupt may keep firing and your system may lock up.
At the end of your ISR, you can unmask and re-enable the interrupt.
To get a list of interrupts and their handlers,
"
cat /proc/interrupts
"