Saturday, August 29, 2009

Damn It! Hostmonster!

This webpage is not available.

The webpage at https://host381.hostmonster.com:2083/login/ might be temporarily down or it may have moved permanently to a new web address.

Here are some suggestions:
Reload this web page later.

Monday, August 24, 2009

Disk Quota in Linux

http://linuxhelp.blogspot.com/2005/10/disk-quotas-in-linux-explained.html

Disk Quotas in GNU/Linux explained

Have you ever encountered a situation where your children, using your PC, hoard music and video on the hard disk and fill up all the space? In Linux there is a way to prohibit users from hogging all the disk space: quotas. Here I will explain how to set up disk quotas in Linux.
Setting up disk quotas
In this example, let us assume that /home is on its own separate partition and is running out of space. So we use the quota system to manage and restrict disk space for all users (or a select few).
  1. Enter Single User mode - As we'll need to remount the /home filesystem it's best to ensure that no other users or processes are using it. This is best achieved by entering single user mode from the console. This may be unnecessary if you are certain that you're the only user on the system.
    # init 1
  2. Edit your /etc/fstab file - You'll need to add the usrquota option to the /etc/fstab file to let it know that you are enabling user quotas in your file system.
    ----------------------------------------------------------
    Old /etc/fstab file:
    LABEL=/home     /home   ext3    defaults            1   2
    ----------------------------------------------------------
    New /etc/fstab file:
    LABEL=/home     /home   ext3    defaults,usrquota   1   2
    ----------------------------------------------------------
  3. Remount your file system - Once you finish editing your /etc/fstab file, you have to remount your filesystem as follows :
    # mount -o remount /home
  4. Create aquota.user and/or aquota.group files - These are created in the topmost directory of the filesystem, in our case /home. Since we are enabling only per-user quotas, only the aquota.user file is required.
    # touch /home/aquota.user
    # chmod 600 /home/aquota.user
  5. Make Linux read the aquota.user file - This is done using the quotacheck command (-v verbose, -a check all quota-enabled filesystems in /etc/fstab, -g check group quotas, -u check user quotas, -m don't try to remount the filesystem read-only).
    # quotacheck -vagum
  6. Modify the user's quota information - Use the edquota command for this purpose.
    # edquota -u ravi
    The above command will invoke the vi editor, which allows you to edit a number of fields.
    Disk quota for user ravi (uid 503):
    Filesystem   blocks   soft   hard   inodes   soft   hard
    /dev/hda3        24      0      0        7      0      0
    Blocks : The amount of space in 1k blocks the user is currently using
    inodes : The number of files the user is currently using.
    Soft Limit : The maximum blocks/inodes a quota user may have on a partition. The role of a soft limit changes if grace periods are used. When this occurs, the user is only warned that their soft limit has been exceeded. When the grace period expires, the user is barred from using additional disk space or files. When set to zero, limits are disabled.
    Hard Limit : The absolute maximum blocks/inodes a quota user may have on a partition. When a grace period is set, users may temporarily exceed their soft limit, but they can never exceed their hard limit.
    In our example, we limit the user ravi to a maximum of 5MB of data storage on /dev/hda3 (/home) as follows:
    Disk quota for user ravi (uid 503):
    Filesystem   blocks   soft   hard   inodes   soft   hard
    /dev/hda3        24   5000      0        7      0      0
    Note: 5MB is used just for test purposes. A more realistic limit would be something like 5 GB on a 20 GB hard disk.

  7. Get out of single user mode - Return to your original run level by typing either init 3 or init 5.
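If the new limits do not seem to take effect, quotas may still need to be switched on explicitly; quotaon and quota are standard commands from the quota tools (ravi is our example user):

    # quotaon -av
    # quota -v ravi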
Other quota commands

Editing grace periods
 # edquota -t 
This command sets the grace period for each filesystem. The grace period is a time limit before the soft limit is enforced for a quota enabled file system. Time units of seconds, minutes, hours, days, weeks and months can be used. This is what you will see with the 'edquota -t' command:

Grace period before enforcing soft limits for users:
Time units may be: days, hours, minutes, or seconds
  Filesystem      Block grace period      Inode grace period
  /dev/hda3       7days                   7days
Editing group quotas - use the -g option followed by the name of the group:
# edquota -g groupname
Checking quotas regularly - Linux doesn't check quota usage each time a file is opened; you have to force it to process the aquota.user and aquota.group files periodically with the quotacheck command. You can set up a cron job to run a script similar to the one below to achieve this.
#!/bin/bash
quotacheck -vagu
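For example, this root crontab entry (the path to quotacheck may differ on your system) would rescan all quotas every Sunday at 3 AM:

0 3 * * 0 /sbin/quotacheck -vagu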
Getting quota reports - The repquota command lists quota usage limits of all users of the system. Here is an example.
# repquota /home
*** Report for user quotas on device /dev/hda3
Block grace time: 7days; Inode grace time: 7days
              Block limits                File limits
User      used    soft    hard   grace    used   soft
------------------------------------------------------
root     52696       0       0            1015      0
...
ravi        24       0       0               7      0

How to Configure Jumbo Frames on Linux

Linux Configure Jumbo Frames to Boost Network Performance / Throughput

Q. Jumbo frames are Ethernet frames that carry more than the standard 1500 bytes of payload (MTU). Does Linux support jumbo frames? If so, how do I set the frame size to 9000 bytes under Linux operating systems?

A. Most modern Linux distributions (that is, Linux kernel 2.6.17+) do support frames larger than 1500 bytes, and this can improve performance. First, make sure your network driver supports a custom MTU. Second, you need a compatible gigabit NIC and switch (such as Cisco Catalyst 4000/4500 switches with Supervisor III or Supervisor IV) that is jumbo-frame clean. If you are not sure about the requirements, please refer to your product documentation.
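For example, on a 2.6 kernel you can raise the MTU with either of the following commands (eth0 is an assumption here; substitute your actual interface name):

# ifconfig eth0 mtu 9000 up
# ip link set dev eth0 mtu 9000

To make the change persistent across reboots, set MTU=9000 in /etc/sysconfig/network-scripts/ifcfg-eth0 on Red Hat style systems, or add an "mtu 9000" line to the interface stanza in /etc/network/interfaces on Debian/Ubuntu.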

Sunday, August 09, 2009

generate uboot uImage

Porting Linux to U-Boot based systems:
---------------------------------------

U-Boot cannot save you from doing all the necessary modifications to
configure the Linux device drivers for use with your target hardware
(no, we don't intend to provide a full virtual machine interface to
Linux :-).

But now you can ignore ALL boot loader code (in arch/ppc/mbxboot).

Just make sure your machine specific header file (for instance
include/asm-ppc/tqm8xx.h) includes the same definition of the Board
Information structure as we define in include/asm-<arch>/u-boot.h,
and make sure that your definition of IMAP_ADDR uses the same value
as your U-Boot configuration in CONFIG_SYS_IMMR.


Configuring the Linux kernel:
-----------------------------

No specific requirements for U-Boot. Make sure you have some root
device (initial ramdisk, NFS) for your target system.


Building a Linux Image:
-----------------------

With U-Boot, "normal" build targets like "zImage" or "bzImage" are
not used. If you use recent kernel source, a new build target
"uImage" will exist which automatically builds an image usable by
U-Boot. Most older kernels also have support for a "pImage" target,
which was introduced for our predecessor project PPCBoot and uses a
100% compatible format.

Example:

make TQM850L_config
make oldconfig
make dep
make uImage

The "uImage" build target uses a special tool (in 'tools/mkimage') to
encapsulate a compressed Linux kernel image with header information,
CRC32 checksum etc. for use with U-Boot. This is what we are doing:

* build a standard "vmlinux" kernel image (in ELF binary format):

* convert the kernel into a raw binary image:

${CROSS_COMPILE}-objcopy -O binary \
-R .note -R .comment \
-S vmlinux linux.bin

* compress the binary image:

gzip -9 linux.bin

* package compressed binary image for U-Boot:

mkimage -A ppc -O linux -T kernel -C gzip \
-a 0 -e 0 -n "Linux Kernel Image" \
-d linux.bin.gz uImage


The "mkimage" tool can also be used to create ramdisk images for use
with U-Boot, either separated from the Linux kernel image, or
combined into one file. "mkimage" encapsulates the images with a 64
byte header containing information about target architecture,
operating system, image type, compression method, entry points, time
stamp, CRC32 checksums, etc.

"mkimage" can be called in two ways: to verify existing images and
print the header information, or to build new images.

In the first form (with "-l" option) mkimage lists the information
contained in the header of an existing U-Boot image; this includes
checksum verification:

tools/mkimage -l image
-l ==> list image header information

The second form (with "-d" option) is used to build a U-Boot image
from a "data file" which is used as image payload:

tools/mkimage -A arch -O os -T type -C comp -a addr -e ep \
-n name -d data_file image
-A ==> set architecture to 'arch'
-O ==> set operating system to 'os'
-T ==> set image type to 'type'
-C ==> set compression type 'comp'
-a ==> set load address to 'addr' (hex)
-e ==> set entry point to 'ep' (hex)
-n ==> set image name to 'name'
-d ==> use image data from 'datafile'

Right now, all Linux kernels for PowerPC systems use the same load
address (0x00000000), but the entry point address depends on the
kernel version:

- 2.2.x kernels have the entry point at 0x0000000C,
- 2.3.x and later kernels have the entry point at 0x00000000.

So a typical call to build a U-Boot image would read:

-> tools/mkimage -n '2.4.4 kernel for TQM850L' \
> -A ppc -O linux -T kernel -C gzip -a 0 -e 0 \
> -d /opt/elsk/ppc_8xx/usr/src/linux-2.4.4/arch/ppc/coffboot/vmlinux.gz \
> examples/uImage.TQM850L
Image Name: 2.4.4 kernel for TQM850L
Created: Wed Jul 19 02:34:59 2000
Image Type: PowerPC Linux Kernel Image (gzip compressed)
Data Size: 335725 Bytes = 327.86 kB = 0.32 MB
Load Address: 0x00000000
Entry Point: 0x00000000

To verify the contents of the image (or check for corruption):

-> tools/mkimage -l examples/uImage.TQM850L
Image Name: 2.4.4 kernel for TQM850L
Created: Wed Jul 19 02:34:59 2000
Image Type: PowerPC Linux Kernel Image (gzip compressed)
Data Size: 335725 Bytes = 327.86 kB = 0.32 MB
Load Address: 0x00000000
Entry Point: 0x00000000

NOTE: for embedded systems where boot time is critical you can trade
speed for memory and install an UNCOMPRESSED image instead: this
needs more space in Flash, but boots much faster since it does not
need to be uncompressed:

-> gunzip /opt/elsk/ppc_8xx/usr/src/linux-2.4.4/arch/ppc/coffboot/vmlinux.gz
-> tools/mkimage -n '2.4.4 kernel for TQM850L' \
> -A ppc -O linux -T kernel -C none -a 0 -e 0 \
> -d /opt/elsk/ppc_8xx/usr/src/linux-2.4.4/arch/ppc/coffboot/vmlinux \
> examples/uImage.TQM850L-uncompressed
Image Name: 2.4.4 kernel for TQM850L
Created: Wed Jul 19 02:34:59 2000
Image Type: PowerPC Linux Kernel Image (uncompressed)
Data Size: 792160 Bytes = 773.59 kB = 0.76 MB
Load Address: 0x00000000
Entry Point: 0x00000000


Similarly, you can build U-Boot images from a 'ramdisk.image.gz' file
when your kernel is intended to use an initial ramdisk:
when your kernel is intended to use an initial ramdisk:

-> tools/mkimage -n 'Simple Ramdisk Image' \
> -A ppc -O linux -T ramdisk -C gzip \
> -d /LinuxPPC/images/SIMPLE-ramdisk.image.gz examples/simple-initrd
Image Name: Simple Ramdisk Image
Created: Wed Jan 12 14:01:50 2000
Image Type: PowerPC Linux RAMDisk Image (gzip compressed)
Data Size: 566530 Bytes = 553.25 kB = 0.54 MB
Load Address: 0x00000000
Entry Point: 0x00000000
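Once such an image has been copied to your TFTP server, a typical boot
sequence from the U-Boot prompt looks like this (the load address
200000 is only an example; use an address that suits your board's
memory map):

=> tftp 200000 uImage
=> bootm 200000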

Saturday, August 08, 2009

signature verification error while running aptitude install

http://littlebrain.org/2008/11/25/an-error-occurred-during-the-signature-verification/


If you're using Ubuntu as your Linux distribution of choice and you like to add unofficial repositories, you have probably seen some warnings when you run the apt-get update command.

The warnings are probably like this

W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used.
GPG error: http://dl.google.com stable Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY A040830F7FAC5991
 
W: Failed to fetch http://dl.google.com/linux/deb/dists/stable/Release
 
W: Some index files failed to download, they have been ignored, or old ones used instead.
W: You may want to run apt-get update to correct these problems

Those are the warnings I got when I added the Google Debian repository. Nothing is really broken, since we can still install packages from those repositories. But if we can get rid of the warnings, that would be a lot better. We can use the gpg command to fetch the key.

Short Version

For those who don't want to read the explanation below, here's the short version:

gpg --keyserver hkp://subkeys.pgp.net --recv-keys A040830F7FAC5991
gpg --export --armor 7FAC5991 | sudo apt-key add -

Long Version

Here's the explained step. First we have to get the key from the key server.

gpg --keyserver hkp://subkeys.pgp.net --recv-keys A040830F7FAC5991

The A040830F7FAC5991 is taken from the warning shown before; change it if you are dealing with a different repository. For the Google repository, you should get this output:

gpg: requesting key 7FAC5991 from hkp server subkeys.pgp.net
gpg: key 7FAC5991: public key "Google, Inc. Linux Package Signing Key <linux-packages-keymaster@google.com>" imported
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: Total number processed: 1
gpg: imported: 1

After that, type

gpg --export --armor 7FAC5991 | sudo apt-key add -

Where the 7FAC5991 is from the output shown before.

After that you'll get OK as the output. You may run apt-get update again.
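By the way, newer apt versions can do both steps in one command via apt-key (assuming your apt-key supports the adv subcommand):

sudo apt-key adv --keyserver hkp://subkeys.pgp.net --recv-keys A040830F7FAC5991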

Oh, one more thing: I have only tested this method with the Opera and Google repositories.

Thursday, August 06, 2009

Makefile how to

Here is a brief introduction to Makefiles. Thanks to the author.
http://www.wlug.org.nz/MakefileHowto


Makefiles are easy.
In fact, to build a simple program that doesn't depend on any libraries, you don't even need a makefile. make(1) is smart enough to figure it all out itself. For instance, if you have a file "foo.c" in the current directory:

$ ls
foo.c
$ make foo
cc foo.c -o foo

make(1) will detect the type of file and compile it for you, automatically naming the executable the same as the input file (gcc(1) foo.c will give you a file called a.out unless you manually specify a name for it). If you need libraries, you can specify them by setting the LDFLAGS variable on the command line.
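For example, to link the math library you could run the following (a sketch of GNU make's built-in link rule; LDLIBS is the variable to use instead if your linker needs libraries listed after the sources):

$ make foo LDFLAGS=-lm
cc -lm foo.c -o foo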

Of course, most useful projects contain more than one file. A makefile describes the dependencies between files. It is called Makefile (with a capital M). Each line will typically consist of a filename, a colon and a list of dependencies. For instance, a simple makefile to link together two object files foo.o and bar.o might look like:

program: foo.o bar.o

Each filename (before the colon) is called a target. You can make a specific target by executing

$ make target

make is smart enough to use the first rule in the Makefile as the default action, so:

$ ls
Makefile  foo.c  bar.c
$ make
cc bar.c -c -o bar.o
cc foo.c -c -o foo.o
cc foo.o bar.o -o program

See the CompilingHowto for more info on the steps required to turn source code into an executable.

You can get your Makefiles made automagically for you using AutoTools.


Dynamic updating

Occasionally you might want to specify something special to happen, for a specific file. This can be done by providing some rules to build that target. This is done indented, on the next line after the dependencies are listed. Our sample make file again:

program: foo.o bar.o

bar.c:
	echo 'char *builddate="' `date` '";' >bar.c

Note that the line that begins "echo" must be indented by one tab. If this isn't done, make(1) will abort with a weird error message like "missing separator". The echo line makes a one-line C file with a variable called "builddate", set to the current date and time. This is a useful thing for your program if you want to know when this particular version was compiled. (Not that this is the only way, or in fact the best way, to get this information, but it's a good example.)

Running this would produce:

$ make
echo 'char *builddate="' `date` '"' >bar.c
cc    -c -o bar.o bar.c
cc    -c -o foo.o foo.c
cc foo.o bar.o -o program

Phony targets

You can have "phony" targets -- targets which don't actually create a file, but do something. These are created like normal targets; for instance, to add an "all" target to our makefile we'd add (probably at the top, so it becomes the default target):

all: foo

This rule won't run if there is a file called "all" in the directory (if someone was silly enough to create one somehow). So we tell make(1) that this is a phony target which should always be rebuilt, using the special target .PHONY. We can add to our Makefile:

.PHONY: all

To add a clean target is fairly simple too, add:

clean:
	rm -f bar.o bar.c foo.o foo.c

and add clean to the list of phony targets:

.PHONY: all clean

Selective building

Why use a makefile, instead of a script to rebuild everything from scratch?

If you have a rule that reads

objectfile.o: foo.c foo.h bar.c bar.h Makefile

then make(1) will check the last modification date of objectfile.o against the last modification date of all the files that follow it (foo.c, foo.h, bar.c, bar.h and the Makefile itself). If none of these things have changed, then it won't recompile objectfile.o.

Build lines like this with careful reference to #includes in your source - if your foo.h #includes bar.h, it has to be on the Makefile line - otherwise, changes to bar.h won't cause a recompile of objectfile.o and you might get confused as to why your constants aren't what you thought they should be.

Or, you could have make determine all your header file dependencies for you! If foo.h #includes bar.h, and bar.h #includes another.h, which #includes etc.h, it could very quickly become difficult to keep track of it all. Not to mention it may result in huge dependency lines! Instead, you can have a header file as a target and list its #included files as its dependencies. Then use the 'touch' command to update the timestamp. For example, if foo.c #includes foo.h, and both foo.h and bar.c #include bar.h, we could use this Makefile:

executable: foo.o bar.o
	$(CC) foo.o bar.o -o executable

foo.o: foo.c foo.h Makefile
bar.o: bar.c bar.h Makefile

foo.h: bar.h
	touch foo.h

bar.h:

So if you edit bar.h to change some constants or function definitions, make will see that foo.h needs to be updated and 'touch' it. Then it will know it must also update foo.o (in addition to bar.o) since foo.h appears new. This way each target only lists files that it is directly dependent on. Let make figure out the rest -- that's what it's supposed to do!

Or you could decide that mindless drone work is a waste of time and just use makedepend to spare yourself the hassle. --AristotlePagaltzis
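For reference, a minimal makedepend setup might look like this (a sketch; SOURCES is assumed to list your .c files, and makedepend appends the generated dependency lines to the Makefile itself):

depend:
	makedepend -- $(CFLAGS) -- $(SOURCES)

.PHONY: depend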

You should consider that using this touch could affect the configuration management system you are using (e.g. RCS or CVS) if it goes by the timestamp to determine the need to commit/checkin: you might suddenly have lots of files to commit, or lots of files locked! At the very least the new timestamp will confuse your friends and confound your enemies. On the other hand, makedepend can generate an unreadable and therefore unmaintainable monstrosity, partly because it cites every system dependency (e.g. stdio), and also because, as it recurses through the subdep files, it cites each reference to stdio by the subdeps as if it were a separate dependency. So, depending on the size of your project, how often you have to make major adjustments to the makefiles by hand, and how many headers each file uses, you may want to decide between this touch method (which does keep the dependencies nicely hierarchical) and makedepend. To have it both ways, I believe you could precede the touch with "cp -p foo.h foo.h_preserveDate; touch foo.h", and then in the foo.o rule, after the compile, do "mv foo.h_preserveDate foo.h", which would preserve the original date on the checked-out foo.h. This would still keep the hierarchical structure, which is quite valuable because it eliminates redundancy (maintaining one fact in two distant places is very bad). -- LindaBrock


Makefiles in subdirectories

With larger projects you often have subdirectories with their own Makefile. To allow make to run these Makefiles with the options passed to the parent make, use the $(MAKE) variable, which invokes a second make process to build the Makefile in the subdirectory. To specify the subdirectory, use the -C option of make.

Example Makefile:

all: Documentation/latex/refman.pdf

install: Documentation/latex/refman.pdf
	cp Documentation/latex/refman.pdf Documentation/KeithleyMeter.pdf

Documentation: Doxyfile Makefile src/keithleyMeter.cc hdr/keithleyMeter.h
	# Doesn't use all the options you passed to make
	make clean
	# make the Documentation folder
	/Applications/Doxygen.app/Contents/Resources/doxygen

Documentation/latex/refman.pdf: Documentation
	# Uses the options you passed to make
	$(MAKE) -C Documentation/latex

clean:
	rm -rf Documentation

For a counter-argument against having separate make processes for subdirectories (and instead using makefile fragments with only one make process), see Recursive Make Considered Harmful (PDF).


Rules

The real power from makefiles comes when you want to add your own "rules" for files. If we have a program called "snozzle" that takes a ".snoz" file and produces a ".c" file we can add:

%.c: %.snoz
	snozzle $< -o $@

$< expands to the first dependency, and $@ the target. So, if foo.c is built from foo.snoz we can now:

$ ls
Makefile  foo.snoz
$ make
snozzle foo.snoz -o foo.c
cc -c -o foo.o foo.c
echo 'char *builddate="' `date` '"' >bar.c
cc -c -o bar.o bar.c
cc foo.o bar.o -o foo
rm foo.c

Note that foo.c is removed by make at the end -- make(1) removes intermediate files itself when it's done. Smart, eh?


EnvironmentVariables

The only other major thing left to mention about make is environment variables. It uses $(variable) as an expando. Thus the rule:

%.c: %.snoz
	snozzle $(SNOZFLAGS) $< -o $@

would let you specify the arguments to snozzle. This is useful if you call snozzle in multiple places, but want to be able to make one change to update the flags.

make(1) uses these variables for its compilers. The compiler it uses for compiling C is "CC". You can set the environment variable CC to your own favourite C compiler if you so wish. CFLAGS is used for the flags to the C compiler; thus setting CFLAGS to "-g -Wall" will compile all programs with debugging (-g) and with all warnings enabled (-Wall). Variables can be defined in make using "VARIABLE=value", for example:

CFLAGS=-g -Wall

So, our full make file would become:

CFLAGS=-g -Wall
SNOZFLAGS=--with-extra-xyzzy

all: program

clean:
	rm -f foo.c foo.o bar.c bar.o

.PHONY: clean all

program: foo.o bar.o

bar.c:
	echo 'char *builddate="' `date` '";' >bar.c

%.c: %.snoz
	snozzle $(SNOZFLAGS) $< -o $@
  • CPPFLAGS command line flags to cpp
  • CFLAGS command line flags to cc
  • CXXFLAGS command line flags to c++
  • LDFLAGS command line flags to ld
  • ASFLAGS command line flags to as

If you specify your own command line you will have to explicitly include these variables in it.

You can also check whether an environment variable has been set, and initialise it to something if it has not, e.g.:

DESTDIR ?= /usr/local

will set DESTDIR to /usr/local if it is not already defined.

To append to the environment variables use the += operator:

CFLAGS += -g -Wall

This allows the user to specify system specific optimizations in their shell environment.
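For example, with the CFLAGS += -g -Wall line above in the Makefile, GNU make appends to a value coming from the environment instead of replacing it (the compile line shown is illustrative):

$ CFLAGS=-O2 make
cc -O2 -g -Wall   -c -o foo.o foo.c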

Note : As you may have noticed, make uses $ to identify variables - both environment and defined in the file. To put a literal $ in a makefile, use $$. However, bash also uses $ to identify variables, and will consume the $ when it is passed to whatever program you're running. To therefore pass a literal $ to a program you must use \$$ - note the single \, not double. - OrionEdwards


An example makefile

 1: CXXFLAGS=-g
 2:
 3: sim: car.o road.o sim.o event.o
 4:        g++ $(LDFLAGS) sim.o car.o road.o event.o -lm -o sim
 5:
 6: car.o: car.cc car.h sim.h event.h road.h Makefile
 7: sim.o: sim.cc sim.h car.h road.h event.h Makefile
 8: road.o: road.cc road.h sim.h event.h car.h Makefile
 9: event.o: event.cc event.h sim.h Makefile

This makefile is for a car simulator written in C++. (It was written by Dr. Tony McGregor from The University of Waikato.)

  • Line 1 sets up the variable passed to the C++ compiler, ensuring everything is compiled with debugging info on (-g).
  • Line 3 is the first target in the file, so when you run 'make' it will 'make sim'. sim depends on car.o, road.o etc (targets that are defined on lines 6-9).
  • Line 4 is indented because we want to add extra smarts to the compiling of sim (we want to link in the math library, libm.a); when 'make sim' is executed and the .o's are up to date, that line will be executed.
  • Lines 6-9 are targets for the various object files that will be generated. They say that car.o is built from car.cc, car.h etc. This probably means that car.h somewhere #include's event.h, road.h... Every time you run 'make car.o', it will compare the last modification date on all the files listed against the modification date of car.o. If car.o is newer, it is up to date and no compiling is necessary. Otherwise, make will recompile everything it needs to.

Functions

It is possible to call some predefined functions in makefiles. A full list of them can be found in the manual, of course (http://www.gnu.org/software/make/manual/html_chapter/make_8.html#SEC83).

Perhaps you want to find all the .c files in directory for later use:

SOURCES := $(wildcard *.c)

Given these, maybe you want to know the names of their corresponding .o files:

OBJS := $(patsubst %.c, %.o, $(SOURCES))

You can do things like adding prefixes and suffixes, which comes in handy quite often. For example, you could have at the top of the makefile a variable where you set the libraries to be included:

LIBS := GL SDL stlport

And then use

$(addprefix -l,$(LIBS))

in a later rule to add a -l prefix for every library mentioned in LIBS above.
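A link rule might then use it like this (a sketch; OBJS is assumed to hold your object files):

program: $(OBJS)
	$(CC) $(OBJS) $(addprefix -l,$(LIBS)) -o program

Here $(addprefix -l,$(LIBS)) expands to -lGL -lSDL -lstlport.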

Finding files in multiple directories is a good example of the usage of foreach

DIRS := src obj headers
FILES := $(foreach dir, $(DIRS), $(wildcard $(dir)/*))

Automatic dependency calculation

If you are creating a Makefile for C/C++, gcc can calculate dependency information for you. The quickest way to get this going is to add the -MD flag to your CFLAGS first. You will then need to know the names of the .d files in your makefile. I do something like this:

DEPS := $(patsubst %.o,%.d,$(OBJS))

Then, near the end of the makefile, add:

-include $(DEPS)

It might also help to make a 'deps' target:

deps: $(SOURCES)
	$(CC) -MD -E $(SOURCES) > /dev/null

'-E' tells gcc to stop after preprocessing. When using -E, the processed C file is sent to STDOUT. Therefore to avoid the mess on the screen, send it to /dev/null instead. Using this command all of the *.d files will be made.


Comments

As your Makefile gets longer, you may want to insert comments to explain what the file is supposed to do. Comment lines, which are ignored by make, begin with a '#':

deps: $(SOURCES)
	# the following line makes all of the .d files.
	$(CC) -MD -E $(SOURCES) > /dev/null

You can also put a comment on the same line as another statement:

-include $(DEPS)  # this includes everything in DEPS

Gotchas

Some operating systems use filesystems that are case insensitive (such as Microsoft Windows' FAT, FAT32 and NTFS, and Apple's HFS), and this can cause problems. (This applies if you are using Cygwin on Windows or Darwin on Mac OS X.) In particular, Unix packages often have a file named INSTALL which holds installation instructions, so the command "make install" says:

make: Nothing to be done for install

You can fix this in your make(1) files by adding:

.PHONY: install

This will tell make that the "install" target is a "phony" target, and doesn't actually refer to a file and should always be rebuilt.

If you are on an OS such as FreeBSD you might need to invoke 'gmake' for a GNU compatible make.

Shell variables in Makefiles

There may come a time when you need shell scripting complicated enough to require shell variables in a Makefile. Make has issues with this, since $ is the prefix for make variables too. To escape the $, just use $$, so this:

for e in * ; do echo $e ; done

becomes:

for e in * ; do echo $$e ; done

It's a simple change but I didn't see it written anywhere obvious :)




Requests

I'd like to know something about makedepend and such things. Maybe some links to other or "official" make HOWTOs would be useful as well. Thanks. -- Someone

Dear Someone, Take a look at the make manual, especially section 4.14. Basically 'make depend' is not really needed anymore.


I cannot find info about the meaning of '@AMDEP_TRUE@' variables in a Makefile. At the moment I get the error:

make: AMDEP_TRUE@: command not found
make: *** [arandom.lo] Error 127

thx, FlorianKonnertz

This isn't really anything to do with make. The autoconf/configure machinery that many projects use takes a template file (such as Makefile.in) and uses it to create a makefile. autoconf uses things like @CXXFLAGS@ for its variables, and should replace @...@ vars with something that makes sense to make. If you have a makefile that still has @...@ variables in it, then there is a bug in the package.


I have a question. I have a directory called src. Within this directory, a group publishes designs inside directories:

Release_010405
Release_010505
Release_010605
Release_032304
Release_082903

If there is a file called baja.c inside one of these directories that is newer than baja.o, I want to compile it. I was able to make a list of all the baja.c files within the Release directories using wildcard:

NEWSOURCES = $(wildcard Release_*/baja.c)

However, I don't know how to tell Make which is the latest file. The following grabs the first of the list.

baja.local: $(NEWSOURCES)
	cp $< .

You could try using $? which gives you the names of the prerequisites which are newer than the target. If there can be several of those and you only need the latest, though, you have to do it in the recipe, using shell tools. --AristotlePagaltzis
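A sketch of such a recipe, using ls -t in the shell to sort the newer-than-target prerequisites ($?) by modification time and take the newest:

baja.local: $(NEWSOURCES)
	cp `ls -t $? | head -1` .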

difference between gcc and g++

g++ is simply a script that passes a certain set of command line
arguments to gcc, so g++ uses gcc internally, not the other way around.
It used to be an actual bash script in older versions of gcc, now it's a
binary executable, but it still does the same thing.

What's more, the major difference between using "g++" or "gcc" commands
with C++ programs is in linking. "g++" will automatically link the code
with the C++ runtime library (libstdc++), but you must include it
manually if you use "gcc" or "ld".
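This is easy to demonstrate with a trivial C++ source file (hello.cpp here stands for any C++ program):

$ g++ hello.cpp -o hello            # links against libstdc++ automatically
$ gcc hello.cpp -o hello            # fails with undefined references to std:: symbols
$ gcc hello.cpp -lstdc++ -o hello   # works once the C++ runtime is linked by hand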

my first open source project on code.google.com

here is the link to my project.
http://code.google.com/p/kernelcpp

I created this project several months ago. Since work kept me busy all day it stalled for a while; luckily I now have some time to keep it going.

The kernelcpp project implements a framework for writing kernel modules in the C++ language. As we know, the Linux kernel is written in C, and all of its modules, including device drivers, must normally be written in C as well. This work helps you write kernel modules, such as device drivers, in C++ easily.

The main benefit is a reference for writing private code in the Linux kernel. Embedded system developers can also reuse their previous C++ code in the Linux kernel without too much change, and the framework can serve as the basis of a kernel-space platform for a product.

Because kernel compilation and the C++ working environment are both compiler related, I chose GCC as the supported compiler.

The first working environment will be the x86 platform. I will implement other platforms' code after it is fully working.

The implementation will proceed step by step. It may support only part of C++, and may or may not follow this sequence:

1. C++ bare bones
2. global objects
3. static variables
4. pure virtual functions
5. new / delete (see the sketch after this list)
6. STL (not supported)
7. RTTI (not supported? this would need an imported library)
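As a rough illustration of item 5, C++ allocation can be routed to the kernel allocator. This is a hypothetical sketch, not the project's actual code; kmalloc, kfree and GFP_KERNEL are the standard kernel primitives:

// Hypothetical sketch: back C++ new/delete with the kernel allocator.
#include <linux/slab.h>    /* kmalloc, kfree, GFP_KERNEL */

void *operator new(size_t size)
{
	return kmalloc(size, GFP_KERNEL);
}

void *operator new[](size_t size)
{
	return kmalloc(size, GFP_KERNEL);
}

void operator delete(void *p)
{
	kfree(p);    /* kfree(NULL) is a safe no-op */
}

void operator delete[](void *p)
{
	kfree(p);
}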


Platform: ARM, ABI , EABI ??

Tuesday, August 04, 2009

Mixing C and C++ Code in the Same Program

http://developers.sun.com/solaris/articles/mixing.html

By Stephen Clamage, Sun Microsystems, Sun ONE Studio Solaris Tools Development Engineering  
The C++ language provides mechanisms for mixing code that is compiled by compatible C and C++ compilers in the same program. You can experience varying degrees of success as you port such code to different platforms and compilers. This article shows how to solve common problems that arise when you mix C and C++ code, and highlights the areas where you might run into portability issues. In all cases we show what is needed when using Sun C and C++ compilers.
Contents
 
Using Compatible Compilers
Accessing C Code From Within C++ Source
Accessing C++ Code From Within C Source
Mixing IOstream and C Standard I/O
Working with Pointers to Functions
Working with C++ Exceptions
Linking the Program
 
 
Using Compatible Compilers

The first requirement for mixing code is that the C and C++ compilers you are using must be compatible. They must, for example, define basic types such as int, float or pointer in the same way. The Solaris Operating System (Solaris OS) specifies the Application Binary Interface (ABI) of C programs, which includes information about basic types and how functions are called. Any useful compiler for the Solaris OS must follow this ABI.

Sun C and C++ compilers follow the Solaris OS ABI and are compatible. Third-party C compilers for the Solaris OS usually also follow the ABI. Any C compiler that is compatible with the Sun C compiler is also compatible with the Sun C++ compiler.

The C runtime library used by your C compiler must also be compatible with the C++ compiler. C++ includes the standard C runtime library as a subset, with a few differences. If the C++ compiler provides its own versions of the C headers, the versions of those headers used by the C compiler must be compatible.

Sun C and C++ compilers use compatible headers, and use the same C runtime library. They are fully compatible.

 
Accessing C Code From Within C++ Source

The C++ language provides a "linkage specification" with which you declare that a function or object follows the program linkage conventions for a supported language. The default linkage for objects and functions is C++. All C++ compilers also support C linkage, for some compatible C compiler.

When you need to access a function compiled with C linkage (for example, a function compiled by the C compiler), declare the function to have C linkage. Even though most C++ compilers do not have different linkage for C and C++ data objects, you should declare C data objects to have C linkage in C++ code. With the exception of the pointer-to-function type, types do not have C or C++ linkage.

 

Declaring Linkage Specifications
Use one of the following notations to declare that an object or function has the linkage of language language_name:

extern "language_name" declaration ; extern "language_name" { declaration ; declaration ; ... }       
 

The first notation indicates that the declaration (or definition) that immediately follows has the linkage of language_name. The second notation indicates that everything between the curly braces has the linkage of language_name, unless declared otherwise. Notice that you do not use a semicolon after the closing curly brace in the second notation.

You can nest linkage specifications, but they do not create a scope. Consider the following example:

extern "C" {     void f();             // C linkage     extern "C++" {         void g();         // C++ linkage         extern "C" void h(); // C linkage         void g2();        // C++ linkage     }     extern "C++" void k();// C++ linkage     void m();             // C linkage }       
 

All the functions above are in the same global scope, despite the nested linkage specifiers.

 

Including C Headers in C++ Code
If you want to use a C library with its own defining header that was intended for C compilers, you can include the header in extern "C" brackets:

extern "C" {     #include "header.h" }       
 
Warning: Do not use this technique for system headers on the Solaris OS. The Solaris headers, and all the headers that come with Sun C and C++ compilers, are already configured for use with C and C++ compilers. You can invalidate declarations in the Solaris headers if you specify a linkage.
 
 

Creating Mixed-Language Headers
If you want to make a header suitable for both C and C++ compilers, you could put all the declarations inside extern "C" brackets, but the C compiler does not recognize the syntax. Every C++ compiler predefines the macro __cplusplus, so you can use that macro to guard the C++ syntax extensions:

#ifdef __cplusplus
extern "C" {
#endif

/* ... declarations shared by C and C++ ... */

#ifdef __cplusplus
}
#endif