Wednesday, October 31, 2007

Easy shell scripting

Introduction




Shell scripting can be defined as a group of commands executed in sequence.
Let's start by describing the steps needed to write and execute a shell script:




Step 1: Open the file using an editor (e.g., "vi" or "pico".)




vi Firstshellscript.sh



Step 2: All shell scripts should begin with
"#!/bin/bash" or whatever other shell you prefer. This line is called the
shebang, and although it looks like a comment, it's not: it notifies the
shell of the interpreter to be used for the script. The provided path must
be an absolute one (you can't just use "bash", for example), and the
shebang must be located on the first line of the script without any
preceding space.




Step 3: Write the code that you want to develop. Our
first shell script will be the usual "Hello World" routine, which we'll
place in a file called 'Firstshellscript.sh'.




#!/bin/sh
echo "Hello World"



Step 4: The next step is to make the script
executable by using the "chmod" command.




chmod 744 Firstshellscript.sh



or




chmod +x Firstshellscript.sh



Step 5: Execute the script. This can be done by
entering the name of the script on the command line, preceded by its path.
If it's in the current directory, this is very simple:




bash$ ./Firstshellscript.sh
Hello World



If you want to see the execution step-by-step - which is very useful for
troubleshooting - then execute it with the '-x' ('xtrace') option:




sh -x Firstshellscript.sh
+ echo 'Hello World'
Hello World















To see the contents of a script, you can use the 'cat' command or
simply open the script in any text editor:




bash$ cat Firstshellscript.sh
#!/bin/sh
echo "Hello World"


Comments in a Shell




In shell scripting, any line beginning with # is a comment (the shebang on the first line being the one exception).



# This is a comment line.
# This is another comment line.


You can also have comments that span multiple lines by using a colon
and single quotes:




: 'This is a comment line.

Again, this is a comment line.

My God, this is yet another comment line.'



Note: This will not work if there is a single quote mark within
the quoted contents.
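
If a comment does need to contain a single quote, one common workaround (a sketch using the ':' no-op command with a quoted here-document; the delimiter name is arbitrary) is:

: <<'END_COMMENT'
This is a comment line.
It's safe to use a single quote here.
END_COMMENT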



Variables



As you may or may not know, variables are the most significant part of
any programming language, be it Perl, C, or shell scripting. In the shell,
variables are classified as either system variables or user-defined
variables.



System Variables




System variables are defined and kept in the environment of the parent
shell
(the shell from which your script is launched.) They are also
called environment variables. These variable names consist of capital
letters, and can be seen by executing the 'env' or 'set' command. Examples of system
variables are PWD, HOME, USER, etc. The values of these system variables can
be displayed individually by "echo"ing the system variables. E.g.,
echo $HOME will display the value stored in the system variable
HOME.




When setting a system variable, be sure to use the "export" command to make
it available to the child shells (any shells that are spawned from
the current one, including scripts):




bash$ SCRIPT_PATH=/home/blessen/shellscript
bash$ export SCRIPT_PATH



Modern shells also allow doing all this in one pass:




bash$ export SCRIPT_PATH=/home/blessen/shellscript
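
To confirm that a child shell really inherits the exported variable, you could try something like this (a quick sketch):

bash$ export SCRIPT_PATH=/home/blessen/shellscript
bash$ sh -c 'echo $SCRIPT_PATH'
/home/blessen/shellscript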



User-Defined Variables



These are the variables that are normally used in scripting - ones that
you don't want or need to make available to other programs. Their names
cannot start with numbers, and are written using lower case letters and
underscores by convention - e.g. 'define_tempval'.



When we assign a value to a variable, we write the variable name
followed by '=' which is immediately followed by the value, e.g.,
define_tempval=blessen (note that there must not be any spaces
around the equals sign.) Now, to use or display the value in
define_tempval, we have to use the echo command
and precede the variable name with a '$' sign,
i.e.:




bash$ echo $define_tempval
blessen



The following script sets a variable named "username"
and displays its content when executed.




#!/bin/sh

username=blessen
echo "The username is $username"


Commandline Arguments



These are variables that contain the arguments to a script when it is
run. These variables are accessed using $1, $2, ... $n, where $1 is the first
command-line argument, $2 the second, etc. Arguments are delimited by spaces. $0
is the name of the script. The variable $# holds the number of
command-line arguments supplied. Older shells could only refer to the first
nine arguments directly ($1 through $9); modern ones allow practically
unlimited arguments (e.g., ${10} and beyond).




Consider a script that will take two command-line arguments and display
them. We'll call it 'commandline.sh':




#!/bin/sh

echo "The first variable is $1"
echo "The second variable is $2"



When I execute 'commandline.sh' with command-line arguments like "blessen"
and "lijoe", the output looks like this:




bash$ ./commandline.sh blessen lijoe
The first variable is blessen
The second variable is lijoe
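
To see $0 and $# in action as well, a small variation on the script (just a sketch) might look like this:

#!/bin/sh

echo "This script is $0"
echo "It received $# arguments"

Run with the same two arguments, it would report the script's name and the count 2.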


Exit status variable



This variable tells us if the last command executed was successful or
not. It is represented by $?. A value of 0 means that the command was
successful. Any other number means that the command was unsuccessful
(although a few programs such as 'mail' use a non-zero return to indicate
status rather than failure.) Thus, it is very useful in scripting.



To test this, create a file named "test" by running 'touch test'.
Then, "display" the content of the file:




bash$ cat test



Then, check the value of $?.




bash$ echo $?
0



The value is zero because the command was successful. Now try running 'cat'
on a file that isn't there:




bash$ cat xyz1
bash$ echo $?
1



The value 1 shows that the above command was unsuccessful.
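
Any command's status can be checked the same way; for instance, a sketch using 'grep' (the username and file are just examples):

bash$ grep "blessen" /etc/passwd > /dev/null
bash$ echo $?
0

A match gives 0; no match would give a non-zero status ('grep' uses 1 for "not found").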




Scope of a Variable



I am sure most programmers have learned (and probably worked with)
variables and the concept of scope (that is, a definition of
where a variable has meaning.) In shell programming, we also use the scope
of a variable for various programming tasks - although this is very rarely
necessary, it can be a useful tool. In the shell, there are two types of
scope: global and local. Local variables are defined by using a "local" tag
preceding the variable name when it is defined; all other variables, except
for those associated with function arguments, are global, and thus
accessible from anywhere within the script. The script below demonstrates
the differing scopes of a local variable and a global one:




#!/bin/sh

display()
{
local local_var=100
global_var=blessen
echo "local variable is $local_var"
echo "global variable is $global_var"
}

echo "======================"
display
echo "=======outside ========"
echo "local variable outside function is $local_var"
echo "global variable outside function is $global_var"


Running the above produces the following output:




======================
local variable is 100
global variable is blessen
=======outside ========
local variable outside function is
global variable outside function is blessen


Note the absence of any value for the local variable outside the
function.




Input and Output in Shell Scripting



For accepting input from the keyboard, we use read. This command
will read values typed from the keyboard, and assign each to the variable
specified for it.




read <variable_name>



For output, we use the echo command.




echo "statement to be displayed"



Arithmetic Operations in Shell Scripting



Like other scripting languages, shell scripting also allows us to use
arithmetic operations such as addition, subtraction, multiplication, and
division. To use these, one uses the 'expr' command; e.g.,
"expr a + b" means 'add a and b'.




e.g.:




sum=`expr 12 + 20`



Similar syntax can be used for subtraction, division, and
multiplication. There is another way to handle arithmetic operations;
enclose the variables and the equation inside a square-bracket expression starting
with a "$" sign. The syntax is




$[expression operation statement]



e.g.:




echo $[12 + 10]


[ Note that this syntax is not universal; e.g., it
will fail in the Korn shell. The '$((...))' syntax is more shell-agnostic;
better yet, on the general principle of "let the shell do what it does best
and leave the rest to the standard toolkit", use a calculator program such
as 'bc' or 'dc' and command substitution. Also, note that shell arithmetic
is integer-only, while the above two methods have no such problem. -- Ben ]
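
For illustration, here are sketches of the two alternatives mentioned in the note (the 'scale' setting for 'bc' controls the number of decimal places):

sum=$((12 + 20))                        # POSIX arithmetic expansion, integer-only
average=`echo "scale=2; 32 / 5" | bc`   # 'bc' handles fractional results
echo "$sum $average"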



Conditional Statements




Let's have some fun with a conditional statement like "if condition".
Most of the time, we shell programmers have situations where we have to
compare two variables, and then execute certain statements depending on
the truth or falsity of the condition. So, in such cases, we
have to use an "if" statement. The syntax is shown below:




if [ conditional statement ]
then
... Any commands/statements ...
fi



The script cited below will prompt for a username, and if
the user name is "blessen", will display a message showing that I
have successfully logged in. Otherwise it will display
the message "wrong username".




#!/bin/sh

echo "Enter your username:"
read username

if [ "$username" = "blessen" ]
then
echo 'Success!!! You are now logged in.'
else
echo 'Sorry, wrong username.'
fi


Remember to always enclose the variable being tested in double quotes;
not doing so will cause your script to fail due to incorrect syntax when
the variable is empty. Also, the square brackets (which are an alias for
the 'test' command) must have a space following the opening bracket and
preceding the closing one.




Variable Comparison




In shell scripting we can perform variable comparison.
If the values of the variables being compared are numerical, then you have
to use these operators:




-eq Equal to

-ne Not Equal to

-lt Less than

-le Less than or equal to

-gt Greater than

-ge Greater than or equal to




If they are strings, then you have to
use these operators:




= Equal to

!= Not Equal to

< First string sorts before second

> First string sorts after second
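
A short sketch combining both kinds of comparison (note that '<' and '>' must be escaped, e.g. '\<', inside single brackets, or the shell will treat them as redirections):

#!/bin/sh

num=5
name="blessen"

if [ "$num" -lt 10 ]
then
    echo "$num is less than 10"
fi

if [ "$name" != "root" ]
then
    echo "You are not root"
fi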



Loops



The "for" Loop




The most commonly used loop is the "for" loop. In shell scripting,
there are two types: one that is similar to C's "for"
loop, and an iterator (list processing) loop.




Syntax for the first type of "for" loop (again, this type is only available
in modern shells):




for ((initialization; condition; increment/decrement))
do
...statements...
done



Example:




#!/bin/bash

for (( i=1; i <= 10; i++ ))
do
echo $i
done


This will produce a list of numbers from 1 to 10. The syntax for the
second, more widely-available, type of "for" loop is:




for <variable> in <list>
do
...statements...
done



This script will read the contents of '/etc/group'
and display each line, one at a time:




#!/bin/sh

count=0
for i in `cat /etc/group`
do
count=`expr "$count" + 1`
echo "Line $count is being displayed"
echo $i
done

echo "End of file"



Another example of the "for" loop uses "seq" to
generate a sequence:




#!/bin/sh

for i in `seq 1 5`
do
echo $i
done


The "while" Loop



The "while" loop is another useful loop used in all programming
languages; it will continue to execute until the condition specified
becomes false.




while [ condition ]
do
...statement...
done


The following script assigns the value "1" to the variable num and
adds one to the value of num each time it goes around the loop, as
long as the value of num is less than 5.





#!/bin/sh

num=1

while [ $num -lt 5 ]; do num=$[$num + 1]; echo $num; done
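
The same loop is easier to read when spread over several lines:

#!/bin/sh

num=1

while [ $num -lt 5 ]
do
    num=$[$num + 1]
    echo $num
done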













Select and Case Statement




Similar to the "switch/case" construct in C programming, the combination of
"select" and "case" provides shell programmers with the same features.
The "select" statement is not part of the "case" statement, but
I've put the two of them together to illustrate how both can be used
in programming.




Syntax of select:




select <variable> in <list>
do
...statements...
done



Syntax of case:




case $<variable> in
<option1>) statements ;;
<option2>) statements ;;
*) echo "Sorry, wrong option" ;;
esac



The example below will explain the usage of select and case together,
and display options involving a machine's services needing to be restarted.
When the user selects a particular option, the script starts the
corresponding service.




#!/bin/bash

echo "***********************"
select opt in apache named sendmail
do
case $opt in
apache) /etc/rc.d/init.d/httpd restart;;
named) /etc/rc.d/init.d/named restart;;
sendmail) /etc/rc.d/init.d/sendmail restart;;
*) echo "Nothing will be restarted"
esac
echo "***********************"

# If this break is not here, then we won't get a shell prompt.
break

done


[ Rather than using an explicit 'break' statement -
which is not useful if you want to execute more than one of the presented
options - it is much better to include 'Quit' as the last option in the
select list, along with a matching case statement. -- Ben ]
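
A sketch of the 'Quit' approach Ben describes, reusing the service commands from the example above:

#!/bin/bash

select opt in apache named sendmail Quit
do
    case $opt in
        apache) /etc/rc.d/init.d/httpd restart;;
        named) /etc/rc.d/init.d/named restart;;
        sendmail) /etc/rc.d/init.d/sendmail restart;;
        Quit) break;;
        *) echo "Sorry, wrong option";;
    esac
done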



Functions




In the modern world where all programmers use the OOP model for
programming, even we shell programmers aren't far behind. We too can break
our code into small chunks called functions, and call them by name in
the main program. This approach helps in debugging, code re-usability,
etc.




Syntax for "function" is:




<name of function> ()
{ # start of function
statements
} # end of function


Functions are invoked by citing their names in the main program,
optionally followed by arguments. For example:





#!/bin/sh

sumcalc ()
{
sum=$[$1 + $2]
}

echo "Enter the first number:"
read num1
echo "Enter the second number:"
read num2

sumcalc $num1 $num2

echo "Output from function sumcalc: $sum"


Debugging Shell Scripts



Now and then, we need to debug our programs. To do so, we use the '-x'
and '-v' options of the shell. The '-v' option produces verbose output. The
'-x' option will expand each simple command, "for" command, "case" command,
"select" command, or arithmetic "for" command, displaying the expanded
value of PS4, followed by the command and its expanded arguments or
associated word list. Try them in that order - they can be very helpful
when you can't figure out the location of a problem in your script.
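
For example, using the 'Firstshellscript.sh' from earlier:

bash -v Firstshellscript.sh    # print each line as it is read
bash -x Firstshellscript.sh    # trace each command after expansion

The two options can also be combined, as in 'bash -xv scriptname'.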

Sunday, October 21, 2007

Six Wonderful Google Games To Keep You Entertained!

Toogle Search - Bill Gates - When you make a search on Toogle, it fetches the first image from Google image search and converts the picture into a colored ASCII file made up of only the search terms.

Google Mirror - elgoog - This site is like a mirror reflection of Google. All the text is displayed in reverse order and inclined to the right of the page, just like the Arabic language. Remember that queries also have to be written backward.

Gwigle - What Am I Googling? - A very addictive game where you are shown the Google search results page and you then have to reverse guess the search query. The game has various levels and can keep you busy for a long time. The accompanying tips will help you become a better googler. [Thanks, Ionut]

Guess The Google - At the start of this Google game, a grid of 20 image thumbnails appears, each of which matches one search keyword. You get 20 seconds to guess the search keyword, but you can make as many guesses as you want during that time.

Googlewhack - A Googlewhack is a Google search query consisting of two words - both in the dictionary, and without quotation marks - that returns a single result. The search will list 'Results 1-1 of 1'. Googlewhacking is the pastime activity of finding such a result. A person attempting to find Googlewhacks is known as a Googlewhacker. [Whack Stack]

World War on Google Maps - Online players (2-25) randomly receive a set of countries with troop hitpoints based on real-world population data. To play: attack neutral and enemy countries in an effort to take over the world. You have a 20% chance of receiving more troops when you overtake an enemy country. [via Slashdot]

Google Maps Flight Simulator - Nothing so advanced as the Microsoft Flight Simulator, but this Google computer game lets you fly a small plane over any landscape created from a compilation of Google Maps images. You can use the keyboard arrow keys to change flying direction, bank, and dive. Space lets you fire, while A/Z vary the flying speed.

Guess the Place - You are shown a picture and need to find out which country, state or city is being shown by looking at parts of Google Maps, or Flickr images of the place.

Saturday, October 20, 2007

10 Killer Freebies for Your Pocket PC

Many PDA users never venture beyond basic calendar and contact management, perhaps thinking that's all the devices are good for. That's a shame, because the modern Pocket PC (that is, a PDA running Microsoft's Windows Mobile operating system) can do more than you ever imagined, from reading e-books to making VoIP phone calls to streaming TV shows from your PC.

And you can do all that without spending a dime on extra software. Here are 10 of my favorite Pocket PC freebies.

1. ADB Idea Outliner
If you live and die by the outline, you'll love ADB Idea Outliner. It provides a traditional tree-style format for organizing your plans, tasks, and ideas, but you don't have to stop with text: The program also lets you add sketches, voice notes, and file attachments to your outlines, which can be exported in Pocket Access format. Idea Outliner is an excellent substitute for the anemic Windows Mobile Tasks app.

2. Skype for Pocket PC
Hey, is that a phone in your pocket? It is if you're carrying a Wi-Fi-connected Pocket PC with Skype installed. The PDA version of the mega-popular voice-over-IP app works surprisingly well, enabling you to make calls to other Skype users or any landline/mobile phone. It also offers group chat, a photo-enhanced address book, and other desktop-like features. Just awesome.

3. Agile Messenger
Available for Windows Mobile PDAs and smartphones, Agile Messenger offers instant messaging on the run. I'll confess: IM-ing on a PDA is not my idea of fun, if only because entering text is so slow and awkward. But for those times when I need real-time communication, it's the only way to go. Agile Messenger supports AOL, ICQ, MSN, and Yahoo chat services (what, no Gtalk or Jabber?).

4. Audiopod
So you've just finished watching the latest episode of Battlestar Galactica and you're wishing you could watch it again while listening to producer Ronald D. Moore's podcast. With Audiopod you can download podcasts directly to your Pocket PC and listen to them offline. This program just came out, so I haven't had a chance to try it yet--but it definitely looks like a winner.

5. Avvenu
Avvenu enables you to access your home or office PC via your Web-connected PDA or smartphone. The utility itself runs in Windows; you use your handheld's Web browser to establish a link with your PC. Once connected, you can browse the files on your hard drive, download them to your PDA, and share them with other users. You can even stream MP3s. Avvenu could come in mighty handy if you're on the road and realize you forgot an important document.

6. eReader
Most books I read these days, I read on my PDA. eReader is vastly superior to the Microsoft Reader app that comes with some Pocket PCs, with features like bookmarks, notes, auto-scroll, and fine control over the look and layout of book text. As for the books themselves, eReader.com is home to thousands of mainstream titles, from Dan Brown to Stephen King. They're not free, of course, though you can find lots of compatible public-domain titles at sites like MemoWare. And don't forget the very best thing about e-books on your PDA: You can read in bed without disturbing your spouse.

7. Kevtris
Before Bejeweled, before Zuma, there was Tetris--and it's just as fun and addictive as you remember. Kevtris is an attractive Tetris clone that's ideal for those times when you have five minutes to kill, like in line at the post office or waiting for the dentist. Free Pocket PC games are few and far between; this one's a gem.

8. Magic Button 2.0
Pocket PCs are notoriously bad at memory management. For instance, when you exit a program, it may not actually shut down. This can lead to seriously sluggish performance. There are dozens of utilities designed to help you manually close stubborn apps, but most are shareware. Magic Button 2.0 is a freebie. Once installed, it takes just two taps to "close all" or "close all but active" apps.

9. Orb
Orb is actually a Windows application that lets you stream audio and video from one PC to another--or, in this case, to a Pocket PC. Just point your handheld's Web browser to your "My Orb" address and you can listen to your music library, stream videos, and even watch live TV (if you're connected to a Media Center PC or one with a TV tuner). Video can be a little choppy, especially if your Wi-Fi connection is on the slow side, but this is still a way-cool way to leverage your Pocket PC.

10. PocketMusic
How to put this delicately... Windows Media Player 10 Mobile, um, bites. PocketMusic offers a vastly superior interface, including support for the 10,000 or so Winamp 2.x skins available online. It sports a 10-band graphic equalizer with loads of presets, and it provides a playlist editor. The freeware version plays only MP3s, however; if you want to play other audio files (including Audible, Ogg, and WMA), you'll need the $19.95 PocketMusic Bundle.

What are your favorite Pocket PC apps? Let us know in the comments. And if you're a Palm user, stay tuned for a similar roundup next week.

Wednesday, October 17, 2007

Explore the Linux memory model

The latest studies show that about 40,000 people are switching to Linux from Windows every day. From this, it is clear that Linux is becoming more popular by the day, and the main reason is that Linux is open source. Because of its increasing popularity, I thought it would be a good idea to study the Linux memory model, so I searched the net for something related to it and finally found a guided introduction to the Linux® memory model. With the help of this guide, we can learn the fundamentals of how memory is constructed and managed. The guide includes an examination of the segment control unit and the paging models, as well as a detailed look at the physical memory zone.

Understanding the memory models used in Linux is the first step to grasping Linux design and implementation on a grander scale, so this gives you an introductory-level tour of Linux memory models and management. Linux uses the monolithic approach that defines a set of primitives or system calls to implement operating system services such as process management, concurrency, and memory management in several modules that run in supervisor mode. And although Linux maintains the segment control unit model as a symbolic representation for compatibility purposes, it uses this model at a minimal level.

The main issues that relate to memory management are:

1.Virtual memory management, a logical layer between application memory requests and physical memory.
2.Physical memory management.
3.Kernel virtual memory management/kernel memory allocator, a component that tries to satisfy the requests for memory. The request can be from within the kernel or from a user.
4.Virtual address space management.
5.Swapping and caching.

This article can help you understand the Linux internals from a memory-management perspective within the operating system by addressing the following:

1.The segment control unit model, in general, and specifically for Linux
2.The paging model, in general, and specifically for Linux
3.The physical details of the memory zone

This article does not detail how the memory is managed by the Linux kernel, but the information on the overall memory model and how it is addressed should give you a framework for learning more. This article focuses on the x86 architecture, but you can use the material in this article with other hardware implementations.

If you are ready for the deep study, I won't waste your time: please click here to go to the guide.

Tuesday, October 16, 2007

Ext2 Installable File System For Windows

What's unique about this software?

It provides Windows NT4.0/2000/XP/2003 with full access to Linux Ext2 volumes (read access and write access). This may be useful if you have installed both Windows and Linux as a dual boot environment on your computer.

The "Ext2 Installable File System for Windows" software is freeware.

If you currently have Windows running and you realize that you need some files for your work which you have stored on an Ext2 volume of your Linux installation, you no longer have to shut down Windows and boot Linux!

Furthermore, Windows will now be able to handle floppy disks which have been formatted with an Ext2 file system.

Linux Ext3 volumes can also be accessed.




It installs a pure kernel mode file system driver, Ext2fs.sys, which actually extends the Windows NT/2000/XP/2003 operating system to include the Ext2 file system. Since it executes on the same software layer of the Windows NT operating system core as all of the native file system drivers of Windows (for instance NTFS, FASTFAT, or CDFS for Joliet/ISO CD-ROMs), all applications can access Ext2 volumes directly. Ext2 volumes get drive letters (for instance G:). Files and directories of an Ext2 volume appear in the file dialogs of all applications. There is no need to copy files from or to Ext2 volumes in order to work with them.

Features


Detailed list of features of the file system driver Ext2fs.sys:
1.Supports Windows NT4.0, Windows 2000, Windows XP and Windows 2003 (x86 processors only).
2.All operations you would expect: Reading and writing files, listing directories, creating, renaming, moving and deleting files or directories, querying and modifying the volume's label.
3.Files larger than 4 GBytes.
4.Paging files are supported. (A paging file is a file "pagefile.sys", which Windows swaps virtual memory to.) Users may create paging files on Ext2 volumes via NT's control panel.
5.Specific functions of the I/O subsystem of NT: Byte range locks, notification of changes of directories, oplocks (which are required by the NT LAN manager for sharing files via SMB).

The file system driver Ext2fs.sys caches file data and the file system's metadata, such as directories and all the on-disk structures of the Ext2 file system. (It uses the file cache of the Windows NT operating system.) This makes it fast. The level of sophistication of the Ext2 file system driver's implementation is indeed comparable to Windows NT's native file system drivers.

The "Ext2 Installable File System" software package is distributed as a single executable solution, complete with all of the features. It is a setup wizard which installs and configures the Ext2 file system driver. If you wish to deinstall the software, select "Add/remove Software" from the Control Panel.

Furthermore, "IFS Drives" is installed at the computer's control panel, which allows you to assign drive letters to Ext2 volumes.

Saturday, October 13, 2007

Amazing linux anime pics

While I was browsing through Flickr, I came across some cute, funny Linux anime pics.
I thought it would be a good idea to share them, so I posted them here. The picture was uploaded to Flickr by dnoway.




some other interesting posts:
8 must have Opera Widgets for bloggers and web designers
Top 8 Widget engines for the Linux platform
Top 10 killer apps for Linux Nerds

Friday, October 12, 2007

Top 10 killer apps for Linux Nerds

Linux was once the "little engine that could" of the software market. Linux spent its early years in the back of the proverbial caboose behind the behemoths Windows and Macintosh. However, it wasn't too long before devoted Linux-users developed an array of programs and applications comparable to Microsoft or Macintosh applications, putting Linux on the fast-track to becoming an open-source super star.



Linux users have just as many programs and applications available as other users, and these programs often rival their Windows and Macintosh compatible cousins. Developers have created a robust and dynamic network of applications designed to make Linux an increasingly prominent operating system in both the home and office. People from all over the world use these programs to enhance their Linux experience, to create or edit data, maintain databases, manipulate graphics, play games or media, browse the internet, and much more. While some of these programs are still in the infancy of development, there are hundreds of fully-developed, top-notch applications that revolutionize the Linux experience. Here are just 10 of them:

1.Firefox

Firefox is the King Midas of all Open Source software. It has made the jump from fringe-focused software to one of the most commonly downloaded web browsers. It is a free, open-source browser that is regarded as one of the most user-friendly and flexible available. Firefox is a cornerstone of open-source, Linux-friendly applications, setting a standard for reliability and universality.

2.Apache

Like Firefox, Apache needs little introduction. Apache is the most widely used web server on the Internet. It supports both Perl and PHP and is the life-blood of UNIX-based operating systems. Its many powerful add-ons rival those of any Microsoft product.

3.Freespire

Freespire is poised to overtake Synaptic/Adept as a program that increases the efficiency of software installation on Linux systems. It allows users to choose which proprietary codecs, drivers, and applications are included or installed, and there is no limitation set on that choice.

According to the Freespire community, Freespire is
• Freedom of choice
• Easy yet powerful
• Exceptional "fit and finish"
• Linux for the masses
• An active community
• Worldwide language support
• Be a good citizen of the broader Linux community
• Baseline for Linspire

4.Open Office

Open Office is the free, altruistic cousin of Microsoft Office: it is the program that just keeps giving and giving. It is a phenomenal program without pretension, giving users the ability to create spreadsheets, documents, and other presentations. Open Office also reads and edits MS Office documents. If anything, it is one of the most useful programs for even novice Linux users. Open Office brings familiarity of Microsoft-ish products to the Linux frontier.

Open Office Features:
Writer – a word processor that can be used for anything.
Calc – a useful spreadsheet that allows you to calculate and present data.
Impress – a fast and powerful way to create multimedia presentations.
Draw – allows you to create diagrams, 3D illustrations, and everything in between.
Base – allows users to manipulate databases, modify tables, forms, queries, and reports.
Math – creates mathematical equations using a graphic interface.

5.Konqueror

The appropriately named Konqueror is a program that does it all. Konqueror is a file manager, open source web browser with HTML 4.01 compliance, a universal viewing application, and a heck of a lot more.

Konqueror supports basic file management on local UNIX file systems and is the canvas for all the latest KDE technology. This killer program is capable of embedding read-only viewing components within it to view documents without ever launching another application.


6.iPodLinux

iPodLinux is venturing into porting Linux onto the ubiquitous music monster that is the iPod. So far, the iPodLinux Project has successfully ported a customized uClinux kernel to the iPod and has created an interface affectionately dubbed podzilla. iPodLinux is poised to become one of the coolest applications. Tech-soothsayers predict that within the next couple of years, as iPodLinux evolves, it will become an integral component in portable media programs. As a result, Linux's already strong staying power will be well spoken for.

7.AmaroK

Named after the Amarok album by Mike Oldfield, AmaroK is quickly becoming everyone's new (and much better) iTunes. AmaroK is more than just a music player: it organizes a library of music into folders by artist, genre, and album; it can edit tags attached to most music formats, associate album art, and attach lyrics. But wait - there's more:
AmaroK plays media files in MP3, FLAC, Ogg, WAV, AAC, WMA, and Musepack formats, not to mention nearly 100 more.
AmaroK syncs, plays, retrieves, or uploads music to iPods and other digital music players.
AmaroK displays artist information via Wikipedia
AmaroK plays Podcasts

With a host of other features, including Moodbar functionality and Musicbrainz support, AmaroK may soon conquer the media application realm.

8.Beryl

With Desktop Cube, Animations, Water, Blur, Trailfocus, and Fading windows, Beryl makes for a dynamic desktop experience. Beryl is a combined window manager and composite manager that uses OpenGL for acceleration. It is highly flexible, extensible, and portable.

Beryl uses a flat-file backend with almost no GNOME dependency. It has a custom theme decorator (Emerald) with features added on a daily basis. Best of all, it's maintained by a community.

Desktop Effects Features:

Scale Effect
OS X Expose Like Effect
Live Window Previews
Drag and Drop Support

Enhanced Switcher
Improved Visual Identification
Better Selecting Control

Desktop Cube
See Only What You Want
Visualize Your Workspace

Window Animations

Transparency, Brightness, and Saturation

Gnome Terminal True Transparency

Negative Windows


9.MPlayer

MPlayer surpasses any Windows media player in both quality and performance. MPlayer can read mpg, avi, mov, wav, Real Media, and the latest version of Windows Media Player files. You can even watch television, capture streams from the Internet or a tuner card, and recode them with your favorite codec.

Screem HTML/XML Editor

SCREEM is a web developer's wet dream: its simple interface increases productivity without insulting seasoned coders. Far surpassing Dreamweaver, SCREEM presents raw HTML source in its editor window, which allows developers to learn more without the crutch of a WYSIWYG page display. SCREEM uses a text-based editing system that allows developers to use the markup they want. Screem is also an XML editing package.

10.Deskbar

The DeskbarApplet provides a versatile search interface. Users can type search terms into the deskbar entry panel and are presented with the search results as they type. Deskbar uses a series of plugins to handle the searches and provides a simple interface to manage the plugins.

Linux programs and developers very much remain nestled in a niche community, shunning commercialism in favor of quality. There are literally hundreds of Linux applications, Applets, and programs out there.

Monday, October 08, 2007

Novell Gives openSUSE the (Faster) Boot

Novell Thursday updated OpenSUSE to version 10.3, adding the latest and greatest the open source community has to offer. Perhaps just as importantly, it’s now also faster to get to all the latest and greatest, with what Novell claims is the shortest boot time yet for its community Linux distribution.

The openSUSE 10.3 release includes the 2.6.22.5 Linux kernel, OpenOffice 2.3, and the latest GNOME 2.20 and KDE 3.5.7 desktop GUIs, among the myriad of updated packages.

Though openSUSE 10.3 adds in a lot of updated programs, the project’s contributors took a stab at also making the operating system more efficient. Of particular focus in this release were boot-time enhancements.

“There are now some incredibly impressive speed-ups, with desktops booting in around 24 seconds, or laptops booting in 27 seconds, compared to a 55 second wait in openSUSE 10.2,” developer Francis Giannaros wrote on the openSUSE blog.

Novell has also improved the way that openSUSE users get their packages. openSUSE 10.3 includes new and redesigned modules for its YaST (Yet another Setup Tool) installer, including a revamped network module.

The new release will also make it simpler for users to build customized distributions, with a YaST front-end for KIWI, openSUSE’s application for creating custom system images. KIWI first debuted in January, at the same time Novell announced AutoBuild, which also figures prominently in this release. As one might guess, AutoBuild enables developers to automatically build packages for openSUSE.

Another notable enhancement to the 10.3 release is openSUSE’s “1 click install” feature, which takes advantage of the build service.

“1 click install works directly from your Web browser, without any need to invoke YaST or any other package management tool first, with a single click,” Gerald Pfeifer, director of product management for SUSE Linux told InternetNews.com. “Leveraging this, anyone can set up a Web page with a reference to a set of packages and users can install this very easily.”

The openSUSE 10.2 release emerged in early December 2006 and included the “pirate” 2.6.18.2 kernel. Pfeifer said today’s openSUSE 10.3 release, coming some 11 months after the prior release, appeared on schedule.

Thursday, October 04, 2007

MontaVista, Arm, others to build Linux UMPC platform

A group of seven companies including Mozilla Corp., Arm Ltd. and MontaVista Software Inc. are hoping to grow the market for a relatively new device category that sits in between a smartphone and a laptop.

The companies are collaborating on a Linux-based open-source platform that encompasses chip design, operating system and some applications. They hope that the platform will make it easier for hardware developers to build devices similar to Nokia Corp.'s N800 tablet. That Linux-based device is bigger than a smartphone but smaller than a laptop and includes Wi-Fi but not cellular capabilities.

The group, which also includes Texas Instruments Inc., Samsung Electronics America Inc., Movial Corp. and Marvell Technologies Group Ltd., expects to complete the platform's development in the early part of next year, said Kerry McGuire, director of strategic alliances in Arm's connected mobile computing group. The devices should hit the market in early 2009, she said.

The device category is similar to the ultramobile PC but is based on Linux and not Microsoft Corp. programs, said Jim Ready, CTO and founder of MontaVista.

Devices based on the new platform would weigh less than a laptop, both literally and from the perspective that they might not require as many applications, he said. "You can attach to the Web and do e-mail and browsing without all the baggage of a PC and Windows and Office," he said. "There are Web-based alternatives to all that." For example, instead of Microsoft Word, a user of such a device could access Google Docs through the browser to create a document.

The group is "complimentary" to the LiMo Foundation, said McGuire. LiMo is one of many organizations working on standards and specifications for mobile Linux. Those groups, however, aren't focused on this slightly different device category, said McGuire.

While the platform the companies develop may be similar to the one Nokia uses in the N800, the Arm group is creating a completely open platform that it will share with the open-source community, McGuire said.

The N800 has been available for a couple of years and Nokia has not discussed how many units it has sold. McGuire has high hopes for the category though. By 2010, she expects there will be 90 million of the devices on the market.

The platform will comprise Arm's Debian-based Linux distribution, MontaVista's operating system, a desktop and application environment from Gnome Mobile, a browser from Mozilla, a multimedia player and other components such as integrated hardware management for battery and power savings, a customizable user interface and various options for wireless connectivity.

Wednesday, October 03, 2007

Linux: the big picture

This article gives a brief introduction to Linux, with a sketch of the background history.

History

The history of computer operating systems starts in the 1950s, with simple schemes for running batch programs efficiently, minimizing idle time between programs. A batch program is one that does not interact with the user at all. It reads all its input from a file (possibly a stack of punch cards) and outputs all its output to another file (possibly to a printer). This is how all computers used to work.

Then, in early 1960s, interactive use started to gain ground. Not only interactive use, but having several people use the same computer at the same time, from different terminals. Such systems were called time-sharing systems and were quite a challenge to implement compared to the batch systems.

During the 1960s there were many attempts at building good time-sharing systems. Some of these were university research projects, others were commercial ones. One such project was Multics, which was quite innovative at the time. It had, for example, a hierarchical file system, something taken for granted in modern operating systems.

The Multics project did not, however, progress very well. It took years longer to complete than anticipated and never got a significant share of the operating system market. One of the participants, Bell Labs, withdrew from the project. The Bell Labs people who were involved then made their own operating system and called it Unix.

Unix was originally distributed for free and gained much popularity in universities. Later, it got an implementation of the TCP/IP protocol stack and was adopted as the operating system of choice for early workstations.

By 1990, Unix had a strong position in the server market and was especially strong in universities. Most universities had Unix systems and computer science students were exposed to them. Many of them wanted to run Unix on their own computers as well. Unfortunately, by that time, Unix had become commercial and rather expensive. About the only cheap option was Minix, a limited Unix-like system written by Andrew Tanenbaum for teaching purposes. There was also 386BSD, a precursor of NetBSD, FreeBSD, and OpenBSD, but that wasn't mature yet, and required higher-end hardware than many had at home.

Into this scene came Linux, in October, 1991. Linus Torvalds, the author, had used Unix at the University of Helsinki, and wanted something similar on his PC at home. Since the commercial alternatives were way too expensive, he started out with Minix, but wanted something better and soon started to write his own operating system. After its first release, it soon attracted the attention of several other hackers. While Linux initially was not really useful except as a toy, it soon gathered enough features to be interesting even for people uninterested in operating system development.

Linux itself is only the kernel of an operating system. The kernel is the part that makes all other programs run. It implements multitasking, and manages hardware devices, and generally enables applications to do their thing. All the programs that the user (or system administrator) actually interacts with are run on top of the kernel. Some of these are essential: for example, a command line interpreter (or shell), which is used both interactively and to write shell scripts (corresponding to .BAT files).

Linus did not write these programs himself, and used existing free versions instead. This reduced greatly the amount of work he had to do to get a working environment. In fact, he often changed the kernel to make it easier to get the existing programs to run on Linux, instead of the other way around.

Most of the critically important system software, including the C compiler, came from the Free Software Foundation's GNU project. Started in 1984, the GNU project aims to develop an entire Unix-like operating system that is completely free. To credit them, many people like to refer to a Linux system as a GNU/Linux system. (GNU has their own kernel as well.)

During 1992 and 1993, the Linux kernel gathered all the necessary features it required to work as a replacement for Unix workstations, including TCP/IP networking and a graphical windowing system (the X Window System). Linux also received plenty of industry attention, and several small companies were started to develop and distribute Linux. Dozens of user groups were founded, and the Linux Journal magazine started to appear in early 1994.

Version 1.0 of the Linux kernel was released in March, 1994. Since then, the kernel has gone through many development cycles, each culminating in a stable version. Each development cycle has taken a year or three, and has involved redesigning and rewriting large parts of the kernel to deal with changes in hardware (for example, new ways to connect peripherals, such as USB) and to meet increased speed requirements as people apply Linux to larger and larger systems (or smaller and smaller ones: embedded Linux is becoming a hot topic).

From a marketing and political point of view, after the 1.0 release the next huge step happened in 1997, when Netscape decided to release their web browser as free software (the term 'open source' was created for this). This was the occasion that first brought free software to the attention of the whole computing world. It has taken years of work since then, but free software (whether called that or open source) has become not only generally accepted but also often the preferred choice for many applications.

Social phenomenon

Apart from being a technological feat, Linux is also an interesting social phenomenon. Largely through Linux, the free software movement has broken through to general attention. Along the way, it even got an informal marketing department and brand: open source. It is baffling to many outsiders that something as successful as Linux could be developed by a bunch of unorganized people in their free time.

The major factor here is the availability of all the source code to the system, plus a copyright license that allows modifications to be made and distributed. When the system has many programmers among its users, if they find a problem, they can fairly easily fix it. Additionally, if they think a feature is missing, they can add it themselves. For some reason, that is something programmers like to do, even if they're not paid for it: they have an itch (a need), so they scratch (write the code to fill the need).

It is necessary to have at least one committed developer who puts in lots of effort. After a while, however, once there are enough programmer-users sending small changes and improvements, you get a snowball effect: lots of small changes result in a fairly rapid total development speed, which then attracts more users, some of which will be programmers. This then results in more small changes and improvements sent in by users, and so on.

For operating system development specifically, this large group of programmer-users results in two important types of improvements: bug fixes and device drivers. Operating system code often has bugs that only occur rarely and it can be difficult for the developers to reproduce them. When there are thousands or more users who are also programmers, this results in a very effective testing and debugging army.

Most of the code volume in Linux is device drivers. The core functionality, which implements multitasking and multiuser functionality, is small in comparison. Most device drivers are independent from each other, and only interact with the operating system core via well-defined interfaces. Thus, it is fairly easy to write a new device driver without having to understand the whole complexity of the operating system. This also allows the main developers to concentrate on the core functionality, letting the people who actually have the devices write the device drivers for them.

It would be awkward for any single, centralized group to acquire and test the thousands of different sound cards, Ethernet cards, IDE controllers, motherboards, digital cameras, printers, and so on that Linux supports. The Linux development model is distributed, and spreads the work around quite effectively.

The Linux model is not without problems. When a new device gets on the market, it can take a few months before a Linux programmer is interested enough to write a device driver. Also, some device manufacturers, for whatever reason, do not want to release programming information for their devices, which can prevent a Linux device driver from being written at all. Luckily, with the growing global interest in Linux, such companies are becoming fewer in number.

What it is

Linux is a Unix-like multitasking, multiuser 32 and 64 bit operating system for a variety of hardware platforms and licensed under an open source license. This is a somewhat accurate but rather brief description. I'll spend the rest of this article expounding on it.

Being Unix-like means emulating the Unix operating system interfaces so that programs written for Unix will work for Linux merely by re-compiling. It follows that Linux uses mostly the same abstractions as the Unix system. For example, the way processes are created and controlled is the same in Unix and Linux.

There are a number of other operating systems in active use: from Microsoft's family of Windows versions, through Apple's MacOS to OpenVMS. Linux's creator, Linus Torvalds, chose Unix as the model for Linux partly for its aesthetic appeal to system programmers, partly because of all the operating systems he was familiar with, it was the one he knew best.

The Unix heritage also gives Linux its two most important features: multitasking and multiuser capabilities. Linux, like Unix, was designed from the start to run multiple processes independently of each other. Implementing multitasking well requires attention at every level of the operating system; it is hard to add multitasking to an operating system afterwards. That's why the Windows 95 series and MacOS (before MacOS X) did multitasking somewhat poorly: multitasking was added to an existing operating system, not designed into a new one. That's also why the Windows NT series, MacOS X, and Linux do multitasking so much better.

A good implementation of multitasking requires, among other things, proper memory management. The operating system must use the memory protection support in the processor to protect running programs from each other. Otherwise a buggy program (that is, most any program) may corrupt the memory area of another program, or the operating system itself, causing weird behavior or a total system crash, with likely loss of data and unsaved work.

Supporting many concurrent users is easy after multitasking works. You label each instance of a running program with a particular user and prevent the program from tampering with other users' files.

Portable and scalable

Linux was originally written for an Intel 386 processor, and naturally works on all successive processors. After about three years of development, work began to adapt (or port) Linux to other processor families as well. The first one was the Alpha processor, then developed and sold by the Digital Equipment Corporation. The Alpha was chosen because Digital graciously donated a system to Linus. Soon other porting efforts followed. Today, Linux also runs on Sun SPARC and UltraSPARC, Motorola 68000, PowerPC, PowerPC64, ARM, Hitachi SuperH, IBM S/390, MIPS, HP PA-RISC, Intel IA-64, DEC VAX, AMD x86-64 and CRIS processors. (See kernel.org for details.)

Most of those processors are not very common on people's desks. For example, S/390 is IBM's big mainframe architecture. Here, mainframe means the kind of computer inside of which you can put your desk, rather than the kind that fits on your desk.

Some of those processors are 32 bit, like the Intel 386. Others are 64 bit, such as the Alpha. Supporting such different processors has been good for Linux. It has required designing the system to use proper modularity and good abstractions and this has improved code quality.

The large variety of supported processors also shows off Linux's scalability: it works on everything from very small systems, such as embedded computers, handheld devices, and mobile phones, to very large systems, such as IBM mainframes.

Using clustering technology, such as Beowulf (beowulf.org), Linux even runs on supercomputers. For example, the US Lawrence Livermore National Laboratories bought a cluster with 1920 processors, resulting in one of the five fastest supercomputers in the world with a theoretical peak performance of 9.2 teraFLOPS or 9.2 trillion calculations per second. (LWN article).

Using Linux

The operating system itself is pretty boring to most people. Applications are necessary to actually get things done. Traditionally, Linux applications have been the kinds of applications used with Unix: scientific software, databases, and network services. Also, of course, all the tools programmers want for their craft.

Much of such software seems rather old-fashioned by today's desktop standards. User interfaces are text based, or they might not exist at all. Indeed, most software has usually been non-interactive and has been of the command line, batch processing variety. Since most users have been experts in the application domain, this has been good enough.

Thus, Linux first found corporate employment as a file server, mail server, web server, or firewall. It was a good platform for running a database, with support from all major commercial database manufacturers.

In the past few years, Linux has also become an interesting option on the user-friendly desktop front. The KDE (kde.org) and Gnome (gnome.org) projects develop desktop environments and applications that are easy to learn (as well as effective to use). There are now plenty of desktop applications that people with Windows or MacOS experience will have no difficulty using.

There is even a professional grade office software package. OpenOffice (openoffice.org), based on Sun's StarOffice, is free, fully featured, and file compatible with Microsoft Office. It includes a word processor, spreadsheet, and presentation program, competing with Microsoft's Word, Excel, and Powerpoint.

Linux distributions

To install Linux, you have to choose a Linux distribution. A distribution is the Linux kernel, plus an installation program, plus some set of applications to run on top of it. There are hundreds of Linux distributions, serving different needs.

All distributions use pretty much the same actual software, but they are different in which software they include, which versions they pick (a stable version known to work well or the latest version with all the bells and whistles and bugs), how the software is pre-configured, and how the system is installed and managed. For example, OpenOffice, Mozilla (web browser), KDE and Gnome (desktop environments), and Apache (web server) will all work on all distributions.

Some distributions aim to be general purpose, but most of them are task specific: they are meant for running a firewall, a web kiosk, or meant for users within a particular university or country. Those looking for their first Linux experience can concentrate on the three biggest general purpose distributions: Red Hat, SuSE, and Debian.

The Red Hat and SuSE distributions are produced by companies of the same names. They aim at providing an easy installation procedure and a pleasant desktop experience. They are also good as servers. Both are sold in boxes, with an installation CD and printed manual. Both can also be downloaded via the network.

The Debian distribution is produced by a volunteer organization. Its installation is less easy: you have to answer questions during the installation that the other distributions deduce automatically. Nothing complicated as such, but it requires understanding of, and information about, hardware that most PC users don't want to worry about. On the other hand, after installation, Debian can be upgraded to each new release without re-installing anything.

The easiest way to try out Linux is to use a distribution that works completely off a CD-ROM. This way, you don't have to install anything. You merely download the CD-ROM image from the net and burn it onto a disk, or buy a mass-produced one via the net. Insert the disk in the drive, then reboot. Not having to install anything on the hard disk means you can easily switch between Linux and Windows. Also, since all Linux files are on a read-only CD-ROM, you can't break anything by mistake while you're learning.
The easiest way to try out Linux is to use a distribution that works completely off a CD-ROM. This way, you don't have to install anything. You merely download the CD-ROM image from the net and burn it on a disk, or buy a mass-produced one via the net. Insert disk in drive, then reboot. Not having to install anything on the hard disk means you can easily switch between Linux and Windows. Also, since all Linux files are on a read-only CD-ROM, you can't break anthing by mistake while you're learning.