Monday, July 27, 2009

Timeline: 40 Years Of Unix

1969

AT&T-owned Bell Laboratories withdraws from development of Multics, a pioneering but overly complicated time-sharing system. Some important principles in Multics were to be carried over into Unix.

Ken Thompson at Bell Labs writes the first version of an as-yet-unnamed operating system in assembly language for a DEC PDP-7 minicomputer.

1970

Thompson's operating system is named Unics, for Uniplexed Information and Computing Service, and as a pun on "emasculated Multics." (The name would later be mysteriously changed to Unix.)

1971

Unix moves to the new DEC PDP-11 minicomputer.

The first edition of the Unix Programmer's Manual, written by Thompson and Dennis Ritchie, is published.

1972

Ritchie develops the C programming language.

1973

Unix matures. The "pipe" is added to Unix; this mechanism for sharing information between two programs will influence operating systems for decades. Unix is rewritten from assembler into C.

1974

"The UNIX Timesharing System," by Ritchie and Thompson, appears in the monthly journal of the Association for Computing Machinery. The article produces the first big demand for Unix.

1976

Bell Labs programmer Mike Lesk develops UUCP (Unix-to-Unix Copy Program) for the network transfer of files, e-mail and Usenet content.

1977

Unix is ported to non-DEC hardware, including the IBM 360.

1978

Bill Joy, a graduate student at UC Berkeley, sends out copies of the first Berkeley Software Distribution (1BSD), essentially Bell Labs' Unix v6 with some add-ons. BSD becomes a rival Unix branch to AT&T's Unix; its variants and eventual descendants include FreeBSD, NetBSD, OpenBSD, DEC Ultrix, SunOS, NeXTstep/OpenStep and Mac OS X.

1980

4BSD, with DARPA sponsorship, becomes the first version of Unix to incorporate TCP/IP.

1982

Bill Joy co-founds Sun Microsystems to produce the Unix-based Sun workstation.

1983

AT&T releases the first version of the influential Unix System V, which would later become the basis for IBM's AIX and Hewlett-Packard's HP-UX.

1984

X/Open Co., a European consortium of computer makers, is formed to standardize Unix in the X/Open Portability Guide.

1985

AT&T publishes the System V Interface Definition, an attempt to set a standard for how Unix works.

1986

Rick Rashid and colleagues at Carnegie Mellon University create the first version of Mach, a replacement kernel for BSD Unix.

1987

AT&T Bell Labs and Sun Microsystems announce plans to co-develop a system to unify the two major Unix branches.

Andrew Tanenbaum writes Minix, an open-source Unix clone for use in computer science classrooms.

1988

The "Unix Wars" are under way. In response to the AT&T/Sun partnership, rival Unix vendors including DEC, HP and IBM form the Open Software Foundation (OSF) to develop open Unix standards. AT&T and its partners then form their own standards group, Unix International.

The IEEE publishes Posix (Portable Operating System Interface for Unix), a set of standards for Unix interfaces.

1989

Unix System Labs, an AT&T Bell Labs subsidiary, releases System V Release 4 (SVR4), its collaboration with Sun that unifies System V, BSD, SunOS and Xenix.

1990

The OSF releases its SVR4 competitor, OSF/1, which is based on Mach and BSD.

1991

Sun announces Solaris, an operating system based on SVR4.

Linus Torvalds writes Linux, an open-source OS kernel inspired by Minix.

1992

The Linux kernel is combined with GNU to create the free GNU/Linux operating system, which many refer to as simply "Linux."

1993

AT&T sells its subsidiary Unix System Laboratories and all Unix rights to Novell. Later that year, Novell transfers the Unix trademark to the X/Open group.

Microsoft introduces Windows NT, a powerful, 32-bit multiprocessor operating system. Fear of NT spurs true Unix-standardization efforts.

1996

X/Open merges with the OSF to form The Open Group.

1999

Thompson and Ritchie receive the National Medal of Technology from President Clinton.

2002

The Open Group announces Version 3 of the Single Unix Specification.

Sources: A Quarter Century of UNIX, by Peter H. Salus; Microsoft; AT&T; The Open Group; Wikipedia and other sources

Sunday, July 19, 2009

Ksplice gives Linux users 88% of kernel updates without rebooting

Have you ever wondered why some updates or installs require a reboot and others don't? The main reason involves kernel-level (core) services running in memory: either the update alters them with new data that can't be "squeezed into" their existing footprint, or they are attached to multiple separate processes that cannot be accounted for without a reboot. Ksplice has figured out a way around that issue in the majority of cases.

A recent examination of Linux kernel updates suggests that 88% of those which today fall into the "must reboot" category, because of the types of programs they affect, could be converted into rebootless form using Ksplice.

The Ksplice website includes a How It Works page. It explains that the lifecycle of Linux bugs (with Ksplice) operates like this:

1. A dangerous bug or security hole is discovered in Linux.
2. Linux developers create a fix or patch which corrects the problem, but may require a reboot.
3. Ksplice software analyzes the fix, and if possible creates an update “image” which can be implemented on your system without rebooting.
4. The update manager then sees either the Ksplice update, or the regular Linux kernel patch (if it could not be made into a rebootless version), and installs it.

This ability comes from analyzing the object code used on your system before the patch is applied and comparing it to the object code of the update. Memory and variable locations can then be identified in both the pre- and post-patch versions. Ksplice runs a kind of "difference utility" across the two to see exactly what has changed, allowing a full inspection of the update to determine whether a rebootless version can be created.
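As a loose illustration of that pre/post comparison (not Ksplice's actual analysis, which operates on compiled kernel objects), a byte-level diff between two small binaries shows how localized a typical fix can be:

```shell
# Create two tiny "object code" stand-ins that differ in a couple of bytes.
printf 'ABCDEFGH' > pre.bin
printf 'ABXDEFYH' > post.bin

# cmp -l lists every differing byte: offset, old value, new value (octal).
cmp -l pre.bin post.bin

# Count how many bytes changed; a small, localized delta is the kind of
# change that lends itself to in-place patching.
CHANGED=$(cmp -l pre.bin post.bin | wc -l)
echo "$CHANGED bytes differ"
# -> 2 bytes differ
```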

If a rebootless version is possible, it creates the image which, when applied, maps the new memory locations as needed, and installs the new compiled code as needed. If it’s not possible, then the update is distributed through the normal Linux update mechanisms, and a reboot is required after applying.

Minimal Interruption

Ksplice says the system is disabled for 0.7 milliseconds while the update is applied, which for most applications is an acceptable amount of downtime, especially compared with a hard reboot.

As mentioned, 88% of Linux kernel patches that require a reboot today would not require one with Ksplice. The remaining 12% fall into the category of something expanding: the update's new data structures have grown in size and physically cannot be squeezed into the amount of memory allocated for the previous version's structures.

A Ksplice Uptrack service is available today for Ubuntu 9.04 (Jaunty), which according to the website, provides near 100% uptime and “rebootless updates”. See also their full brochure (PDF 320KB).

Linux Only

This technology is only for Linux at the current time. No features like this are available for Windows. The technology does require a kernel patch, as Ksplice itself must be integrated into the kernel. The installation software (.deb package) handles this for you.

See Ars Technica

Rick’s Opinion
This technique would allow enterprise-level Linux installations a greater percentage of uptime. Many service providers strive for what they call “five 9s” of uptime, which is 99.999% of the time, which over the course of a year means the system would only be down for a grand total of 5m 15s. Some organizations strive for six 9s, which means it would only be down for 31 seconds.
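The arithmetic behind those uptime figures can be checked in a couple of lines of shell:

```shell
# Seconds in a (non-leap) year
YEAR=$((365 * 24 * 60 * 60))   # 31,536,000

# Allowed downtime at "five 9s" (99.999% uptime) is 0.001% of the year:
awk -v y="$YEAR" 'BEGIN { printf "five 9s: %.0f seconds/year\n", y * 0.00001 }'
# -> five 9s: 315 seconds/year  (about 5m 15s)

# And at "six 9s" (99.9999% uptime):
awk -v y="$YEAR" 'BEGIN { printf "six 9s: %.1f seconds/year\n", y * 0.000001 }'
# -> six 9s: 31.5 seconds/year
```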

Having to reboot for only 12% of kernel updates, which don't occur that often anyway (a few per month), would mean much longer uptime for anyone who leaves a machine on 24/7.

Ubuntu desktop Linux users would also see benefits, since they would not have to reboot nearly as often during the course of the day, which is typically when the Update Manager says "Oh, here's a host of 15 updates to install," though only rarely do those updates require a reboot.

This kind of tool would allow most kernel patches to be rolled out more frequently without interrupting anybody's system. That means faster turnaround on security holes and bug fixes, without disrupting people's daily routines. What could be finer?

Monday, July 13, 2009

Limit the CPU usage of an application (process) - cpulimit

cpulimit is a simple program that attempts to limit the cpu usage of a process (expressed in percentage, not in cpu time). This is useful to control batch jobs, when you don't want them to eat too much cpu. It does not act on the nice value or other scheduling priority stuff, but on the real cpu usage. Also, it is able to adapt itself to the overall system load, dynamically and quickly.

Installation:
Download the latest stable version of cpulimit, then extract the source and compile with make:

tar zxf cpulimit-xxx.tar.gz
cd cpulimit-xxx
make

The executable is named cpulimit. You may want to copy it to /usr/bin.

Usage:
Limit the process 'bigloop' by executable name to 40% CPU:

cpulimit --exe bigloop --limit 40
cpulimit --exe /usr/local/bin/bigloop --limit 40

Limit a process by PID to 55% CPU:

cpulimit --pid 2960 --limit 55

cpulimit should run at least as the same user that runs the controlled process, but it is much better to run cpulimit as root, in order to have a higher priority and more precise control.

Note:
If your machine has one processor, you can limit the percentage from 0% to 100%: setting 50%, for example, means your process cannot use more than 500 ms of CPU time each second. If your machine has four processors, the percentage may vary from 0% to 400%, so setting the limit to 200% means using no more than half of the available power. In any case, the percentage is the same as what you see when you run top.
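A quick way to work out the limit for "half the machine" on any box, using the same per-core accounting that top (and cpulimit) uses:

```shell
# Each core adds 100% to the budget, exactly as in top.
CORES=$(nproc)
TOTAL=$((CORES * 100))

# Cap a job at half of the machine's total power:
HALF=$((TOTAL / 2))
echo "cores=$CORES  total=${TOTAL}%  limit for half the machine=${HALF}%"

# On a 4-core machine HALF is 200, so the command would be:
#   cpulimit --pid 2960 --limit 200
```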

Wednesday, July 8, 2009

Chrome OS

An innocuous posting on Google's official blog last night has sent huge waves throughout the IT community today. In that post, Google has announced the next battle in the war for operating system dominance has begun. And Linux will be their weapon of choice.

The Google blog has often been the launch point for major news from the Mountain View, CA company, and this piece of information was no exception: Google plans to release a new Chrome Operating System, touted as an extension of their Google Chrome browser.

Chrome, which was just released nine months ago, has proven to be a popular offering, though not nearly as popular yet as Firefox, the open source browser from the Mozilla Project. According to a recent Net Applications survey, Firefox held 22.1% of the average daily market share at the end of June, Safari 9.0% and Chrome 2.0%, while Microsoft's Internet Explorer has slipped to 65.6% of the browser market.

The new Chrome OS "will initially be targeted at netbooks," according to the post from Sundar Pichai, VP Product Management and Linus Upson, Engineering Director. This will be a natural target platform for the new operating system, which should be ready for consumers in the latter half of 2010.

"Speed, simplicity and security are the key aspects of Google Chrome OS. We're designing the OS to be fast and lightweight, to start up and get you onto the web in a few seconds. The user interface is minimal to stay out of your way, and most of the user experience takes place on the web. And as we did for the Google Chrome browser, we are going back to the basics and completely redesigning the underlying security architecture of the OS so that users don't have to deal with viruses, malware and security updates. It should just work," wrote Pichai and Upson.

The technical details were sparse in the announcement, but some key bits of information were given. The new OS "will run on both x86 as well as ARM chips" and the new Chrome OS will be based on Linux.

"The software architecture is simple--Google Chrome running within a new windowing system on top of a Linux kernel. For application developers, the web is the platform. All web-based applications will automatically work and new applications can be written using your favorite web technologies. And of course, these apps will run not only on Google Chrome OS, but on any standards-based browser on Windows, Mac, and Linux thereby giving developers the largest user base of any platform," according to the announcement. Google plans to release the source code for Chrome OS later this year.

The blog entry was very clear to differentiate this new project from the existing Android project, another Linux-based platform project from Google.

"Android was designed from the beginning to work across a variety of devices from phones to set-top boxes to netbooks. Google Chrome OS is being created for people who spend most of their time on the web, and is being designed to power computers ranging from small netbooks to full-size desktop systems," Pichai and Upson wrote. "While there are areas where Google Chrome OS and Android overlap, we believe choice will drive innovation for the benefit of everyone, including Google."

Reaction from the IT community has been invariably along one theme: that this is the biggest challenge Microsoft has faced to date as the top operating system provider. There is little evidence that this won't be the case, as Google has historically been a strong foil to Microsoft's business strategy.

In the meantime, developers and contributors in the Linux and in other open source communities should be busy, as the blog entry concludes: "We have a lot of work to do, and we're definitely going to need a lot of help from the open source community to accomplish this vision."

Tuesday, July 7, 2009

Linux on the Desktop

Desktop Linux adoption is primarily driven by cost reduction
In a recent online survey of over a thousand IT professionals with experience of desktop Linux deployment in a business context, over 70% of respondents indicated cost reduction as the primary driver for adoption. Ease of securing the desktop and a general lowering of overheads associated with maintenance and support were cited as factors contributing to the benefit.

But deployment is currently limited, and challenges to further adoption frequently exist
The majority of desktop Linux adopters have so far rolled it out to less than 20% of their total PC user base, though the opportunity for more extensive deployment is clearly identified. In order for Linux to reach its full potential in an organization, however, it is necessary to pay particular attention to challenges in the areas of targeting, user acceptance and application compatibility.

Selective deployment based on objective targeting will yield the highest ROI and acceptance
Rolling out Linux to power users, creative staff and highly mobile professionals can represent a challenge from a migration cost, requirements fulfillment and user satisfaction perspective.
However, the needs of transaction workers and general professional users with lighter and more predictable requirements can be met cost-effectively with Linux without running into the same user acceptance issues. With groups such as this typically accounting for a high proportion of the user base, there is a clear opportunity to deploy desktop Linux selectively. Optimization of the desktop estate is therefore likely to be achieved through a mix of Windows and Linux in most situations.

Linux desktop roll out is easier than expected for properly targeted end-user groups
Those with experience are much more likely to regard non-technical users as primary targets for Linux. The message here is that in practice, Linux is easier to deploy to end users than many imagine before they try it. For the majority of application types, including office tools, email clients and browsers, there is a strong consensus that the needs of most users can be met by native Linux equivalents to traditional Windows solutions. Where this is not the case, thin client or browser based delivery and/or one of the various emulation or virtualization options are available.

A focus on usability reflects a maturing of thinking
In line with the acknowledged importance of a good user experience, usability is now the most sought after attribute of a Linux distribution. Together with the emphasis on cost reduction already seen, this suggests a maturing of attitudes in relation to Linux, shifting the previous focus on pure technical considerations to a more balanced view of what really matters in a business context. This observation is significant when reviewing the mainstream relevance of the desktop Linux proposition.

Sunday, July 5, 2009

Query Apache logfiles via SQL

The Apache SQL Analyser (ASQL) is designed to read Apache log files and dynamically convert them to SQLite format so you can analyse them in a more meaningful way. Using the cut, uniq and wc commands, you can parse a log file by hand to figure out how many unique visitors came to your site, but using Apache SQL Analyser is much faster and means that the whole log gets parsed only once. Finding unique addresses is as simple as a SELECT DISTINCT command.
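The by-hand pipeline mentioned above looks like this against a small sample in common log format (the log lines here are made up purely for illustration; the client IP is the first field):

```shell
# A tiny sample access log:
cat > access.log <<'EOF'
10.0.0.1 - - [05/Jul/2009:10:00:00 +0000] "GET / HTTP/1.1" 200 512
10.0.0.2 - - [05/Jul/2009:10:00:01 +0000] "GET /a HTTP/1.1" 200 128
10.0.0.1 - - [05/Jul/2009:10:00:02 +0000] "GET /b HTTP/1.1" 404 64
EOF

# Unique visitors the manual way: first field, deduplicated, counted.
cut -d' ' -f1 access.log | sort -u | wc -l
# -> 2
```

With asql, the equivalent is a single SELECT DISTINCT once the log is loaded, and the log is only parsed once.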

In terms of requirements, you will need only the Perl modules for working with SQLite databases, plus the Term::ReadLine module. On a Debian system you may install both via:

apt-get install libdbd-sqlite3-perl libterm-readline-gnu-perl

Usage
Once installed, either via the package or via the source download, start the shell by typing "asql". Once the shell starts, several commands are available to you; enter help for a complete list. The three most commonly used are: load, select & show

The following sample session demonstrates typical usage of the shell, including the alias command, which may be used to create persistent aliases:

asql v0.6 - type 'help' for help.
asql> load /var/logs/apache/access.log
Loading: /var/logs/apache/access.log
asql> select COUNT(id) FROM logs
46
asql> alias hits SELECT COUNT(id) FROM logs
ALIAS hits SELECT COUNT(id) FROM logs
asql> alias ips SELECT DISTINCT(source) FROM logs;
ALIAS ips SELECT DISTINCT(source) FROM logs;
asql> hits
46
asql> alias
ALIAS hits SELECT COUNT(id) FROM logs
ALIAS ips SELECT DISTINCT(source) FROM logs;

Wednesday, July 1, 2009

How to calculate the CRC checksum and the byte count for file(s)

cksum prints the CRC checksum for each file along with the number of bytes in the file and, unless it is reading from standard input, the file name.

cksum is typically used to ensure that files transferred by unreliable means have not been corrupted, by comparing the cksum output for the received files with the cksum output for the original files (typically given in the distribution).

The CRC algorithm is specified by the POSIX standard. It is not compatible with the BSD or System V sum algorithms and cksum is more robust.

The only options are --help and --version.
An exit status of zero indicates success, and a nonzero value indicates failure.

Example of using cksum:
Create file with following text:

$ echo "Open source is a development method for software that harnesses the power of distributed peer review and transparency of process." > file.txt

$ cksum file.txt
1121778036 130 file.txt

Here cksum calculates a cyclic redundancy check (CRC) of the given file (file.txt), so users can check the integrity of a file and see whether it has been modified. Use your favorite text editor to remove the "." from the end of the sentence, then run cksum again on the same file and observe the difference in the output:

$ cksum file.txt
2131559972 129 file.txt

Another example:
cksum can also be used for checking a bunch of files. First, get the checksums of all the files within the directory:
$ cksum * > /someother/location/cksum.list
The command above generates a checksum list file.
After transferring the files, run cksum on the same set of files to get new checksums, then compare the two lists to figure out whether any files have been tampered with:
$ cksum * > /someother/location/cksum.list-2
$ diff cksum.list cksum.list-2

cksum can also be used for quickly searching for duplicates.
$ cksum *
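That duplicate search works because identical files always produce identical CRC/byte-count pairs. A small pipeline on top of cksum can flag likely duplicates (the demo files below are created purely for illustration):

```shell
# Set up a scratch directory with one duplicated file.
mkdir -p dupdemo && cd dupdemo
echo "hello" > a.txt
echo "hello" > b.txt
echo "world" > c.txt

# cksum prints "CRC bytes name"; after sorting, lines with the same CRC and
# size end up adjacent, so awk can report every repeat of that pair.
cksum * | sort | awk '{ if ($1" "$2 == prev) print "duplicate:", $3; prev = $1" "$2 }'
# -> duplicate: b.txt
```

Matching CRC and size strongly suggests, but does not strictly prove, identical content, so a final byte-for-byte cmp of flagged pairs is a sensible follow-up.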