Tuesday, October 13, 2009
Chrome OS Google download link?
I'm not sure whether this is an official Google Chrome OS download link or not: http://build.chromium.org/buildbot/snapshots/
Monday, October 5, 2009
Backup & Restore MySQL Databases - MySQLDumper
MySQLDumper is a backup and restore management script for MySQL databases, written in PHP and Perl, which offers complete control over the backup process.
MySQLDumper uses its own technique to avoid execution interruption: it reads and saves only a certain number of commands at a time, then calls itself via JavaScript, remembers how far along it was, and resumes from that point.
MySQLDumper can write data directly into a compressed gz file, and the restore script can read that file directly without unpacking it. You can of course use it without compression, but using Gzip saves a sizable amount of bandwidth.
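The compressed-dump idea is the same as piping a plain dump through gzip with standard tools. A rough equivalent using mysqldump is sketched below (illustrative only; this is not MySQLDumper's own code, and "mydb" is a made-up database name):

```shell
# With a live server you would write the dump straight into a compressed file:
#   mysqldump mydb | gzip > mydb.sql.gz
# and restore by streaming it back without unpacking first:
#   gzip -dc mydb.sql.gz | mysql mydb
# The same round trip, demonstrated with a plain-text stand-in for the dump:
echo "CREATE TABLE pets (id INT);" | gzip > dump.sql.gz
gzip -dc dump.sql.gz    # prints the original SQL statement unchanged
```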
Installation & Configuration
* Download the MySQLDumper archive from the project's download page
* Extract the zip file under your web server's root directory: unzip msd1.24RC1.9.zip. This creates a new directory, msd1.24
* Launch your browser and point it to the msd1.24 directory; this opens the installation wizard, which is simple and straightforward
* After a successful installation, point your browser to index.php to open the main application, from which you can take and restore backups
Features:
- MySQLDumper can read dump files from other scripts (for example, phpMyAdmin) via its integrated parser
- Security: MySQLDumper can generate a .htaccess file to protect itself and all of your backup files
- Multipart backups: MySQLDumper can automatically split the dump file if it gets bigger than your chosen size. If you restore a backup and choose the wrong part, it doesn't matter: MySQLDumper will notice and pick the correct start file automatically.
- Automatic error module
- MiniSql: direct access to your MySQL tables. You can delete tables, edit or insert data, and run or save any SQL statement.
- Database overview: look at running processes or even stop them
- Clear file overview: backups of the same database are shown as one entry; click it to see all of the files
- Automatic file deletion: set your own rules for deleting old backups. Specify the age or the number of files after which backups are deleted automatically, to save server webspace.
- Perl cron script: all features of the PHP script are also available in the Perl script, which can be started via a cron job
- Configuration can be set separately for each script (PHP and Perl)
- Before a backup starts, all your parameters are shown again, so you know exactly what you are doing :-)
- Send e-mails with or without your dump file attached; you can set a maximum attachment size, and anything bigger won't be attached
- Send dump files via FTP to another server; this also works with the multipart feature
Wednesday, September 23, 2009
Torvalds Warns Linux Is Getting Bloated
The Linux creator says his job has gotten easier as the development process has improved, but more work needs to be done to improve Linux and reduce bloat.
As the Linux kernel becomes larger and more complex, Linux founder Linus Torvalds says his job is getting easier. Speaking on a panel at the LinuxCon conference, Torvalds told the audience that the kernel development model is working better now than ever.
But Torvalds added that there are still areas for improvement and provided a very pointed comment about the current size of the Linux kernel.
"We're getting bloated, yes it's a problem," Torvalds said. "I'd love to say we have a plan. I mean, sometimes it's a bit sad and we're definitely not the streamlined hyper-efficient kernel that I had envisioned 15 years ago. The kernel is huge and bloated."
"I don't spend all my time just hating people for sending me merge requests that are hard to merge," Torvalds said. "For me, I need to have a happy feeling inside that I know what I'm merging. Whether it works or not is a different issue."
Torvalds said he wanted to know that he was merging code that people actually want to have, and that he needs explanations of what the code is doing in order to get that happy feeling. Now that he has it, he added, he is actually able to do some coding, though not a lot; he currently does approximately two code commits a week.
Torvalds' Motivation
During the panel session, Torvalds was asked by a member of the audience if his motivation for working on Linux has changed over the years. Torvalds responded that indeed his motivation has changed a lot over the years.
"It started out being all about the technology and really twiddling with the hardware and just learning and doing something cool in my basement … well, it wasn't my basement at the time it was my mother's basement," Torvalds said. "Eventually it became somewhat about the community and the fame. Hey, that was great."
Torvalds added that these days his motivation is all about the community, which he defined as working and collaborating together with people.
"I really enjoy arguing, it's a big part of my life are these occasional flame threads that I love getting into and telling people they are idiots," Torvalds said. "All my technical problems were solved so long ago, that I don't even care. I don't do it for my own needs on my machine, I do it because it's interesting and I feel like I'm doing something worthwhile."
He added that whenever the kernel adds a new feature the problem gets worse. That said, he didn't think that features are being added too fast and said that developers are finding bugs quickly.
Sunday, September 13, 2009
Linux Datacenters Virtualize More, See Big TCO Savings
Linux x86 datacenter users are much more likely to use virtualization technology and gain the benefit of significant total-cost-of-ownership savings than Microsoft datacenter users. In some cases, the savings from full virtualization implementations on Linux can approach 60 percent less than similar implementations on Microsoft platforms.
Those were some of the key results of a new whitepaper from the Gabriel Consulting Group, Inc. released to the public Friday. "Virtualization & TCO: Linux vs. Microsoft" stems from Gabriel's 2008 x86 Vendor Preference Survey, which found that Linux-centric customers are implementing virtualization more, in terms of numbers and extent, than Windows-centric users, who are using virtualization less and seeing less benefit from the technology they have implemented.
The whitepaper, currently hosted on IBM's Linux Library, details that 77 percent of self-described predominantly Linux users have virtualized at least some of their x86 systems, compared to 59 percent of predominantly Windows users.
When asked about the extent of virtualization, 41 percent of Linux users have virtualization on more than half their boxes, while 29 percent of Microsoft users reported the same amount of use.
According to Dan Olds, Founder and Principal Analyst of the Gabriel Group, these results came out of a much larger survey of 187 x86 datacenter personnel, where respondents self-selected their operating system preferences. The whitepaper itself detailed that very few of the respondents reported complete homogeneity of operating systems, with most datacenters reporting they carried mixed environments. The survey, which is unsponsored by any Gabriel customer, is an annual survey that has been conducted by the Beaverton, OR-based analyst firm for the last four years.
The paper cites Microsoft's recent history in virtualization as one possible reason for Windows users being behind on virtual implementations. Until recently, Microsoft was unwilling to support any non-Microsoft virtualization engine, which constrained customers. With Hyper-V, that closed stance has changed, but the paper indicates that "Hyper-V is still quite a ways behind both VMWare and Xen in terms of features, functions, and manageability, meaning that die-hard Microsoft standardizers are probably behind the curve in terms of virtualization implementation."
There are other, more quantifiable reasons for the increased amount of Linux virtualization use. Just 56 percent of Windows users believe that virtualization helps to better utilize hardware, while 77 percent of Linux users realize that benefit.
Linux users also seem to fare better with power and space consumption. When asked if they felt that their data center was running out of electrical capacity, only 26 percent of Linux users agreed with that statement, while 44 percent of Windows users were concerned about power. Posed a similar question about floor space, 31 percent of Linux users are looking for more room, while 42 percent of Windows users are feeling cramped.
All of these benefits add up to a lot of money. The whitepaper reports that, depending on how servers are deployed and managed, customers can save as much as 60 percent of TCO when using full virtualization. That's the optimum; average savings are in the 20-30 percent range, the paper reported.
Olds indicated that this was the first year of the survey there were enough questions in this area to formulate virtualization results, which were an eye-opener.
Thursday, September 3, 2009
Google Chrome: Year One
It was one year ago today that Google Chrome was officially launched. In that time, Google has removed the Beta tag and moved from version 0.x to a 4.x dev version.
Google has been putting out releases on the dev side fairly regularly (nearly one a week) since the browser's launch, but the browser is still growing. A year after its official launch, Google Chrome isn't perfect, and there is more work yet to be done.
Among the items that Google promised us a year ago were Linux and Mac versions. Today we've got dev versions for both platforms, but no stable release.
A year ago, I personally was also looking forward to Google add-ons/extensions, which are not yet part of the main Chrome release either. They are coming though, and the dev channel versions have the key infrastructure in place so it's just a matter of time (I'd guess weeks not months).
We also have not yet seen the full integration of Chrome with Google's Apps or cloud efforts. Though again this is coming. Bookmark syncing is now part of the dev-channel browser too.
Tuesday, September 1, 2009
Ten Things You Didn't Know Apache (2.2) Could Do
Apache 2.2 has been out for a while, and just recently, 2.2.13 was released, featuring the usual slate of enhancements and bug fixes. Happily, the migration to 2.2 seems to be proceeding faster than the migration from 1.3 to 2.0 did, and most people, finally, seem to have jettisoned Apache 1.3.
However, it also seems that a lot of folks are completely unaware of some of the cool new things available in 2.2. Sites are so used to Apache just working that most don't think about the new features going into the Web server all the time.
Here, let’s look at some of the more exciting innovations found in 2.2 and perhaps peek at one or two of the more esoteric ones. You may be surprised and amazed by what’s been lying under your nose all this time.
SNI
I realized long ago that leaving the best to last merely ensures that most people won’t make it that far. So, let’s start with the most compelling feature. If you merely read this first page, you’ll still be ahead of the other system administrators in your office.
Since the beginning of time (the beginning of the web, anyways) SSL suffered from a fundamental shortcoming. Simply stated, you had to have one IP address for every new SSL host that you wanted to run. (The exact origin of this limitation isn’t terribly important right here. You can find a number of articles on the subject elsewhere.) But now that we’ve finally arrived in the 21st Century, you can finally run multiple SSL virtual hosts on the same IP address. You can do this with something called Server Name Indication (SNI).
The deal with SSL is that you don’t know what name is being requested until after the certificate — possibly the wrong one — has already been exchanged. With SNI, this is addressed by sending the server name as part of the initial negotiation, so that you get the certificate that goes with the right name.
Apache 2.2.12 contains SNI, and you can now serve multiple SSL hosts off of one IP address. More good news is that every modern browser supports this feature and has for some time, just waiting for more sites to implement it on the server side. The bad news is that the documentation is somewhat behind the implementation, but hopefully that will get resolved real soon now.
At the moment, however, the best documentation for this functionality is in the docs wiki, at http://wiki.apache.org/httpd/NameBasedSSLVHostsWithSNI. The docs wiki is sort of a staging ground for the Apache documentation, so that stuff eventually makes it into the official docs.
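As a sketch of what the wiki describes, two name-based SSL virtual hosts on a single address might look like the following. All hostnames and certificate paths here are invented for illustration, not a tested configuration:

```apache
# Two SSL virtual hosts sharing one IP address, distinguished via SNI.
NameVirtualHost *:443

<VirtualHost *:443>
    ServerName www.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/example.com.key
</VirtualHost>

<VirtualHost *:443>
    ServerName www.example.org
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.org.crt
    SSLCertificateKeyFile /etc/ssl/private/example.org.key
</VirtualHost>
```

Note that a client that doesn't send SNI will be handed the first vhost's certificate, so it pays to list your most important host first.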
mod_substitute
A frequently asked question on the various Apache support forums is how to modify the content within a page as it is being served out to the client. For example, if you’re proxying to a back-end server and that server has URLs embedded in the pages that point to that back-end server, the end-user on the Internet, being unable to reach that back-end server directly, simply experiences a bunch of broken links. So what’s to be done? In the past there wasn’t much that could be done, short of using a third-party module called mod_proxy_html, which was written specifically for this situation. You can read more about it, as well as more about the situation it attempts to resolve, at http://apache.webthing.com/mod_proxy_html/.
But there is a larger class of problems at hand. What if you just want to modify something in content that’s being served to the end users? Perhaps you’re running a third-party application and don’t have access to the source to customize it, but you want to make some modifications to the output that it produces.
Another module, also available at webthing.com, mod_line_edit (http://apache.webthing.com/mod_line_edit/), allows you to make arbitrary modifications, using sed-like syntax, to the outgoing HTTP response body.
Apache 2.2 introduced mod_substitute, which includes some of the functionality of both of those modules and allows you to modify the response being sent to the web client using regular expressions. While it doesn't do anything that mod_line_edit or Basant Kukreja's mod_sed doesn't, it has the advantage of being part of the Apache 2.2 distribution, so it's one less thing to acquire.
To use mod_substitute, you must know enough about regular expressions to express your desired change. For example, if you are proxying a back-end server images.local and want to replace that hostname in URLs with its external hostname, you would do the following:
AddOutputFilterByType SUBSTITUTE text/html
Substitute s/images.local/images.mysite.com/i
In this case, the i on the end indicates that the substitution should happen in a case-insensitive fashion. The AddOutputFilterByType directive specifies what kind of files the substitution should affect. You don’t want to do substitutions on images or PDF files, for example, as it will corrupt them and result in garbage.
Place these directives in a <Location> or <Directory> block where you want them to be in effect, or in a .htaccess file if you don't have access to the main server configuration file.
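Putting the pieces together, a reverse-proxy block for the hypothetical images.local back end might look like this (the /shop/ path and both hostnames are placeholders carried over from the example above):

```apache
# Proxy /shop/ to a back-end host and rewrite its hostname in outgoing HTML.
<Location /shop/>
    ProxyPass        http://images.local/shop/
    ProxyPassReverse http://images.local/shop/
    AddOutputFilterByType SUBSTITUTE text/html
    Substitute s/images.local/images.mysite.com/i
</Location>
```

The two halves are complementary: ProxyPassReverse fixes the back-end hostname in redirect headers, while Substitute fixes it inside the HTML body itself.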
Graceful Stop
This may not seem like a big deal, but folks have been asking for it for a long time. Apache 2.2 adds the graceful-stop option, to stop the server … um … gracefully.
Usually, when you stop, or restart Apache, it kills all the existing client connections as part of the process. This results in angry end-users, and your phone rings, and your boss yells at you. Yelling is generally to be avoided.
So, a long, long time ago, the graceful restart option was added, which allows you to restart the server without abruptly terminating in-process client connections.
$ httpd -k graceful
But there are times when you need to shut down a server entirely, and in that case, too, the clients are abruptly dropped. For example, you may want to take a server out of a load-balanced configuration, but you don’t want existing client sessions to be terminated. So what do you do?
Well, with Apache 2.2, a new option stops the server but allows ongoing connections — say, if someone is executing a long-running script or downloading a large file — to complete before the child processes are killed.
$ httpd -k graceful-stop
This has the direct result of your phone ringing less when you’re doing server maintenance. Highly recommended.
mod_proxy_balancer
A lot has been written about mod_proxy_balancer, yet every time I mention it, someone is surprised that this is an included feature of the Apache product. So, here again, mod_proxy_balancer.
Apache 2.2 comes with a front-end proxy that load balances between an arbitrary number of back-end servers. It also maintains sticky sessions; that is, once a client is routed to a particular server, you can force that client to always go back to that server, so that their sessions are not interrupted. It does traffic-based load balancing. It does hot spares: a server can be automatically rolled into the rotation if one of the other ones dies. It has a Web-based management console where you can remove servers from the rotation or modify a server’s priority in the rotation.
So, it’s really a full-featured load balancing proxy. And it’s free, and included in your Apache 2.2 server.
To get started with mod_proxy_balancer, define your pool, or “cluster” of hosts to be balanced:
<Proxy balancer://mycluster>
BalancerMember http://192.168.1.50:80
BalancerMember http://192.168.1.51:80
</Proxy>
Then, tell your server to proxy requests through to those servers:
ProxyPass /test balancer://mycluster/
If that seems deceptively easy … well, it actually is that easy, but you can also configure a raft of other options on top of that, including those mentioned above.
As with the other features I’ve mentioned, I’m not going to reproduce the documentation here. Instead, take a look at the examples at http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html
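As a hedged sketch of a few of those extra options, sticky sessions and the Web-based management console could be layered on like this (the node addresses, route names, and the JSESSIONID cookie are all invented for illustration):

```apache
# Balancer pool with per-node routes for session affinity.
<Proxy balancer://mycluster>
    BalancerMember http://192.168.1.50:80 route=node1
    BalancerMember http://192.168.1.51:80 route=node2
</Proxy>
# Pin each client to the node named in its session cookie.
ProxyPass /test balancer://mycluster/ stickysession=JSESSIONID

# Management console, restricted to the local network.
<Location /balancer-manager>
    SetHandler balancer-manager
    Order deny,allow
    Deny from all
    Allow from 192.168.1
</Location>
```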
httpd -M
Apache loads modules in two different ways. You can compile them into the server binary when you first install Apache, or you can load them dynamically at startup time using the LoadModule directive. Almost every Apache installation has some of each kind. Until recently, if you wanted to know what modules you had loaded, you had to look in two different places. You'd run httpd -l to get a list of the compiled-in type:
$ httpd -l
Compiled in modules:
core.c
prefork.c
http_core.c
mod_so.c
Then you'd have to go look in your server configuration file and see what modules had LoadModule directives. This is actually harder than it sounds, because a lot of third-party distributions of Apache put each LoadModule directive in a separate file, with names like php.load and mod_perl.conf and so on.
In another minor change with a big impact, Apache 2.2 adds the -M flag, which allows you to list all of the modules that are loaded, both static and shared:
$ httpd -M
Password:
Loaded Modules:
core_module (static)
mpm_prefork_module (static)
http_module (static)
so_module (static)
authn_file_module (shared)
...
php5_module (shared)
pony_module (shared)
Each module indicates whether it is static or shared, and now you know for certain what modules were successfully loaded and which ones you forgot.
And, yes, that’s mod_pony. Seriously.
httxt2dbm
If you’re like the rest of us, you have, over the years, accumulated lengthy lists of RewriteRule and Redirect directives to map old URLs to the new ones. These stack up, and, over time, can cause a great deal of confusion about where your content actually lives, not to mention a big performance hit when all the rules have to be processed every time a request is made to your server.
One way to consolidate these redirects is with RewriteMap, a directive in mod_rewrite that allows you to define an external map of rewrite rules. This may be as simple as a text file that lists the mappings, or as complicated as an external script or program, or a database query, that determines the rules.
So, for example, if you have a bunch of old URLs that you want to redirect to new ones (a very typical case), or perhaps just friendly, easier-to-remember URLs that you want to redirect to the actual ugly back-end ones, then you might have a RewriteMap file like this, called dogs.txt:
/collie /dogs.php?id=875
/doberman /dogs.php?id=12
/daschund /dogs.php?id=99
/siamese /cats.php?id=84
Then, you would use this file in a RewriteMap:
RewriteMap dogmap txt:/path/to/file/dogs.txt
And use the RewriteMap in a RewriteRule:
RewriteRule ^/dogs/(.*) ${dogmap:$1}
The trouble is that this is a plain text file, and, as such, unindexed and therefore slow. Every time you request a URI, mod_rewrite looks through this list, one item at a time, until it finds the one that it needs. And the more items you add to the list, the longer each lookup takes.
For years, the documentation suggested that you could convert the text file to a dbm, and offered a Perl script for doing so. Unfortunately, the script didn’t work particularly well, and, if you could get it to work, there was always the problem of picking the right type of dbm for your particular operating system.
With the 2.2 version, there’s a utility that comes with the server, and is installed alongside the other binaries, that not only converts your text file into a dbm, but correctly selects the same dbm library that your installation of Apache was built with, thus ensuring compatibility.
This script, called httxt2dbm, is used as follows.
httxt2dbm -- Program to Create DBM Files for use by RewriteMap
Usage: httxt2dbm [-v] [-f format] -i SOURCE_TXT -o OUTPUT_DBM
Options:
-v More verbose output
-i Source Text File. If '-', use stdin.
-o Output DBM.
-f DBM Format. If not specified, will use the APR Default.
GDBM for GDBM files (unavailable)
SDBM for SDBM files (available)
DB for berkeley DB files (unavailable)
NDBM for NDBM files (unavailable)
default for the default DBM type
For most of us, the -f option is not particularly useful. Of course, we want it to use the APR default - that is, whatever Apache was built with. If you actually know what the differences are between the various dbm formats, perhaps you have reasons for using a different one, and can do that if you really want to.
$ httxt2dbm -i dogs.txt -o dogs.map
Now, you can modify your RewriteMap directive to use this new file:
RewriteMap dogmap dbm:/path/to/file/dogs.map
Lookups are now performed against the dbm, and so are much faster.
PCRE Zero-Width Assertions
I said I wasn’t going to leave the best to last, but this last one is very cool, and answers one of the most frequently asked questions, although often the folks asking the question wouldn’t think to ask for this particular solution.
The question that tends to get asked often goes something like, “How can I redirect everything except for a particular directory.” For example, requests for anything on this server, I want to redirect over to that other server, except for requests for the images directory.
Now, Apache offers a RedirectMatch directive that allows you to use regular expressions to specify a class of URIs that you want to redirect. Unfortunately, it does not have a negation operator, so you can’t simply say “everything that doesn’t match images.” very easily.
At least until now.
One of the changes with the 2.2 version of the server is that RedirectMatch and all of the other *Match directives now use the Perl Compatible Regular Expression library (PCRE) and so have the full power of the regular expressions that you know and love from your favorite programming language.
One of the cooler of these features is zero-width assertions. Now, I’m not going to go into all the details of what these are. That’s covered very nicely in the tutorial at http://www.regular-expressions.info/lookaround.html. Instead, I’ll give you a specific way that they can be used in Apache to answer this frequently asked, seldom answered question.
RedirectMatch ^/(?!images/)(.*) http://dynamic.myhost.com/$1
This RedirectMatch redirects all URLs to http://dynamic.myhost.com/, unless the URL starts with /images/. This regular expression syntax is called a negative lookahead, and allows you to assert that a string does not contain a particular thing.
It makes me happy when something that I’ve always answered with “you can’t do that” becomes possible, and even easy.
Summary
Apache 2.2 has some great hidden treasures in it that a lot of folks are simply unaware of. 2.4 has even more of them. I can hardly wait.
However, it also seems that a lot of folks are completely unaware of some of the cool new things available in 2.2. Sites are so used to Apache just working; most don’t think about the new features that are going into the Web server all the time.
Here, let’s look at some of the more exciting innovations found in 2.2 and perhaps peek at one or two of the more esoteric ones. You may be surprised and amazed by what’s been lying under your nose all this time.
SNI
I realized long ago that leaving the best to last merely ensures that most people won’t make it that far. So, let’s start with the most compelling feature. If you merely read this first page, you’ll still be ahead of the other system administrators in your office.
Since the beginning of time (the beginning of the web, anyway), SSL has suffered from a fundamental shortcoming: you had to have one IP address for every SSL host that you wanted to run. (The exact origin of this limitation isn’t terribly important right here. You can find a number of articles on the subject elsewhere.) But now that we’ve arrived in the 21st century, you can finally run multiple SSL virtual hosts on the same IP address, using something called Server Name Indication (SNI).
The deal with SSL is that you don’t know what name is being requested until after the certificate — possibly the wrong one — has already been exchanged. With SNI, this is addressed by sending the server name as part of the initial negotiation, so that you get the certificate that goes with the right name.
Apache 2.2.12 contains SNI support, and you can now serve multiple SSL hosts off of one IP address. More good news is that nearly every modern browser supports this feature and has for some time, just waiting for more sites to implement it on the server side. The bad news is that the documentation is somewhat behind the implementation, but hopefully that will get resolved real soon now.
At the moment, however, the best documentation for this functionality is in the docs wiki, at http://wiki.apache.org/httpd/NameBasedSSLVHostsWithSNI. The docs wiki is sort of a staging ground for the Apache documentation, so that stuff eventually makes it into the official docs.
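Boiled down, the wiki recipe looks something like this; a minimal sketch, with hypothetical hostnames, certificate paths, and IP address:

```apache
# Two SSL virtual hosts sharing one IP address via SNI
NameVirtualHost 192.168.1.10:443

<VirtualHost 192.168.1.10:443>
    ServerName www.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/www.example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/www.example.com.key
</VirtualHost>

<VirtualHost 192.168.1.10:443>
    ServerName shop.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/shop.example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/shop.example.com.key
</VirtualHost>

# Optional: reject non-SNI clients instead of silently handing
# them the first virtual host's certificate
SSLStrictSNIVHostCheck on
```

The SSLStrictSNIVHostCheck directive (available in 2.2.12 and later) is optional; without it, clients that don’t send SNI simply get the first matching virtual host.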
mod_substitute
A frequently asked question on the various Apache support forums is how to modify the content within a page as it is being served out to the client. For example, if you’re proxying to a back-end server and that server has URLs embedded in the pages that point to that back-end server, the end-user on the Internet, being unable to reach that back-end server directly, simply experiences a bunch of broken links. So what’s to be done? In the past there wasn’t much that could be done, short of using a third-party module called mod_proxy_html, which was written specifically for this situation. You can read more about it, as well as more about the situation it attempts to resolve, at http://apache.webthing.com/mod_proxy_html/.
But there is a larger class of problems at hand. What if you just want to modify something in content that’s being served to the end users? Perhaps you’re running a third-party application and don’t have access to the source to customize it, but you want to make some modifications to the output that it produces.
Another module, also available at webthing.com, is mod_line_edit (http://apache.webthing.com/mod_line_edit/), which allows you to make arbitrary modifications, using sed-like syntax, to the outgoing HTTP response body.
Apache 2.2 introduced mod_substitute, which includes some of the functionality of both of these modules and allows you to modify the response being sent to the web client, using regular expressions. While it doesn’t do anything that mod_line_edit or Basant Kukreja’s mod_sed don’t do, it has the advantage of being part of the Apache 2.2 distribution, so there’s one less thing to download and install.
To use mod_substitute, you must know enough about regular expressions to express your desired change. For example, if you are proxying a back-end server images.local and want to replace that hostname in URLs with its external hostname, you would do the following:
AddOutputFilterByType SUBSTITUTE text/html
Substitute s/images.local/images.mysite.com/i
In this case, the i on the end indicates that the substitution should happen in a case-insensitive fashion. The AddOutputFilterByType directive specifies what kind of files the substitution should affect. You don’t want to do substitutions on images or PDF files, for example, as it will corrupt them and result in garbage.
Place these directives in a <Location> block corresponding to the content you want to modify, or in the virtual host that proxies the back-end server.
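Put together, a proxied back-end with on-the-fly substitution might look like this; a sketch only, with hypothetical hostnames:

```apache
<Location /images/>
    # Proxy requests through to the internal host
    ProxyPass http://images.local/
    ProxyPassReverse http://images.local/
    # Rewrite the internal hostname in outgoing HTML
    AddOutputFilterByType SUBSTITUTE text/html
    Substitute s/images.local/images.mysite.com/i
</Location>
```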
Graceful Stop
This may not seem like a big deal, but folks have been asking for it for a long time. Apache 2.2 adds the graceful-stop option, to stop the server … um … gracefully.
Usually, when you stop, or restart Apache, it kills all the existing client connections as part of the process. This results in angry end-users, and your phone rings, and your boss yells at you. Yelling is generally to be avoided.
So, a long, long time ago, the graceful restart option was added, which allows you to restart the server without abruptly terminating in-process client connections:
$ httpd -k graceful
But there are times when you need to shut down a server entirely, and in that case, too, the clients are abruptly dropped. For example, you may want to take a server out of a load-balanced configuration, but you don’t want existing client sessions to be terminated. So what do you do?
Well, with Apache 2.2, a new option stops the server but allows ongoing connections — say, if someone is executing a long-running script or downloading a large file — to complete before the child processes are killed.
$ httpd -k graceful-stop
This has the direct result of your phone ringing less when you’re doing server maintenance. Highly recommended.
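One related knob worth knowing about: if you don’t want graceful-stop to wait forever on a stalled download, 2.2 provides a directive to cap the wait.

```apache
# Wait at most 30 seconds for open connections to finish before
# the parent exits; 0 (the default) means wait indefinitely
GracefulShutdownTimeout 30
```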
mod_proxy_balancer
A lot has been written about mod_proxy_balancer, yet every time I mention it, someone is surprised that this is an included feature of the Apache product. So, here again, mod_proxy_balancer.
Apache 2.2 comes with a front-end proxy that load balances between an arbitrary number of back-end servers. It also maintains sticky sessions; that is, once a client is routed to a particular server, you can force that client to always go back to that server, so that their sessions are not interrupted. It does traffic-based load balancing. It does hot spares: a server can be automatically rolled into the rotation if one of the other ones dies. It has a Web-based management console where you can remove servers from the rotation or modify a server’s priority in the rotation.
So, it’s really a full-featured load balancing proxy. And it’s free, and included in your Apache 2.2 server.
To get started with mod_proxy_balancer, define your pool, or “cluster,” of hosts to be balanced:
<Proxy balancer://mycluster>
BalancerMember http://192.168.1.50:80
BalancerMember http://192.168.1.51:80
BalancerMember http://192.168.1.52:80
</Proxy>
Then, tell your server to proxy requests through to those servers:
ProxyPass /test balancer://mycluster/
If that seems deceptively easy … well, it actually is that easy, but you can also configure a raft of other options on top of that, including those mentioned above.
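To give a flavor of those options, here’s a sketch (with hypothetical IPs and cookie name) showing traffic-based balancing, sticky sessions, a hot spare, and the management console:

```apache
<Proxy balancer://mycluster>
    # route= ties each member to a session route for stickiness
    BalancerMember http://192.168.1.50:80 route=node1
    BalancerMember http://192.168.1.51:80 route=node2
    # Hot spare: only used when the members above are down
    BalancerMember http://192.168.1.52:80 status=+H
    # Balance by bytes transferred; pin sessions via the JSESSIONID cookie
    ProxySet lbmethod=bytraffic stickysession=JSESSIONID
</Proxy>

ProxyPass /test balancer://mycluster/

# The web-based management console; restrict access to it!
<Location /balancer-manager>
    SetHandler balancer-manager
    Order deny,allow
    Deny from all
    Allow from 192.168.1.0/24
</Location>
```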
As with the other features I’ve mentioned, I’m not going to reproduce the documentation here. Instead, take a look at the examples at http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html
httpd -M
Apache loads modules in two different ways. You can compile them into the server binary when you first install Apache, or you can load them dynamically at startup time using the LoadModule directive. Almost every Apache installation has some of each kind. Until recently, if you wanted to know what modules you had loaded, you had to look in two different places. You’d run httpd -l to get a list of the compiled-in type:
$ httpd -l
Compiled in modules:
core.c
prefork.c
http_core.c
mod_so.c
Then you’d have to go look in your server configuration file and see which modules had LoadModule directives. This is actually harder than it sounds, because a lot of third-party distributions of Apache put each LoadModule directive in a separate file, with names like php.load and mod_perl.conf and so on.
In another minor change with a big impact, Apache 2.2 adds the -M flag, which allows you to list all of the modules that are loaded, both static and shared:
$ httpd -M
Password:
Loaded Modules:
core_module (static)
mpm_prefork_module (static)
http_module (static)
so_module (static)
authn_file_module (shared)
...
php5_module (shared)
pony_module (shared)
Each module indicates whether it is static or shared, and now you know for certain what modules were successfully loaded and which ones you forgot.
And, yes, that’s mod_pony. Seriously.
httxt2dbm
If you’re like the rest of us, you have, over the years, accumulated lengthy lists of RewriteRule and Redirect directives to map old URLs to the new ones. These stack up, and, over time, can cause a great deal of confusion about where your content actually lives, not to mention a big performance hit when all the rules have to be processed every time a request is made to your server.
One way to consolidate these redirects is with RewriteMap, a directive in mod_rewrite that allows you to define an external map of rewrite rules. This may be as simple as a text file that lists the mappings, or as complicated as an external script or program, or a database query, that determines the rules.
So, for example, if you have a bunch of old URLs that you want to redirect to new ones (a very typical case), or perhaps just friendly, easier-to-remember URLs that you want to redirect to the actual ugly back-end ones, then you might have a RewriteMap file like this, called dogs.txt:
/collie /dogs.php?id=875
/doberman /dogs.php?id=12
/dachshund /dogs.php?id=99
/siamese /cats.php?id=84
Then, you would use this file in a RewriteMap:
RewriteMap dogmap txt:/path/to/file/dogs.txt
And use the RewriteMap in a RewriteRule:
RewriteRule ^/dogs/(.*) ${dogmap:$1}
The trouble is that this is a plain text file, and, as such, unindexed and therefore slow. Every time you request a URI, mod_rewrite looks through this list, one item at a time, until it finds the one that it needs. And the more items you add to the list, the longer each lookup takes.
For years, the documentation suggested that you could convert the text file to a dbm, and offered a Perl script for doing so. Unfortunately, the script didn’t work particularly well, and, if you could get it to work, there was always the problem of picking the right type of dbm for your particular operating system.
With the 2.2 version, there’s a utility that comes with the server, and is installed alongside the other binaries, that not only converts your text file into a dbm, but correctly selects the same dbm library that your installation of Apache was built with, thus ensuring compatibility.
This utility, called httxt2dbm, is used as follows:
httxt2dbm -- Program to Create DBM Files for use by RewriteMap
Usage: httxt2dbm [-v] [-f format] -i SOURCE_TXT -o OUTPUT_DBM
Options:
-v More verbose output
-i Source Text File. If '-', use stdin.
-o Output DBM.
-f DBM Format. If not specified, will use the APR Default.
GDBM for GDBM files (unavailable)
SDBM for SDBM files (available)
DB for berkeley DB files (unavailable)
NDBM for NDBM files (unavailable)
default for the default DBM type
For most of us, the -f option is not particularly useful. Of course, we want it to use the APR default - that is, whatever Apache was built with. If you actually know what the differences are between the various dbm formats, perhaps you have reasons for using a different one, and can do that if you really want to.
$ httxt2dbm -i dogs.txt -o dogs.map
Now, you can modify your RewriteMap directive to use this new file:
RewriteMap dogmap dbm:/path/to/file/dogs.map
Lookups are now performed against the dbm, and so are much faster.
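One more mod_rewrite nicety that pairs well with maps: you can supply a default when a key isn’t found, rather than rewriting to an empty string. A sketch, with a hypothetical fallback page:

```apache
RewriteEngine On
RewriteMap dogmap dbm:/path/to/file/dogs.map

# The ${map:key|default} syntax falls back to /notfound.php
# for any dog that isn't in the map
RewriteRule ^/dogs/(.*) ${dogmap:$1|/notfound.php} [PT]
```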
PCRE Zero-Width Assertions
I said I wasn’t going to leave the best to last, but this last one is very cool, and answers one of the most frequently asked questions, although often the folks asking the question wouldn’t think to ask for this particular solution.
The question that tends to get asked often goes something like, “How can I redirect everything except for a particular directory.” For example, requests for anything on this server, I want to redirect over to that other server, except for requests for the images directory.
Now, Apache offers a RedirectMatch directive that allows you to use regular expressions to specify a class of URIs that you want to redirect. Unfortunately, it does not have a negation operator, so you can’t easily say “everything that doesn’t match images.”
At least until now.
One of the changes with the 2.2 version of the server is that RedirectMatch and all of the other *Match directives now use the Perl Compatible Regular Expression library (PCRE) and so have the full power of the regular expressions that you know and love from your favorite programming language.
One of the cooler of these features is zero-width assertions. Now, I’m not going to go into all the details of what these are. That’s covered very nicely in the tutorial at http://www.regular-expressions.info/lookaround.html. Instead, I’ll give you a specific way that they can be used in Apache to answer this frequently asked, seldom answered question.
RedirectMatch ^/(?!images/)(.*) http://dynamic.myhost.com/$1
This RedirectMatch redirects all URLs to http://dynamic.myhost.com/, unless the URL starts with /images/. This regular expression syntax is called a negative lookahead, and allows you to assert that a string does not contain a particular thing.
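The same trick extends to multiple exclusions via alternation inside the lookahead. A sketch, assuming /images/ and /css/ should stay on this server:

```apache
# Redirect everything except /images/ and /css/ to the dynamic host
RedirectMatch ^/(?!images/|css/)(.*) http://dynamic.myhost.com/$1
```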
It makes me happy when something that I’ve always answered with “you can’t do that” becomes possible, and even easy.
Summary
Apache 2.2 has some great hidden treasures in it that a lot of folks are simply unaware of. 2.4 has even more of them. I can hardly wait.
Sunday, August 23, 2009
FreeBSD 8 Getting New Routing Architecture
Though the open source FreeBSD operating system has changed in many aspects over the last 16 years of its life, one item that has remained relatively static is its underlying network routing architecture.
No more: It's getting an overhaul with the upcoming FreeBSD 8.0 release.
FreeBSD 8.0, due out next month, will include a new routing architecture that takes advantage of parallel processing capabilities. According to its developers, the update will provide FreeBSD 8.0 with a faster, more advanced routing architecture than the legacy one.
It's an important change for FreeBSD, which has emerged as a key open source operating system for networking vendors, with players like Juniper, Coyote Point, Blue Coat and others offering their own network operating systems that are based on FreeBSD.
The new routing architecture was written by Qing Li, senior architect at Blue Coat, as a way to give back to the open source community.
"Blue Coat's ProxySG networking kernel was partially derived from the FreeBSD kernel," Li told InternetNews.com. "Blue Coat is a sponsor of my open source development work, so this is a good way to contribute to the open source community."
Blue Coat's ProxySG is a WAN optimization device that relies on network intelligence to accelerate traffic.
The new routing architecture in FreeBSD 8 is also about optimization, as it reduces data dependencies across networking layers. The end result is a routing architecture that can take better advantage of multi-core, parallel processing CPUs.
"The new routing technology works on both multi-core as well as single-core CPUs," Li said. "The performance gain is most visible in the multi-core situation, though."
But making changes also has important implications for FreeBSD 8.0, since a key goal of the release is ensuring a degree of compatibility with prior releases and the existing software ecosystem.
"Since the rewrite affects fundamental packet processing and the operation of protocols within the networking kernel, I had to ensure regression risk was low and compatibility was high," Li said. "For example, those applications that are part of the ports, which interact with the kernel (e.g. retrieving the routing information, waiting for notification about routing table changes ) will continue to compile and operate semantically correct."
In a technical paper that Li is publishing and presenting today at a conference in Spain, Li explained that the legacy version of the FreeBSD routing architecture actually reduced parallelism on SMP and parallel architectures.
"As a result of the dependency between L2 and L3, the processing through these two layers was single-threaded," Li wrote in his paper. "A common parallel TCP/IP protocol stack design is to allow L2 and higher layer processing to run independently of each other, having each processor managing different protocols. The aforementioned locking contention increased processor stalling and prevented one from benefiting from more advanced hardware platforms."
According to Li, lock contention consumed as much as 47 percent of a CPU's time with the legacy routing architecture, as determined through a test with eight transmitting threads.
"With the new split L2/L3 design, the L2 and L3 references can be cached in the protocol control block for connected sockets or in a flow table for unconnected sockets and forwarding," Li wrote. "Thus we see that very little of the CPU time is now spent in the locking primitives even when there are [eight] transmitting threads."
But what is it called?
While many operating system vendors will tend to brand new or improved technology with interesting names, that's not the case with the new routing architecture in FreeBSD 8.0.
"No, there is no catchy name," Li told InternetNews.com. "On the mailing list, I typically referred to it as 'L2/L3 rewrite' [and] 'new ARP/ND6 rewrite' when answering questions and providing patches."
Sun Microsystems, in contrast, recently pushed out a new networking architecture called Project Crossbow for its OpenSolaris operating system. A key part of Project Crossbow is a virtualization layer for networking interfaces to improve scalability and optimization.
Sun and FreeBSD are hardly strangers, with Sun technology helping out in the FreeBSD 7.1 release earlier this year.
Expect to see similar approaches showing up in FreeBSD, as well.
"Virtualization will be part of my future work," Li said. "As a FreeBSD developer who works for Blue Coat, my areas of focus will continue to be virtualization, TCP optimization and tuning with additional IPv6 support."
Sunday, August 9, 2009
Google Chrome: Meet the New Boss
Google Chrome may not be the perfect Web browser, if there is such a beast, but it’s definitely going to give Firefox and Internet Explorer a run for their money. Even though Chrome is still in developer preview for Linux, it’s already making great strides.
For the past week or so, I’ve been running Google Chrome as my primary browser. Ben Kevan has been making packages for openSUSE for a while, and I finally decided to take the plunge. Initially, I thought I’d take it for a spin and go back to Firefox — which is what usually happens when I try a different browser. This time around, I may be sticking with Chrome for most of my browsing.
Speed, Stability, Extensions: Pick Two
Installing Chrome on openSUSE was a snap — just install an RPM and Chrome is ready to go. At first launch, Chrome will offer to import your settings from other browsers. It sucked in my Firefox options flawlessly, including bookmarks and passwords.
The first thing that I noticed with Chrome is that it’s speedy. Really speedy. Sites seem to load a little faster and the browser user interface (UI) itself seems a little snappier. This seems particularly true when using Google services like GMail and Google Reader, but for the most part holds true across other sites as well. On occasion, though, some Web elements don’t seem to want to work with Chrome at all. For instance, when posting to an older version of WordPress, some of the menus simply do not render or function in Chrome. This is rare, but crops up for me at least once or twice per day.
In addition to being speedy, though, Chrome is also rock-solid stable. Since Chrome on Linux is still in development and not considered a “stable” release, I wasn’t expecting great things in the stability department. After a day of browsing with no crashes, I was impressed. After a week with almost no problems with Chrome, I’m deeply impressed.
The build I used is almost feature complete, although there seems to be no support for printing — which is probably good for the environment, but not so hot when you need to actually print things. There is a context menu item for printing, but nothing happens when I select the option. I assume that the Chrome folks will get around to this one eventually.
Chrome’s UI is a bit non-standard. Instead of the usual set of menus, Chrome just has a couple of icons to the right-hand side of the interface next to the location bar. This works pretty well, though it was a bit odd at first. By default, Chrome shows no “home” button, though this can be enabled in the Options.
The location bar doubles as the search bar, and the standard Ctrl-K shortcut will bring you to the location bar to perform your search. Since I’m used to this shortcut from Firefox, I fell into using it immediately. For users who aren’t, though, I wonder how they would discover the shortcut. There’s no clue in the UI that I could find that would help the user out here. You can find a list online but it’d be nice to have a menu item as well.
In case you’re wondering, yes — you can switch the default search engine. If you prefer to use Yahoo, Bing (still listed as “Live Search” in the option), Wikipedia, or another site, you’re free to do so.
Compared to the other popular browsers, Chrome has a very stripped-down set of features and options. However, I had trouble actually finding any missing features that I couldn’t live without, excepting printing. What I do miss is some of the extensions I use heavily in Firefox, like Xmarks and Evernote. For some reason, I couldn’t get Flash working in Chrome either (even with the --enable-plugins option), though I can’t say I really missed Flash very much.
One thing bugs me about Google Chrome — being bugged to make it the default browser. I’m not quite sure why browser designers feel the necessity to implant nagware into otherwise nifty software. Prompt once, and then let it go, folks. Chrome isn’t alone in this, but it’s still an unnecessary annoyance.
Chrome on Linux is still a work in progress, but it seems good enough to use full-time if you don’t mind reverting to Firefox or another browser occasionally. For the most part, I’ve been able to rely on Chrome as my primary browser and probably will continue using Chrome on my main desktop for the foreseeable future.
For the past week or so, I’ve been running Google Chrome as my primary browser. Ben Kevan has been making packages for openSUSE for a while, and I finally decided to take the plunge. Initially, I thought I’d take it for a spin and go back to Firefox — which is what usually happens when I try a different browser. This time around, I may be sticking with Chrome for most of my browsing.
Speed, Stability, Extensions: Pick Two
Installing Chrome on openSUSE was a snap — just install an RPM and Chrome is ready to go. At first launch, Chrome will offer to import your settings from other browsers. It sucked in my Firefox options flawlessly, including bookmarks and passwords.
The first thing that I noticed with Chrome is that it’s speedy. Really speedy. Sites seem to load a little faster and the browser user interface (UI) itself seems a little snappier. This seems particularly true when using Google services like GMail and Google Reader, but for the most part holds true across other sites as well. On occasion, though, some Web elements don’t seem to want to work with Chrome at all. For instance, posting into an older version of WordPress, some of the menus when rendered in Chrome do not work, period. This is rare, but crops up for me at least once or twice per day.
In addition to being speedy, though, Chrome is also rock-solid stable. Since Chrome on Linux is still in development and not considered a “stable” release, I wasn’t expecting great things in the stability department. After a day of browsing with no crashes, I was impressed. After a week with almost no problems with Chrome, I’m deeply impressed.
The build I used is almost feature complete, although there seems to be no support for printing — which is probably good for the environment, but not so hot when you need to actually print things. There is a context menu item for printing, but nothing happens when I select the option. I assume that the Chrome folks will get around to this one eventually.
Chrome’s UI is a bit non-standard. Instead of the usual set of menus, Chrome just has a couple of icons to the right-hand side of the interface next to the location bar. This works pretty well, though it was a bit odd at first. By default, Chrome shows no “home” button, though this can be enabled in the Options.
The location bar doubles as the search bar, and the standard Ctrl-K shortcut will bring you to the location bar to perform your search. Since I’m used to this shortcut from Firefox, I fell into using it immediately. For users who aren’t, though, I wonder how they would discover the shortcut. There’s no clue in the UI that I could find that would help the user out here. You can find a list online but it’d be nice to have a menu item as well.
In case you’re wondering, yes — you can switch the default search engine. If you prefer to use Yahoo, Bing (still listed as “Live Search” in the option), Wikipedia, or another site, you’re free to do so.
Compared to the other popular browsers, Chrome has a very stripped-down set of features and options. Aside from printing, however, I had trouble actually finding any missing features that I couldn't live without. What I do miss are some of the extensions I use heavily in Firefox, like Xmarks and Evernote. For some reason, I couldn't get Flash working in Chrome either (even with the --enable-plugins option), though I can't say I really missed Flash very much.
One thing bugs me about Google Chrome — being bugged to make it the default browser. I’m not quite sure why browser designers feel the necessity to implant nagware into otherwise nifty software. Prompt once, and then let it go, folks. Chrome isn’t alone in this, but it’s still an unnecessary annoyance.
Chrome on Linux is still a work in progress, but it seems good enough to use full-time if you don’t mind reverting to Firefox or another browser occasionally. For the most part, I’ve been able to rely on Chrome as my primary browser and probably will continue using Chrome on my main desktop for the foreseeable future.
Sunday, August 2, 2009
Debian GNU/Linux 6.0 "Squeeze" release goals
Following up on its decision to adopt the policy of timed release freezes beginning with the next release of Debian GNU/Linux, the Debian Release Team has now published their list of release goals for the upcoming release of Debian GNU/Linux 6.0, code-named "Squeeze".
In light of these goals, and in consideration of the Debian community's feedback to the Release Team's initial announcement during the keynote of this year's DebConf in Caceres, Spain, the Release Team has additionally decided to revisit its decision to set December 2009 as the proposed freeze date. A new timeline will be announced by the Debian Release Team in early September.
Luk Claes, Debian Release Manager, underlines the team's commitment to quality, saying: "In Debian we always strive to achieve the greatest quality in our releases. The ambitious goals that we have set for ourselves will help to secure this quality in the upcoming release."
The Debian Release Team - in cooperation with the Debian Infrastructure Team - plans to include the following goals in the upcoming release:
* Multi-arch support, which will for instance improve the installation of 32-bit packages on 64-bit machines
* kFreeBSD support, introducing the first non-Linux architecture into Debian
* Improved boot performance using dash as the new default shell, and a dependency-based boot system that will both clean up the boot process and help performance through parallel processing
* A further enhanced Quality Assurance process resulting in higher quality packages. This includes:
o Clean installation, upgrade and removal of all packages
o Automatic rejection of packages failing basic quality checks
o Double compilation support
* Preparation for new package formats to help streamline future development and to introduce improved compression algorithms
* Removal of obsolete libraries for improved security
* Full IPv6 support
* Large File Support
* Automatic creation of debug packages for the entire archive, a Google Summer of Code Project pending integration into the infrastructure
* Move of packages' long descriptions into a separate "translated package list", which will facilitate their translation and also provide a smaller footprint for embedded systems thanks to smaller Packages files.
* Better integration of debtags, a system to tag packages with multiple attributes for easier package selection
* Discard and rebuild of binary packages uploaded by maintainers, leaving only packages built in a controlled environment
The Debian Project looks forward to working with its many upstream projects and the worldwide community of Free Software developers in preparing the next high-quality Debian release.
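One of the goals listed above, making dash the default shell, is easy to inspect on an installed system. A purely illustrative check (not part of the announcement) is to resolve the /bin/sh symlink, which on Debian-style systems points at the shell that actually runs boot and maintenance scripts:

```shell
# /bin/sh is a symlink on Debian-style systems; resolving it shows which
# shell actually handles #!/bin/sh scripts (dash, once this goal lands).
readlink -f /bin/sh
```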
Monday, July 27, 2009
Timeline: 40 Years Of Unix
1969
AT&T-owned Bell Laboratories withdraws from development of Multics, a pioneering but overly complicated time-sharing system. Some important principles in Multics were to be carried over into Unix.
Ken Thompson at Bell Labs writes the first version of an as-yet-unnamed operating system in assembly language for a DEC PDP-7 minicomputer.
1970
Thompson's operating system is named Unics, for Uniplexed Information and Computing Service, and as a pun on "emasculated Multics." (The name would later be mysteriously changed to Unix.)
1971
Unix moves to the new DEC PDP-11 minicomputer.
The first edition of the Unix Programmer's Manual, written by Thompson and Dennis Ritchie, is published.
1972
Ritchie develops the C programming language.
1973
Unix matures. The "pipe" is added to Unix; this mechanism for sharing information between two programs will influence operating systems for decades. Unix is rewritten from assembler into C.
1974
"The UNIX Timesharing System," by Ritchie and Thompson, appears in the monthly journal of the Association for Computing Machinery. The article produces the first big demand for Unix.
1976
Bell Labs programmer Mike Lesk develops UUCP (Unix-to-Unix Copy Program) for the network transfer of files, e-mail and Usenet content.
1977
Unix is ported to non-DEC hardware, including the IBM 360.
1978
Bill Joy, a graduate student at UC Berkeley, sends out copies of the first Berkeley Software Distribution (1BSD), essentially Bell Labs' Unix v6 with some add-ons. BSD becomes a rival Unix branch to AT&T's Unix; its variants and eventual descendants include FreeBSD, NetBSD, OpenBSD, DEC Ultrix, SunOS, NeXTstep/OpenStep and Mac OS X.
1980
4BSD, with DARPA sponsorship, becomes the first version of Unix to incorporate TCP/IP.
1982
Bill Joy co-founds Sun Microsystems to produce the Unix-based Sun workstation.
1983
AT&T releases the first version of the influential Unix System V, which would later become the basis for IBM's AIX and Hewlett-Packard's HP-UX.
1984
X/Open Co., a European consortium of computer makers, is formed to standardize Unix in the X/Open Portability Guide.
1985
AT&T publishes the System V Interface Definition, an attempt to set a standard for how Unix works.
1986
Rick Rashid and colleagues at Carnegie Mellon University create the first version of Mach, a replacement kernel for BSD Unix.
1987
AT&T Bell Labs and Sun Microsystems announce plans to co-develop a system to unify the two major Unix branches.
Andrew Tanenbaum writes Minix, an open-source Unix clone for use in computer science classrooms.
1988
The "Unix Wars" are under way. In response to the AT&T/Sun partnership, rival Unix vendors including DEC, HP and IBM form the Open Software Foundation (OSF) to develop open Unix standards. AT&T and its partners then form their own standards group, Unix International.
The IEEE publishes Posix (Portable Operating System Interface for Unix), a set of standards for Unix interfaces.
1989
Unix System Labs, an AT&T Bell Labs subsidiary, releases System V Release 4 (SVR4), its collaboration with Sun that unifies System V, BSD, SunOS and Xenix.
1990
The OSF releases its SVR4 competitor, OSF/1, which is based on Mach and BSD.
1991
Sun announces Solaris, an operating system based on SVR4.
Linus Torvalds writes Linux, an open-source OS kernel inspired by Minix.
1992
The Linux kernel is combined with GNU to create the free GNU/Linux operating system, which many refer to as simply "Linux."
1993
AT&T sells its subsidiary Unix System Laboratories and all Unix rights to Novell. Later that year, Novell transfers the Unix trademark to the X/Open group.
Microsoft introduces Windows NT, a powerful, 32-bit multiprocessor operating system. Fear of NT spurs true Unix-standardization efforts.
1996
X/Open merges with the OSF to form The Open Group.
1999
Thompson and Ritchie receive the National Medal of Technology from President Clinton.
2002
The Open Group announces Version 3 of the Single Unix Specification.
Sources: A Quarter Century of UNIX, by Peter H. Salus; Microsoft; AT&T; The Open Group; Wikipedia and other sources
Sunday, July 19, 2009
Ksplice gives Linux users 88% of kernel updates without rebooting
Have you ever wondered why some updates or installs require a reboot, and others don’t? The main reason relates to kernel-level (core) services running in memory which either have been altered by the update to include new data that can’t be “squeezed in” to its existing footprint, or are currently attached to multiple separate processes which cannot be accounted for without a reboot. Ksplice has figured out a way around that issue in a majority of the cases.
A recent examination of Linux kernel updates suggests that 88% of those which today fall under the "must reboot" category, due to the types of programs they affect, could be converted into rebootless form using Ksplice.
The Ksplice website includes a How It Works page. It explains that the lifecycle of Linux bugs (with Ksplice) operates like this:
1. A dangerous bug or security hole is discovered in Linux.
2. Linux developers create a fix or patch which corrects the problem, but may require a reboot.
3. Ksplice software analyzes the fix, and if possible creates an update “image” which can be implemented on your system without rebooting.
4. The update manager then sees either the Ksplice update, or the regular Linux kernel patch (if it could not be made into a rebootless version), and installs it.
This ability comes from an analysis of the object code used on your system before the patch is applied. This data is compared to the object code of the update. As such, memory variable locations can be isolated in both the pre- and post-versions. And with Ksplice, a type of “difference utility” (where it compares the two to see what’s changed) is run, allowing a full-on inspection of the update to determine if a rebootless version can be created.
If a rebootless version is possible, it creates the image which, when applied, maps the new memory locations as needed, and installs the new compiled code as needed. If it’s not possible, then the update is distributed through the normal Linux update mechanisms, and a reboot is required after applying.
Minimal Interruption
Ksplice says the system is disabled for 0.7 milliseconds while the update is applied, which for most types of applications is an acceptable down-time, especially when compared alongside a hard reboot.
As mentioned, 88% of Linux kernel patches which require a reboot today would not require a reboot with Ksplice. The remaining 12% fall into the category of something expanding, whereby the new data structures in the update have increased in size and cannot physically be squeezed into the quantity of memory allocated for the previous version’s structures.
A Ksplice Uptrack service is available today for Ubuntu 9.04 (Jaunty), which, according to the website, provides near 100% uptime and "rebootless updates". See also their full brochure (PDF 320KB).
Linux Only
This technology is only for Linux at the current time. No features like this are available for Windows. The technology does require a kernel patch, as Ksplice itself must be integrated into the kernel. The installation software (.deb package) handles this for you.
See Ars Technica
Rick’s Opinion
This technique would allow enterprise-level Linux installations a greater percentage of uptime. Many service providers strive for what they call "five 9s" of uptime (99.999%), which over the course of a year means the system would be down for a grand total of only about 5m 15s. Some organizations strive for six 9s, which allows only about 31 seconds of downtime.
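The arithmetic behind those availability figures is easy to verify. A quick sketch, assuming a non-leap year and rounding to whole seconds:

```shell
# Total seconds in a non-leap year
year=$((365 * 24 * 3600))      # 31,536,000 seconds
# "Five 9s" permits 0.001% downtime; "six 9s" permits 0.0001%
five9s=$((year / 100000))      # 315 seconds, i.e. about 5m 15s
six9s=$((year / 1000000))      # 31 seconds
echo "five 9s allows ${five9s}s down per year; six 9s allows ${six9s}s"
```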
Having the ability to reboot only in 12% of kernel updates, which don’t occur that often today anyway (a few per month) would mean much longer up-time, should a person leave their machine on 24/7.
Ubuntu desktop Linux users would also see benefits, as they would not have to reboot nearly as often during the course of the day, which is typically when the Update Manager says "Oh, here's a host of 15 updates to install" -- though these rarely require a reboot anyway.
This kind of tool would allow most kernel patches to be rolled out with a greater degree of frequency without interrupting anybody’s system. This means faster turn-around times from security holes and bug fixes, and without a disruption to people’s daily routines. What could be finer?
Monday, July 13, 2009
Limit the CPU usage of an application (process) - cpulimit
cpulimit is a simple program that attempts to limit the CPU usage of a process (expressed as a percentage, not in CPU time). This is useful for controlling batch jobs, when you don't want them to eat too much CPU. It does not act on the nice value or other scheduling priorities, but on real CPU usage. Also, it is able to adapt itself to the overall system load, dynamically and quickly.
Installation:
Download the latest stable version of cpulimit
Then extract the source and compile with make:
tar zxf cpulimit-xxx.tar.gz
cd cpulimit-xxx
make
The resulting executable is named cpulimit. You may want to copy it to /usr/bin.
Usage:
Limit the process 'bigloop' by executable name to 40% CPU:
cpulimit --exe bigloop --limit 40
cpulimit --exe /usr/local/bin/bigloop --limit 40
Limit a process by PID to 55% CPU:
cpulimit --pid 2960 --limit 55
cpulimit should be run at least as the same user that runs the controlled process, but it is better to run cpulimit as root, in order to have a higher priority and more precise control.
Note:
If your machine has one processor, you can limit the percentage from 0% to 100%, which means that if you set, for example, 50%, your process cannot use more than 500 ms of CPU time each second. But if your machine has four processors, the percentage may vary from 0% to 400%, so setting the limit to 200% means using no more than half of the available power. In any case, the percentage is the same as what you see when you run top.
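Following the note above, a small sketch of how you might compute a sensible limit on a multi-core machine; the PID placeholder is illustrative, and the core count is read with getconf:

```shell
# Count online CPU cores; on an N-core box cpulimit accepts 0..N*100,
# matching the percentages shown by top.
cores=$(getconf _NPROCESSORS_ONLN)
# Cap a process at half of the machine's total power:
limit=$((cores * 100 / 2))
echo "cpulimit --pid <PID> --limit $limit"
```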
Wednesday, July 8, 2009
Chrome OS
An innocuous posting on Google's official blog last night has sent huge waves throughout the IT community today. In that post, Google has announced the next battle in the war for operating system dominance has begun. And Linux will be their weapon of choice.
The Google blog has often been the launch point for major news from the Mountain View, CA company, and this piece of information was no exception: Google plans to release a new Chrome Operating System, touted as an extension of their Google Chrome browser.
Chrome, which was released just nine months ago, has proven to be a popular offering, though not nearly as popular yet as Firefox, the open source browser from the Mozilla Project. According to a recent Net Applications survey, Firefox held 22.1% of the average daily market share at the end of June, Safari 9.0%, and Chrome 2.0%. Microsoft's Internet Explorer has slipped to 65.6% of the browser market.
The new Chrome OS "will initially be targeted at netbooks," according to the post from Sundar Pichai, VP Product Management and Linus Upson, Engineering Director. This will be a natural target platform for the new operating system, which should be ready for consumers in the latter half of 2010.
"Speed, simplicity and security are the key aspects of Google Chrome OS. We're designing the OS to be fast and lightweight, to start up and get you onto the web in a few seconds. The user interface is minimal to stay out of your way, and most of the user experience takes place on the web. And as we did for the Google Chrome browser, we are going back to the basics and completely redesigning the underlying security architecture of the OS so that users don't have to deal with viruses, malware and security updates. It should just work," wrote Pichai and Upson.
The technical details were sparse in the announcement, but some key bits of information were given. The new OS "will run on both x86 as well as ARM chips" and the new Chrome OS will be based on Linux.
"The software architecture is simple--Google Chrome running within a new windowing system on top of a Linux kernel. For application developers, the web is the platform. All web-based applications will automatically work and new applications can be written using your favorite web technologies. And of course, these apps will run not only on Google Chrome OS, but on any standards-based browser on Windows, Mac, and Linux thereby giving developers the largest user base of any platform," according to the announcement. Google plans to release the source code for Chrome OS later this year.
The blog entry was very clear in differentiating this new project from the existing Android project, another Linux-based platform project from Google.
"Android was designed from the beginning to work across a variety of devices from phones to set-top boxes to netbooks. Google Chrome OS is being created for people who spend most of their time on the web, and is being designed to power computers ranging from small netbooks to full-size desktop systems," Pichai and Upson wrote. "While there are areas where Google Chrome OS and Android overlap, we believe choice will drive innovation for the benefit of everyone, including Google."
Reaction from the IT community has been invariably along one theme: that this is the biggest challenge Microsoft has faced to date as the top operating system provider. There is little evidence that this won't be the case, as Google has historically been a strong foil to Microsoft's business strategy.
In the meantime, developers and contributors in the Linux and in other open source communities should be busy, as the blog entry concludes: "We have a lot of work to do, and we're definitely going to need a lot of help from the open source community to accomplish this vision."
The Google blog has often been the launch point for major news from the Mountain View, CA company, and this piece of information was no exception: Google plans to release a new Chrome Operating System, touted as an extenstion of their Google Chrome browser.
Chrome, which was just released nine months ago, has proven to be a popular offering, though not nearly as popular yet as Firefox, the open source browser from the Mozilla Project. According a recent Net Applications survey, Firefox held 22.1% of the average daily market share at the end of June, Safari 9.0%, and Chrome with 2.0%. Microsoft’s Internet Explorer has slipped to 65.6% of the browser market.
The new Chrome OS "will initially be targeted at netbooks," according to the post from Sundar Pichai, VP Product Management and Linus Upson, Engineering Director. This will be a natural target platform for the new operating system, which should be ready for consumers in the latter half of 2010.
"Speed, simplicity and security are the key aspects of Google Chrome OS. We're designing the OS to be fast and lightweight, to start up and get you onto the web in a few seconds. The user interface is minimal to stay out of your way, and most of the user experience takes place on the web. And as we did for the Google Chrome browser, we are going back to the basics and completely redesigning the underlying security architecture of the OS so that users don't have to deal with viruses, malware and security updates. It should just work," wrote Pichai and Upson.
The technical details were sparse in the announcement, but some key bits of information were given. The new OS "will run on both x86 as well as ARM chips" and the new Chrome OS will be based on Linux.
"The software architecture is simple--Google Chrome running within a new windowing system on top of a Linux kernel. For application developers, the web is the platform. All web-based applications will automatically work and new applications can be written using your favorite web technologies. And of course, these apps will run not only on Google Chrome OS, but on any standards-based browser on Windows, Mac, and Linux thereby giving developers the largest user base of any platform," according to the announcement. Google plans to release the source code for Chrome OS later this year.
The blog entry was very clear to differentiate this new project from the existing Android project, another Linux-based platform project from Google.
"Android was designed from the beginning to work across a variety of devices from phones to set-top boxes to netbooks. Google Chrome OS is being created for people who spend most of their time on the web, and is being designed to power computers ranging from small netbooks to full-size desktop systems," Pichai and Upson wrote. "While there are areas where Google Chrome OS and Android overlap, we believe choice will drive innovation for the benefit of everyone, including Google."
Reaction from the IT community has been invariably along one theme: that this is the biggest challenge Microsoft has faced to date as the top operating system provider. There is little evidence that this won't be the case, as Google has historically been a strong foil to Microsoft's business strategy.
In the meantime, developers and contributors in the Linux and in other open source communities should be busy, as the blog entry concludes: "We have a lot of work to do, and we're definitely going to need a lot of help from the open source community to accomplish this vision."
Tuesday, July 7, 2009
Linux on the Desktop
Desktop Linux adoption is primarily driven by cost reduction
In a recent online survey of over a thousand IT professionals with experience of desktop Linux deployment in a business context, over 70% of respondents indicated cost reduction as the primary driver for adoption. Ease of securing the desktop and a general lowering of overheads associated with maintenance and support were cited as factors contributing to the benefit.
But deployment is currently limited, and challenges to further adoption frequently exist
The majority of desktop Linux adopters have so far rolled Linux out to less than 20% of their total PC user base, though the opportunity for more extensive deployment is clearly identified. In order for Linux to reach its full potential in an organization, however, it is necessary to pay particular attention to challenges in the areas of targeting, user acceptance and application compatibility.
Selective deployment based on objective targeting will yield the highest ROI and acceptance
Rolling out Linux to power users, creative staff and highly mobile professionals can represent a challenge from a migration cost, requirements fulfillment and user satisfaction perspective.
However, the needs of transaction workers and general professional users with lighter and more predictable requirements can be met cost-effectively with Linux without running into the same user acceptance issues. With groups such as this typically accounting for a high proportion of the user base, there is a clear opportunity to deploy desktop Linux selectively. Optimization of the desktop estate is therefore likely to be achieved through a mix of Windows and Linux in most situations.
Linux desktop roll out is easier than expected for properly targeted end-user groups
Those with experience are much more likely to regard non-technical users as primary targets for Linux. The message here is that in practice, Linux is easier to deploy to end users than many imagine before they try it. For the majority of application types, including office tools, email clients and browsers, there is a strong consensus that the needs of most users can be met by native Linux equivalents to traditional Windows solutions. Where this is not the case, thin client or browser based delivery and/or one of the various emulation or virtualization options are available.
A focus on usability reflects a maturing of thinking
In line with the acknowledged importance of a good user experience, usability is now the most sought after attribute of a Linux distribution. Together with the emphasis on cost reduction already seen, this suggests a maturing of attitudes in relation to Linux, shifting the previous focus on pure technical considerations to a more balanced view of what really matters in a business context. This observation is significant when reviewing the mainstream relevance of the desktop Linux proposition.
Sunday, July 5, 2009
Query Apache logfiles via SQL
The Apache SQL Analyser (ASQL) is designed to read Apache log files and dynamically convert them to SQLite format so you can analyse them in a more meaningful way. Using the cut, uniq and wc commands, you can parse a log file by hand to figure out how many unique visitors came to your site, but using Apache SQL Analyser is much faster and means that the whole log gets parsed only once. Finding unique addresses is as simple as a SELECT DISTINCT command.
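For comparison, here is a sketch of the by-hand approach mentioned above, using a tiny, made-up log in common log format (the sample entries and the `access.log` filename are illustrative only):

```shell
# Create a tiny sample access log (hypothetical data, for illustration only)
printf '%s\n' \
  '10.0.0.1 - - [05/Jul/2009:10:00:00 +0000] "GET / HTTP/1.1" 200 512' \
  '10.0.0.2 - - [05/Jul/2009:10:00:01 +0000] "GET /about HTTP/1.1" 200 256' \
  '10.0.0.1 - - [05/Jul/2009:10:00:02 +0000] "GET /contact HTTP/1.1" 200 128' \
  > access.log

# Count unique visitor IPs: field 1 is the client address in common log
# format; sort -u removes duplicates before counting.
cut -d' ' -f1 access.log | sort -u | wc -l
# prints 2 (two unique client addresses)
```

With asql the equivalent is a single `SELECT DISTINCT(source) FROM logs`, and the log is parsed only once no matter how many queries you run afterwards.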
In terms of requirements you will need only the Perl modules for working with SQLite databases, and the Term::Readline module. On a Debian system you may install both via:
apt-get install libdbd-sqlite3-perl libterm-readline-gnu-perl
Usage
Once installed, either via the package or via the source download, start the shell by typing "asql". Once the shell starts you have several commands available to you; enter help for a complete list. The three most commonly used commands are: load, select & show
The following sample session provides a demonstration of typical usage of the shell; it also demonstrates the alias command, which may be used to create persistent aliases:
asql v0.6 - type 'help' for help.
asql> load /var/logs/apache/access.log
Loading: /var/logs/apache/access.log
asql> select COUNT(id) FROM logs
46
asql> alias hits SELECT COUNT(id) FROM logs
ALIAS hits SELECT COUNT(id) FROM logs
asql> alias ips SELECT DISTINCT(source) FROM logs;
ALIAS ips SELECT DISTINCT(source) FROM logs;
asql> hits
46
asql> alias
ALIAS hits SELECT COUNT(id) FROM logs
ALIAS ips SELECT DISTINCT(source) FROM logs;
Wednesday, July 1, 2009
How to calculate the CRC checksum and byte count for file(s)
cksum prints the CRC checksum for each file along with the number of bytes in the file, and the file name unless no arguments were given.
cksum is typically used to ensure that files transferred by unreliable means have not been corrupted, by comparing the cksum output for the received files with the cksum output for the original files (typically given in the distribution).
The CRC algorithm is specified by the POSIX standard. It is not compatible with the BSD or System V sum algorithms and cksum is more robust.
The only options are --help and --version.
An exit status of zero indicates success, and a nonzero value indicates failure.
Example of using cksum:
Create a file with the following text:
$ echo "Open source is a development method for software that harnesses the power of distributed peer review and transparency of process." > file.txt
$ cksum file.txt
1121778036 130 file.txt
Here cksum calculates a cyclic redundancy check (CRC) of the given file (file.txt). Users can check the integrity of the file and see if it has been modified. Use your favorite text editor to remove the "." from the end of the sentence, then run cksum again on the same file and observe the difference in the output:
$ cksum file.txt
2131559972 129 file.txt
Another Example:
cksum can also be used for checking a batch of files. First, get the checksums of all the files within the directory:
$ cksum * > /someother/location/cksum.list
The above command generates a file of checksums.
Now, after transferring the files, run cksum on the same set of files to get the new checksum figures, and finally compare the two lists to figure out whether the files have been tampered with:
$ cksum * > /someother/location/cksum.list-2
$ diff cksum.list cksum.list-2
cksum can also be used to search quickly for duplicates.
$ cksum *
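As a minimal sketch of duplicate hunting (the sample files are invented for illustration): files that share both the checksum and the byte count are almost certainly identical, so a checksum that appears more than once flags candidate duplicates.

```shell
# Two identical files and one different one (sample data)
echo "hello" > a.txt
echo "hello" > b.txt
echo "world" > c.txt

# Print only checksums that occur more than once: extract the CRC column,
# sort so duplicates become adjacent, then let uniq -d report the repeats.
cksum a.txt b.txt c.txt | awk '{print $1}' | sort | uniq -d
```

Here a.txt and b.txt have identical content, so their shared CRC is printed once; c.txt differs and is silent. For a real check you would then compare the candidate files byte-for-byte (e.g. with cmp), since a CRC match alone is not proof of identity.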
Monday, June 22, 2009
Fedora 11 Provides Glimpse of Future for Red Hat Enterprise Linux
The Fedora Project, a Red Hat, Inc.-sponsored and community-supported open source collaboration project, has announced the availability of Fedora 11, the latest version of its free open source operating system.
Fedora 11's feature set provides improvements in virtualization, including an upgraded interactive console, a redesigned virtual machine guest creation wizard and better security with SELinux support for guests. There are also numerous desktop improvements such as automatic font and content handler installation using PackageKit, better fingerprint reader support, and an updated input method system for supporting international language users.
Fedora, which now has almost 29,000 project members, functions as a kind of community-oriented R&D lab. The project fulfills several purposes and "one of them is to give Red Hat a place to contribute code and have it integrated into a release that is in very wide distribution to millions and millions of users," Paul Frields, Fedora project leader at Red Hat, tells Linux Executive Report.
The Fedora Project aims to release a new complete, general-purpose, no-cost operating system approximately every six months. "If you look at that distribution, what you see in there are the latest technologies that are beaten into shape by our community; and bug fixes and all sorts of improvements are applied, and that resulting platform is something that anybody can install and use," says Frields. The project allows Red Hat to give back and interface closely with the open source community, and is also used by Red Hat engineers as a platform for participation in other open source communities.
Looking at Fedora today gives you an idea of where the Red Hat Enterprise Linux product is headed in the future, says Frields. Somewhere down the line, Red Hat looks at the Fedora product and "more or less makes a snapshot" of it, and starts to do its intense QA processes and work with hardware and software vendors for certifications to make sure that partners and customers get the features they need in an enterprise-ready product. "Eventually, what comes out at the end is Red Hat Enterprise Linux."
By separating the two segments of end users, the businesses versus the consumers and hobbyists, there is "a lot more clarity in the mission for each product," Frields observes.
Monday, June 15, 2009
Linux 2.6.30's best five features
Windows and Mac OS update every few years. Windows 7 arrives on October 22nd and Apple's Snow Leopard will show up in September. The Linux kernel, the heart of Linux distributions, however, gets updated every few months.
What this means for you is that Windows and Mac OS are taking large, slow steps, while Linux is constantly evolving. Thus, Linux's changes may not be as big from version to version, but they tend to be more thoroughly tested and stable. What most users will like in this release starts with a faster boot-up for Linux.
1. Fast boot. Older versions of Linux spend a lot of time scanning for hard drives and other storage devices, and then for partitions on each of them. This eats up a lot of milliseconds because it looks for them one at a time. With the 2.6.30 boot-up, however, instead of waiting for this to finish, the rest of the kernel continues booting. At the same time, the storage devices are being checked in parallel, two or more at a time, to further improve the system's boot speed.
There are other efforts afoot to speed up Linux's boot times. The upshot of all this work will be to keep Linux the fastest booting operating system well into the future.
2. Storage improvements. Speaking of storage devices, there's a long laundry list of file system improvements. I won't go into most of those in detail. Suffice it to say that no matter what file system you use, either locally or on a network, chances are that its performance and stability have been improved. For a high-level view of these changes see the Linux Kernel Newbies 2.6.30 reference page.
I will mention one issue though simply because, as Jonathan Corbet, Linux kernel developer and journalist put it, "Long, highly-technical, and animated discussion threads are certainly not unheard of on the linux-kernel mailing list. Even by linux-kernel standards, though, the thread that followed the 2.6.29 announcement was impressive." You can say that again.
The argument... ah, discussion was over how file systems and block I/O (input/output) using the fsync() function in Linux should work. The really simple version of this discussion is that fsync has defaulted to forcing the system to write the file system journal and related file data immediately. Most I/O schedulers, though, prioritize reads over writes. On a non-journaling file system, that's not a big deal. But a journal write has to go through immediately, and it can take up a lot of time while it's doing it.
On Ext3, probably the most widely used Linux file system, the result is that Ext3 is very stable, because it makes sure those journal writes go through, but at the same time it's very slow, once more because of those journal writes. You can argue almost endlessly over how to handle this problem, or even that Ext3 fsync function runs perfectly fine. Linus Torvalds, however, finally came down on the side of making the writes faster.
The arguments continue, though, on how to handle fsync(). And, in addition, side discussions on how to handle file reads, writes and creation continue on. For users, most of this doesn't matter; developers who get down and dirty with file systems, though, should continue to pay close attention.
3. Ext4 tuning. Linux's new Ext4 file system has been in the works for several years now. It's now being used in major Linux distributions like Ubuntu 9.04, and it's working well. That said, Ext4 has gotten numerous minor changes to improve its stability and performance.
I've been switching my Linux systems to Ext4 over the last few months. If you've been considering making the switch, wait until your distribution adopts the 2.6.30 kernel, and give it a try. I think you'll be pleased.
4. Kernel Integrity Management. Linux is more secure than most other operating systems. Notice, though, that I say it's more secure. I don't say, and I'd be an idiot if I did, that it's completely secure. Nothing is in this world. The operating system took a big step forward in making it harder for any would-be cracker to break it, though, with the introduction of Integrity Management.
This is an old idea that's finally made it into the kernel. What it boils down to is that the kernel checks the integrity of files and their metadata when they're called by the operating system, using EVM (extended verification module) code. If a file appears to have been tampered with, the system can lock down its use and notify the administrator that mischief is afoot.
While SE-Linux (Security Enhanced Linux) is far more useful for protecting most users, I can see Integrity Management being very handy for Linux devices that don't get a lot of maintenance, such as Wi-Fi routers. Attacks on devices are beginning to happen, and a simple way to lock them down if their files have been changed strikes me as a really handy feature.
5. Network file system caching. How do you speed up a hard drive, or anything else with a file system on it for that matter? You use a cache. Now, with the adoption of FS-Cache, you can use caching with networked file systems.
Right now it only works with NFS (Network File System) and AFS (Andrew File System). These network file systems tend to be used in Unix and Linux-only shops, but there's no reason why you can't use FS-Cache on top of any file system that's network accessible.
I tend to be suspicious of network caching since it's all too easy to lose a network connection, which means you can be left with a real mess between what the server thinks has been changed, added, and saved and what your local cache thinks has been saved. FS-Cache addresses this problem of cache coherency by using journaling on the cache so you can bring the local and remote file systems back into agreement.
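As a rough sketch of how an administrator would turn this on for NFS (the server name and export path here are hypothetical, and the cachefilesd package/service name varies by distribution), the client-side change is essentially one mount option:

```shell
# Start the local caching daemon that backs FS-Cache on disk
# (assumes the cachefilesd package is installed; root privileges required).
service cachefilesd start

# Mount the NFS export with the "fsc" option so reads are cached
# locally through FS-Cache.
mount -t nfs -o fsc server:/export /mnt/export
```

This is a configuration sketch rather than something to copy verbatim; check your distribution's documentation for the exact daemon setup before relying on it.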
While 2.6.30 may not be the most exciting Linux kernel release, it does include several very solid and important improvements. Personally, I plan on switching my servers over to 2.6.30-based distributions as soon as they become available. If your concerns are mostly with the Linux desktop, though, I wouldn't be in that much of a hurry; most of the updates are more important for server administrators than desktop users.
Thursday, June 11, 2009
Squid Error : Name error: the domain name does not exist
Problem (example):
The requested URL could not be retrieved
While trying to retrieve the URL: http://intranet/
The following error was encountered:
Unable to determine IP address from host name for http://intranet
The dnsserver returned:
Name Error: The domain name does not exist.
This means that:
The cache was not able to resolve the hostname presented in the URL.
Check if the address is correct.
Solution:
append_domain : This directive helps Squid turn single-component hostnames into fully qualified domain names. For example, http://www/ becomes www.example.com/. This is especially important if you are participating in a cache hierarchy.
Add the following directive to your squid.conf file to solve the above problem:
append_domain .domainname.com
Tuesday, June 9, 2009
Setting the SUID/SGID bits
When the SUID bit is set on an executable, it runs with the privileges of the file's owner rather than those of the user launching it. That is, if you own an executable and another person runs it, it runs with your permissions, not theirs. By default, a program runs with the privileges of the person executing the binary.
The SGID bit works the same way as SUID, except the program runs with the permissions of the file's group. It can also be set on directories, in which case files and subdirectories created inside inherit the directory's group ownership.
Note: Making SUID and SGID programs completely safe is very difficult (perhaps impossible), so if you are a system administrator, it is best to consult security professionals before granting access to root-owned applications by setting the SUID bit. As a home user (where you are both the normal user and the superuser), the SUID bit helps you do a lot of things easily without having to log in as the superuser every now and then.
Setting the SUID bit on a file:
Suppose I have an executable called "killprocess" and I need to set the SUID bit on it. At the command prompt, issue: chmod u+s killprocess
Now check the permissions on the file with ls -l killprocess and observe the "s" that has been added for the SUID bit:
-rwsr-xr-x 1 root root 6 Jun 7 12:16 killprocess
Setting the SGID bit on a file:
At the command prompt, issue: chmod g+s killprocess
This sets the SGID bit on the same file. Check the permissions with: ls -l killprocess
-rwsr-sr-x 1 root root
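The commands above can be tried end-to-end on scratch files. This is a sketch assuming GNU coreutils on Linux; the mode strings in the comments are what stat -c '%A' reports there:

```shell
# Create a scratch executable and set both SUID and SGID on it.
f=$(mktemp)
chmod 0755 "$f"
chmod u+s "$f"            # SUID: the owner execute slot shows 's'
chmod g+s "$f"            # SGID: the group execute slot shows 's'
stat -c '%A' "$f"         # prints -rwsr-sr-x

# SGID on a directory: new files inside inherit the directory's group.
d=$(mktemp -d)
chmod 2775 "$d"           # leading 2 is the SGID bit in numeric mode
stat -c '%A' "$d"         # prints drwxrwsr-x
```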
Wednesday, June 3, 2009
Information Security Incident Rating Category
Category I: Unauthorized Root/Administrator Access
A Category I event occurs when an unauthorized party gains 'root' or 'administrator' control of a client computer. Unauthorized parties include human adversaries and automated malicious code, such as a worm. On UNIX-like systems, the 'root' account is the 'super-user,' generally capable of taking any action desired by the unauthorized party. (Note that so-called 'Trusted' operating systems (OS), like Sun Microsystem's 'Trusted Solaris,' divide the powers of the root account among various operators. Compromise of any one of these accounts on a 'Trusted' OS constitutes a category I incident.) On Windows systems, the 'administrator' has near complete control of the computer, although some powers remain with the 'SYSTEM' account used internally by the OS itself. (Compromise of the SYSTEM account is considered a category I event as well.) Category I incidents are potentially the most damaging type of event.
Category II: Unauthorized User Access
A Category II event occurs when an unauthorized party gains control of any non-root or non-administrator account on a client computer. User accounts include those held by people as well as applications. For example, services may be configured to run or interact with various non-root or non-administrator accounts, such as 'apache' for the Apache web server or 'IUSR_machinename' for Microsoft's Internet Information Services (IIS). Category II incidents are treated as though they will quickly escalate to Category I events. Skilled attackers will elevate their privileges once they acquire user status on the victim machine.
Category III: Attempted Unauthorized Access
A Category III event occurs when an unauthorized party attempts to gain root/administrator or user level access on a client computer. The exploitation attempt fails for one of several reasons. First, the target may be properly patched to reject the attack. Second, the attacker may find a vulnerable machine, but he may not be sufficiently skilled to execute the attack. Third, the target may be vulnerable to the attack, but its configuration prevents compromise. (For example, an IIS web server may be vulnerable to an exploit employed by a worm, but the default locations of critical files have been altered.)
Category IV: Successful Denial of Service Attack
A Category IV event occurs when an adversary takes damaging action against the resources or processes of a target machine or network. Denial of service attacks may consume CPU cycles, bandwidth, hard drive space, user's time, and many other resources.
Category V: Poor Security Practice or Policy Violation
A Category V event occurs when an analyst detects a condition which exposes the network and/or systems on the network to an unnecessary risk of exploitation. For example, should an analyst discover that a domain name system server allows zone transfers to all Internet users, he would classify the incident as a category V event. (Zone transfers provide complete information on the host names and IP addresses of client machines.) Violations of a client's security policy also constitute a category V incident. Should a client forbid the use of peer-to-peer file sharing applications, detections of Napster or Gnutella traffic will be reported as category V events.
Category VI: Reconnaissance/Probes/Scans
A Category VI event occurs when an adversary attempts to learn about a target system or network, with the presumed intent to later compromise that system or network. Reconnaissance events include port scans, enumeration of NetBIOS shares on Windows systems, inquiries concerning the version of applications on servers, unauthorized zone transfers, and similar activity. Category VI activity also includes limited attempts to guess user names and passwords. Sustained, intense guessing of user names and passwords would be considered category III events if unsuccessful.
Category VII: Virus Infection
A Category VII event occurs when a client system becomes infected by a virus. Note the emphasis here is on the term virus, as opposed to a worm. Viruses depend on one or both of the following conditions: (1) human interaction is required to propagate the virus; (2) the virus must attach itself to a 'host' file, such as an email message, Word document, or web page. Worms, on the other hand, are capable of propagating themselves without human interaction or host files. A compromise caused by a worm would qualify as a category I or II event.
Tuesday, May 19, 2009
Perform Client-Server Cross-Platform Backups with Bacula (Part 2)
Database Setup
Now that you've modified the configuration files to suit your needs, use Bacula's scripts to create and define the database tables that it will use.
To set up for MySQL:
# cd /usr/ports/sysutils/bacula/work/bacula-1.32c/src/cats
# ./grant_mysql_privileges
# ./create_mysql_database
# ./make_mysql_tables
If you have a password set for the MySQL root account, add -p to these commands and you will be prompted for the password. You now have a working database suitable for use by Bacula.
Testing Your Tape Drive
Some tape drives are not standard. They require their own proprietary software and can be temperamental when used with other software. Regardless of what software it uses, each drive model can have its own little quirks that need to be catered to. Fortunately, Bacula comes with btape, a handy little utility for testing your drive.
My tape drive is at /dev/sa1. Bacula prefers to use the non-rewind variant of the device, but it can handle the raw variant as well. If you use the rewinding device, then only one backup job per tape is possible. This command will test the non-rewind device /dev/nrsa1:
# /usr/local/sbin/btape -c /usr/local/etc/bacula-sd.conf /dev/nrsa1
Running Without Root
It is a good idea to run daemons with the lowest possible privileges. The Storage Daemon and the Director Daemon do not need root permissions. However, the File Daemon does, because it needs to access all files on your system.
In order to run the daemons under non-root accounts, you need to create a user and a group. Here, I used vipw to create the user. I selected a user ID and group ID of 1002, as they were unused on my system.
bacula:*:1002:1002::0:0:Bacula Daemon:/var/db/bacula:/sbin/nologin
I also added this line to /etc/group:
bacula:*:1002:
The bacula user (as opposed to the Bacula daemon) will have a home directory of /var/db/bacula, which is the default location for the Bacula database.
Now that you have both a bacula user and a bacula group, you can secure the bacula home directory by issuing this command:
# chown -R bacula:bacula /var/db/bacula/
Starting the Bacula Daemons
To start the Bacula daemons on a FreeBSD system, issue the following command:
# /usr/local/etc/rc.d/bacula.sh start
To confirm they are all running:
# ps auwx | grep bacula
root 63416 0.0 0.3 2040 1172 ?? Ss 4:09PM 0:00.01
/usr/local/sbin/bacula-sd -v -c /usr/local/etc/bacula-sd.conf
root 63418 0.0 0.3 1856 1036 ?? Ss 4:09PM 0:00.00
/usr/local/sbin/bacula-fd -v -c /usr/local/etc/bacula-fd.conf
root 63422 0.0 0.4 2360 1440 ?? Ss 4:09PM 0:00.00
/usr/local/sbin/bacula-dir -v -c /usr/local/etc/bacula-dir.conf
Using the Bacula Console
The console is the main interface through which you run jobs, query system status, and examine the Catalog contents, as well as label, mount, and unmount tapes. There are two consoles available: one runs from the command line, and the other is a GNOME GUI. I will concentrate on the command-line console.
To start the console, I use this command:
# /usr/local/sbin/console -c /usr/local/etc/console.conf
Connecting to Director laptop:9101
1000 OK: laptop-dir Version: 1.32c (30 Oct 2003)
*
You can obtain a list of the available commands with the help command. The status all command is a quick and easy way to verify that all components are up and running. To label a Volume, use the label command.
Bacula comes with a preset backup job to get you started. It will back up the directory from which Bacula was installed. Once you get going and have created your own jobs, you can safely remove this job from the Director configuration file.
Not surprisingly, you use the run command to run a job. Once the job runs, the results will be sent to you via email, according to the Messages resource settings within your Director configuration file.
To restore a job, use the restore command. You should choose the restore location carefully and ensure there is sufficient disk space available.
It is easy to verify that the restored files match the original:
# diff -ruN \
/tmp/bacula-restores/usr/ports/sysutils/bacula/work/bacula-1.32c \
/usr/ports/sysutils/bacula/work/bacula-1.32c
#
Creating Backup Schedules
For my testing, I wanted to back up files on my Windows XP machine every hour. I created this schedule:
Schedule {
Name = "HourlyCycle"
Run = Full 1st sun at 1:05
Run = Differential 2nd-5th sun at 1:05
Run = Incremental Hourly
}
Any Job that uses this schedule will be run at the following times:
• A full backup will be done on the first Sunday of every month at 1:05 AM.
• A differential backup will be run on the 2nd, 3rd, 4th, and 5th Sundays of every month at 1:05 AM.
• Every hour, on the hour, an incremental backup will be done.
Creating a Client-only Install
So far we have been testing Bacula on the server. With the FreeBSD port, installing a client-only version of Bacula is easy:
# cd /usr/ports/sysutils/bacula
# make -DWITH_CLIENT_ONLY install
You will also need to tell the Director about this client by adding a new Client resource to the Director configuration file, and you will want to create a Job and FileSet resource for it.
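A minimal sketch of such a Client resource, adapted from the Director configuration shown in Part 1 (the name, address, and password here are placeholders for your new machine):

```
Client {
  Name = newclient-fd
  Address = newclient.example.org
  FDPort = 9102
  Catalog = MyCatalog
  Password = "newclient-password"
  File Retention = 30 days
  Job Retention = 6 months
  AutoPrune = yes
}
```

The Password must match the one in the new client's bacula-fd.conf, just as with the first client.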
When you change the Bacula configuration files, remember to restart the daemons:
# /usr/local/etc/rc.d/bacula.sh restart
Stopping the Storage daemon
Stopping the File daemon
Stopping the Director daemon
Starting the Storage daemon
Starting the File daemon
Starting the Director daemon
#
Perform Client-Server Cross-Platform Backups with Bacula (Part 1)
Bacula is a powerful, flexible, open source backup program.
Having problems finding a backup solution that fits all your needs? One that can back up both Unix and Windows systems? That is flexible enough to back up systems with irregular backup needs, such as laptops? That allows you to run scripts before or after the backup job? That provides browsing capabilities so you can decide upon a restore point? Bacula may be what you're looking for.
Introducing Bacula
Bacula is a client-server solution composed of several distinct parts:
Director
The Director is the most complex part of the system. It keeps track of all clients and files to be backed up. This daemon talks to the clients and to the storage devices.
Client/File Daemon
The Client (or File) Daemon runs on each computer which will be backed up by the Director. Some other backup solutions refer to this as the Agent.
Storage Daemon
The Storage Daemon communicates with the backup device, which may be tape or disk.
Console
The Console is the primary interface between you and the Director. I use the command-line Console, but there is also a GNOME GUI Console.
Each File Daemon will have an entry in the Director configuration file. Other important entries include FileSets and Jobs. A FileSet identifies a set of files to back up. A Job specifies a single FileSet, the type of backup (incremental, full, etc.), when to do the backup, and what Storage Device to use. Backup and restore jobs can be run automatically or manually.
Installation
Bacula stores details of each backup in a database. You can use either SQLite or MySQL, and starting with Bacula Version 1.33, PostgreSQL. Before you install Bacula, decide which database you want to use.
The existing Bacula documentation provides detailed installation instructions if you're installing from source. To install the SQLite version from the FreeBSD port instead:
# cd /usr/ports/sysutils/bacula
# make install
Or, if you prefer to install the MySQL version:
# cd /usr/ports/sysutils/bacula
# make -DWITH_MYSQL install
Configuration Files
Bacula installs several configuration files that should work for your environment with few modifications.
File Daemon on the backup client
The first configuration file, /usr/local/etc/bacula-fd.conf, is for the File Daemon. This file needs to reside on each machine you want to back up. For security reasons, only the Directors specified in this file will be able to communicate with this File Daemon. The name and password specified in the Director resource must be supplied by any connecting Director.
You can specify more than one Director { } resource. Make sure the password matches the one in the Client resource in the Director's configuration file.
The FileDaemon { } resource identifies this system and specifies the port on which it will listen for Directors. You may have to create a directory manually to match the one specified by the Working Directory.
Storage Daemon on the backup server
The next configuration file, /usr/local/etc/bacula-sd.conf, is for the Storage Daemon. The default values should work unless you need to specify additional storage devices.
As with the File Daemon, the Director { } resource specifies the Director(s) that may contact this Storage Daemon. The password must match that found in the Storage resource in the Director's configuration file.
Director on the backup server
The Director's configuration file is, by necessity, the largest of the three. Each Client, Job, FileSet, and Storage Device is defined in this file.
In the following example configuration, I've defined the Job Client1 to back up the files defined by the FileSet Full Set on a laptop. The backup will be performed to the File storage device, which is really a disk located at laptop.example.org.
# more /usr/local/etc/bacula-dir.conf
Director {
Name = laptop-dir
DIRport = 9101
QueryFile = "/usr/local/etc/query.sql"
WorkingDirectory = "/var/db/bacula"
PidDirectory = "/var/run"
Maximum Concurrent Jobs = 1
Password = "lLftflC4QtgZnWEB6vAGcOuSL3T6n+P7jeH+HtQOCWwV"
Messages = Standard
}
Job {
Name = "Client1"
Type = Backup
Client = laptop-fd
FileSet = "Full Set"
Schedule = "WeeklyCycle"
Storage = File
Messages = Standard
Pool = Default
Write Bootstrap = "/var/db/bacula/Client1.bsr"
Priority = 10
}
FileSet {
Name = "Full Set"
Include = signature=MD5 {
/usr/ports/sysutils/bacula/work/bacula-1.32c
}
# If you backup the root directory, the following two excluded
# files can be useful
#
Exclude = { /proc /tmp /.journal /.fsck }
}
Client {
Name = laptop-fd
Address = laptop.example.org
FDPort = 9102
Catalog = MyCatalog
Password = "laptop-client-password"
File Retention = 30 days
Job Retention = 6 months
AutoPrune = yes
}
# Definition of file storage device
Storage {
Name = File
Address = laptop.example.org
SDPort = 9103
Password = "TlDGBjTWkjTS/0HNMPF8ROacI3KlgIUZllY6NS7+gyUp"
Device = FileStorage
Media Type = File
}
Note that the password given by any connecting Console must match the one here.
Having problems finding a backup solution that fits all your needs? One that can back up both Unix and Windows systems? That is flexible enough to back up systems with irregular backup needs, such as laptops? That allows you to run scripts before or after the backup job? That provides browsing capabilities so you can decide upon a restore point? Bacula may be what you're looking for.
Introducing Bacula
Bacula is a client-server solution composed of several distinct parts:
Director
The Director is the most complex part of the system. It keeps track of all clients and files to be backed up. This daemon talks to the clients and to the storage devices.
Client/File Daemon
The Client (or File) Daemon runs on each computer which will be backed up by the Director. Some other backup solutions refer to this as the Agent.
Storage Daemon
The Storage Daemon communicates with the backup device, which may be tape or disk.
Console
The Console is the primary interface between you and the Director. I use the command-line Console, but there is also a GNOME GUI Console.
Each File Daemon will have an entry in the Director configuration file. Other important entries include FileSets and Jobs. A FileSet identifies a set of files to back up. A Job specifies a single FileSet, the type of backup (incremental, full, etc.), when to do the backup, and what Storage Device to use. Backup and restore jobs can be run automatically or manually.
Installation
Bacula stores details of each backup in a database. You can use either SQLite or MySQL, and starting with Bacula Version 1.33, PostgreSQL. Before you install Bacula, decide which database you want to use.
The existing Bacula documentation provides detailed installation instructions if you're installing from source. To install instead the SQLite version of the FreeBSD port:
# cd /usr/ports/sysutils/bacula
# make install
Or, if you prefer to install the MySQL version:
# cd /usr/ports/sysutils/bacula
# make -DWITH_MYSQL install
Configuration Files
Bacula installs several configuration files that should work for your environment with few modifications.
File Daemon on the backup client
The first configuration file, /usr/local/etc/bacula-fd.conf, is for the File Daemon. This file needs to reside on each machine you want to back up. For security reasons, only the Directors specified in this file will be able to communicate with this File Daemon. The name and password specified in the Director resource must be supplied by any connecting Director.
You can specify more than one Director { } resource. Make sure the password matches the one in the Client resource in the Director's configuration file.
The FileDaemon { } resource identifies this system and specifies the port on which it will listen for Directors. You may have to create a directory manually to match the one specified by the Working Directory.
Storage Daemon on the backup server
The next configuration file, /usr/local/etc/bacula-sd.conf, is for the Storage Daemon. The default values should work unless you need to specify additional storage devices.
As with the File Daemon, the Director { } resource specifies the Director(s) that may contact this Storage Daemon. The password must match that found in the Storage resource in the Director's configuration file.
Director on the backup server
The Director's configuration is by necessity the largest of the daemons. Each Client, Job, FileSet, and Storage Device is defined in this file.
In the following example configuration, I've defined the Job Client1 to back up the files defined by the FileSet Full Set on a laptop. The backup will be performed to the File storage device, which is really a disk located at laptop.example.org.
# more /usr/local/etc/bacula-dir.conf
Director {
Name = laptop-dir
DIRport = 9101
QueryFile = "/usr/local/etc/query.sql"
WorkingDirectory = "/var/db/bacula"
PidDirectory = "/var/run"
Maximum Concurrent Jobs = 1
Password = "lLftflC4QtgZnWEB6vAGcOuSL3T6n+P7jeH+HtQOCWwV"
Messages = Standard
}
Job {
Name = "Client1"
Type = Backup
Client = laptop-fd
FileSet = "Full Set"
Schedule = "WeeklyCycle"
Storage = File
Messages = Standard
Pool = Default
Write Bootstrap = "/var/db/bacula/Client1.bsr"
Priority = 10
}
FileSet {
Name = "Full Set"
Include = signature=MD5 {
/usr/ports/sysutils/bacula/work/bacula-1.32c
}
# If you back up the root directory, the following excludes
# can be useful
#
Exclude = { /proc /tmp /.journal /.fsck }
}
Client {
Name = laptop-fd
Address = laptop.example.org
FDPort = 9102
Catalog = MyCatalog
Password = "laptop-client-password"
File Retention = 30 days
Job Retention = 6 months
AutoPrune = yes
}
# Definition of file storage device
Storage {
Name = File
Address = laptop.example.org
SDPort = 9103
Password = "TlDGBjTWkjTS/0HNMPF8ROacI3KlgIUZllY6NS7+gyUp"
Device = FileStorage
Media Type = File
}
Note that the password given by any connecting Console must match the one here.

Sunday, May 17, 2009
Six Strategic Recommendations for Vigorously Developing Linux
As Linux and open-source software reshape the global software industry, and as open source moves toward mainstream status, the reshuffling of the industry holds a rare opportunity. China's existing software industry base and its huge domestic software market provide the strength and the conditions to seize it; developing Linux and open-source software can become the breakthrough that lifts the country's software industry to a higher level.
We therefore offer the following recommendations for vigorously developing Linux:
(1) Establish standards and specifications to solve compatibility problems once and for all
Differences among Linux distributions, among the applications built on them, and in their interfaces to the many hardware devices designed around Windows are the biggest obstacles to Linux adoption. The state should accelerate the creation of Linux standards; regularize the development of Linux and of Linux-based database software, middleware, and applications; and settle application interfaces definitively.
(2) Strengthen research into open-source rules and law to keep the domestic Linux industry healthy
Handle properly the intellectual-property relationship between Linux itself, as free software, and the software developed on top of it. While rapidly advancing Linux software with independent intellectual property, continue to strengthen research into open-source rules and law to ensure the healthy development of the domestic Linux industry.
(3) Build an open public platform to promote technical innovation
Increase funding to establish a public, open, international development platform that gives the Linux software industry technical support; form efficient and flexible industry services; guide enterprises toward joint development and shared benefits; and raise the industry's level of technical innovation.
(4) Segment the market and capture applications in key industries, provinces, and cities
Actively promote domestic Linux software, selecting qualified institutions and key industries for industrialization pilots in e-government, industry informatization, and enterprise informatization. Encourage qualified regions to establish distinctive national Linux industrialization bases.
(5) Complete the Linux industry chain and advance the Linux industry on all fronts
Guide ISVs (independent software vendors) to migrate to Linux, forming a complete Linux industry chain running from the operating system through databases and middleware to applications of every kind. Only with a complete chain can Linux software achieve truly widespread use.
(6) Form a national-level open-source organization and strengthen international cooperation
Encourage enterprises and industry associations to establish domestic Linux open-source community organizations, participate actively in influential international open-source organizations, chart a technical roadmap for domestic Linux, and keep the initiative in domestic Linux development. Encourage and help domestic open-source communities and Linux companies to engage in international cooperation, and create good channels and a good environment for international exchange.
Sunday, May 10, 2009
How to allow access to the mail server by individual domains - Sendmail
The access database (normally in /etc/mail/access) lets a mail administrator control access to the mail server on a per-domain basis. Each database entry consists of a domain name or network number as the key and an action as the value.
Keys can be a fully or partly qualified host or domain name such as host.subdomain.domain.com, subdomain.domain.com, or domain.com. The last two forms match any host or subdomain under the specified domain.
Keys can also be a network address or subnetwork, e.g., 205.199.2.250, 205.199.2, or 205.199. The latter two forms match any host in the indicated subnetwork. Lastly, keys can be user@host.domain to reject mail from a specific user.
Values can be REJECT to refuse connections from this host, DISCARD to accept the message but silently discard it (the sender will think it has been accepted), OK to allow access (overriding other built-in checks), RELAY to allow access including relaying SMTP through your machine, or an arbitrary message to reject the mail with the customized message.
For example, a database might contain:
abc.com REJECT
sendmail.org RELAY
spam@buyme.com 550 Spammer
to reject all mail from any host in the abc.com domain, allow any relaying to or from any host in the sendmail.org domain, and reject mail from spam@buyme.com with a specific message.
Note that the access database is a map and just as with all maps, the database must be generated using makemap. For example: makemap hash /etc/mail/access < /etc/mail/access
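The whole cycle can be exercised on a scratch copy. In the sketch below, /tmp/access is a stand-in for the real /etc/mail/access, and the makemap step is guarded because makemap ships with sendmail and may not be installed:

```shell
#!/bin/sh
# Sketch: build a sendmail access map from a scratch source file.
# /tmp/access stands in for /etc/mail/access.
cat > /tmp/access <<'EOF'
abc.com              REJECT
sendmail.org         RELAY
spam@buyme.com       550 Spammer
EOF

# Compile the text source into the hash database sendmail actually reads
# (this produces /tmp/access.db next to the source file).
if command -v makemap >/dev/null 2>&1; then
    makemap hash /tmp/access < /tmp/access
else
    echo "makemap not installed; would run: makemap hash /tmp/access < /tmp/access"
fi
```

Remember to re-run makemap every time the source file changes; sendmail reads only the compiled database, not the text file.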
Tuesday, May 5, 2009
Iptraf - Ncurses based LAN monitor
IPTraf is a network monitoring utility for IP networks. It intercepts packets on the network and gives out various pieces of information about the current IP traffic over it. Information returned by IPTraf include:
* Total, IP, TCP, UDP, ICMP, and non-IP byte counts
* TCP source and destination addresses and ports
* TCP packet and byte counts
* TCP flag statuses
* UDP source and destination information
* ICMP type information
* OSPF source and destination information
* TCP and UDP service statistics
* Interface packet counts
* Interface IP checksum error counts
* Interface activity indicators
* LAN station statistics
IPTraf can be used to monitor the load on an IP network, the most used types of network services, the proceedings of TCP connections, and others.
Installation:
OpenSuSe 11.1 - here
OpenSuSe 11.0 - here
Others:
Download the latest version. Once you have it downloaded, move it to /usr/local/src and untar it by running: # tar -zxvf iptraf-3.0.0.tar.gz
To compile and install, just change to the iptraf-3.0.0 top-level directory and type: ./Setup
This will automatically compile and install the software and install the binaries into /usr/local/bin so make sure that directory is in your PATH.
Or, the traditional way:
cd src
make clean
make
make install
Precompiled binaries are available in the iptraf-3.0.0.i386.bin.tar.gz file. This contains no source code and is expected to run on Intel x86 Linux with the GNU C Library 2.1 or later.
Once you have it installed, start it up by typing /usr/local/bin/iptraf as root. An ncurses based main menu will come up on your screen and you will have a list of options that you can select.
Sunday, May 3, 2009
Making a bootable OpenBSD install CD 4.5
When a new version of OpenBSD is released, you have a few choices of install media.
Here is a shell script that downloads the release files and builds a bootable ISO:
#!/usr/local/bin/bash
#
## Calomel.org -- Making a bootable OpenBSD CD
## calomel_make_boot_cd.sh
#
arch="amd64" # architecture; depends on your machine type
version="4.5" # OS version
#
echo "building the environment"
mkdir -p /tmp/OpenBSD/$version/$arch
cd /tmp/OpenBSD/$version/$arch
#
echo "getting the release files"
wget --passive-ftp --reject "*iso" ftp://ftp.openbsd.org/pub/OpenBSD/$version/$arch/*
#
echo "building the ISO"
cd /tmp/OpenBSD
mkisofs -r -no-emul-boot -b $version/$arch/cdbr -c boot.catalog -o OpenBSD.iso /tmp/OpenBSD/
#
echo "burning the bootable cd"
nice -18 cdrecord -eject -v speed=32 dev=/dev/rcd0c:0,0,0 -data -pad /tmp/OpenBSD/OpenBSD.iso
#
echo "DONE."
#
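Before burning, it is worth verifying the downloaded sets against the SHA256 manifest that ships with each release (on OpenBSD itself, the sha256(1) utility plays the same role). The sketch below demonstrates the check on Linux with a stand-in file; the real manifest comes from the mirror alongside the sets:

```shell
#!/bin/sh
# Sketch: verify release files against a SHA256 manifest before burning.
# base45.tgz is a stand-in file; a real run checks the downloaded sets.
mkdir -p /tmp/obsd-verify
cd /tmp/obsd-verify
echo "stand-in release file" > base45.tgz
sha256sum base45.tgz > SHA256        # the real SHA256 file comes from the mirror
sha256sum -c SHA256 && echo "checksums OK"
```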
Tuesday, April 28, 2009
Creating backup/restore images using dd
Create a hard disk image: dd if=/dev/hda1 of=/home/hda1.bin
Create a compressed disk image: dd if=/dev/hda1 | gzip > /home/hda1.bin.gz
Back up the MBR: dd if=/dev/hda of=/home/hda.boot.mbr bs=512 count=1
Restore MBR (from a Live CD): dd if=/mnt/hda1/home/hda.boot.mbr of=/dev/hda bs=512 count=1
Backup a drive to another drive: dd if=/dev/hda of=/dev/hdb conv=noerror,sync bs=4k
The command:
dd if=/dev/hda1 of=partitionimage.dd
will back up the /dev/hda1 partition. A whole drive (including the MBR) can be backed up by using just /dev/hda as the input "file". Restoring is done by: dd if=partitionimage.dd of=/dev/hda1
If you have a brand-new hard drive and want to restore the backup onto it (or copy your old system over), the new drive must be at least as large as the old one. Become superuser and switch to runlevel 1 so that you can work on the disk without other services interfering.
Restore either the whole disk or a single partition, depending on how you made the backup: dd if=partitionimage.dd of=/dev/hda1
If you restored the whole drive (/dev/hda), the system will not automatically create the partition device nodes (/dev/hda1, /dev/hda2); the simplest way to make them appear is to reboot.
If you restored the system to a new drive, and your device names changed (for example from /dev/hda to /dev/sda) then you must adapt the bootloader and the mount points. While still on runlevel 1, edit these files:
/boot/grub/menu.lst
/etc/fstab
After your system is able to boot and runs again, you can resize your partitions to fill the rest of the new harddisk (if you want that) as described here
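The image/restore cycle above can be exercised safely on an ordinary file standing in for a partition, before trying it on real devices. All paths in this sketch are illustrative:

```shell
#!/bin/sh
# Sketch: image a "partition", compress a second copy, restore, and compare.
# /tmp/fakedisk stands in for /dev/hda1 so the demo touches no real device.
dd if=/dev/urandom of=/tmp/fakedisk bs=4k count=16 2>/dev/null

# A plain image, and a gzip-compressed image written through a pipe
dd if=/tmp/fakedisk of=/tmp/disk.img 2>/dev/null
dd if=/tmp/fakedisk 2>/dev/null | gzip > /tmp/disk.img.gz

# Restore the compressed image and verify it matches the original byte for byte
gzip -dc /tmp/disk.img.gz | dd of=/tmp/restored 2>/dev/null
cmp /tmp/fakedisk /tmp/restored && echo "restore verified"
```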
Sunday, April 19, 2009
Single application for Traffic Control, Accounting, Bandwidth Shaping & Management
TraffPro is a Linux-based traffic control, traffic accounting, traffic shaping, bandwidth management, and bandwidth control system that helps your company or SOHO network run more steadily and efficiently.
KEY FEATURES
* Monitors the consumption of bandwidth by LAN users
* Reports on users overall traffic
* Receive reports from users of the total traffic consumed on weekdays
* Receive reports on user traffic based on ports
* Receive reports on user traffic based on ports and days
* Report on user status (by IP + port and, where resolvable, by domain)
* Protect against unauthorized access to the Internet (By IP, MAC address, Login Name and Password Control based authorization)
* Use the system for traffic and bandwidth accounting without user authorization, with authorization by IP or IP + MAC only
* Use a Web Based Authentication for users through a Web-Interface to access the internet
* Restrict User Access to Resources Outside Specified Ports and Domains
* User can view amount of traffic used through a web-interface
* Protect the server from external attacks and intrusions (using a built-in firewall)
* Control Server Bandwidth and Traffic
* Use the system together with a DHCP server
* Use the system together with a proxy server
* Use the distributed computing of traffic (via multiple gateway access to the internet with a database and a single administrative console)
The system provides reports on bandwidth consumption by users. Its administration terminal runs under both Windows and Linux, and the module also includes a platform-independent web client.
Wednesday, April 15, 2009
FreeBSD:PortAudit
The portaudit utility allows you to check your installed ports against a database of published security vulnerabilities. This database is maintained by the FreeBSD port administrators and the FreeBSD Security Team. If a security advisory exists for an installed port, a web link to the security advisory is provided for more information.
To install portaudit, enter:
# cd /usr/ports/ports-mgmt/portaudit
# make install clean
# rehash
To check installed ports against the current portaudit database, enter:
# portaudit -Fda
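To keep audits current unattended, a nightly cron entry can refresh the vulnerability database and run the audit. The sketch below is illustrative (the binary path may differ on your system, and portaudit also hooks into the periodic(8) daily security mail by default):

```
# /etc/crontab sketch: refresh the database and audit all installed ports nightly
# (path to portaudit is an assumption; verify with `which portaudit`)
0   3   *   *   *   root   /usr/local/sbin/portaudit -Fda
```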
Thursday, April 9, 2009
Email Tips for HIPAA Compliance
Why should you care about HIPAA?
Among other requirements, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) directs healthcare and insurance providers to protect personally identifiable electronic healthcare information from illicit access, while ensuring the information is continuously available to authorized parties—such as patients and their doctors and insurers.
Why has controlling access to electronic healthcare records suddenly become so important?
With today's epidemic of identity theft, it's much easier for electronic records to fall into the wrong hands. It's also easier for electronic records to be accidentally deleted or intentionally falsified. To manage provider risk and ensure patient privacy and safety, enforcement of HIPAA-mandated security requirements has increased.
What enterprises are covered by the HIPAA privacy rule?
Individual and group health plans, HMOs, long-term care insurers, employer-sponsored and multi-employer-sponsored plans, and government- and church-sponsored plans all fall under compliance. So do all other organizations that use email in connection with healthcare claims, benefits eligibility inquiries, referral authorization requests, and other HHS-specified transactions. Healthcare clearinghouses and any business that processes personal health information (PHI) also need to comply with HIPAA.
What does HIPAA have to do with email?
Everything! An email with an attached diagnosis or prescription is defined as a HIPAA-protected record. An incoming email from a patient asking for clarification regarding an explanation of benefits can be regarded as a protected record. Even a “thank you” email can be subject to HIPAA if it mentions a specific procedure.
How can providers and insurers identify, secure and archive emails that contain protected health information?
• Implement policy-based filtering to automatically scan incoming and outgoing emails and attachments for potentially protected information. Sendmail provides software or appliances for powerful policy definition and enforcement. In addition, Sendmail provides a pre-built lexicon for turnkey identification of protected information.
• Encryption. Sendmail automatically encrypts messages that contain protected information, with no user intervention required.
• Implement a quarantine and secure storage to ensure full compliance. Sendmail provides a framework to scan, capture and quarantine non-compliant and suspect messages. Once in quarantine, Sendmail enables role-based privileges for review and action.