The Fedora Project, a Red Hat, Inc.-sponsored and community-supported open source collaboration project, has announced the availability of Fedora 11, the latest version of its free open source operating system.
Fedora 11's feature set provides improvements in virtualization, including an upgraded interactive console, a redesigned virtual machine guest creation wizard and better security with SELinux support for guests. There are also numerous desktop improvements such as automatic font and content handler installation using PackageKit, better fingerprint reader support, and an updated input method system for supporting international language users.
Fedora, which now has almost 29,000 project members, functions as a kind of community-oriented R&D lab. The project fulfills several purposes and "one of them is to give Red Hat a place to contribute code and have it integrated into a release that is in very wide distribution to millions and millions of users," Paul Frields, Fedora project leader at Red Hat, tells Linux Executive Report.
The Fedora Project aims to release a new complete, general-purpose, no-cost operating system approximately every six months. "If you look at that distribution, what you see in there are the latest technologies that are beaten into shape by our community; and bug fixes and all sorts of improvements are applied, and that resulting platform is something that anybody can install and use," says Frields. The project allows Red Hat to give back and interface closely with the open source community, and is also used by Red Hat engineers as a platform for participation in other open source communities.
Looking at Fedora today gives you an idea of where the Red Hat Enterprise Linux product is headed in the future, says Frields. Somewhere down the line, Red Hat looks at the Fedora product and "more or less makes a snapshot" of it, and starts to do its intense QA processes and work with hardware and software vendors for certifications to make sure that partners and customers get the features they need in an enterprise-ready product. "Eventually, what comes out at the end is Red Hat Enterprise Linux."
By separating the two segments of end users, the businesses versus the consumers and hobbyists, there is "a lot more clarity in the mission for each product," Frields observes.
Monday, June 22, 2009
Monday, June 15, 2009
Linux 2.6.30's best five features
Windows and Mac OS get major updates only every few years. Windows 7 arrives on October 22nd and Apple's Snow Leopard will show up in September. The Linux kernel, the heart of Linux distributions, however, gets updated every few months.
What this means for you is that Windows and Mac OS take large, slow steps, while Linux is constantly evolving. Linux's changes may not be as big from version to version, but they tend to be more thoroughly tested and stable. What most users will like in this release starts with a faster boot-up.
1. Fast boot. Older versions of Linux spend a lot of time scanning for hard drives and other storage devices, and then for the partitions on each of them. This eats up a lot of time because the kernel looks for them one at a time. With 2.6.30, however, the rest of the kernel continues booting instead of waiting for this scan to finish. At the same time, the storage devices are checked in parallel, two or more at a time, to further improve the system's boot speed.
There are other efforts afoot to speed up Linux's boot times. The upshot of all this work will be to keep Linux the fastest booting operating system well into the future.
2. Storage improvements. Speaking of storage devices, there's a long laundry list of file system improvements. I won't go into most of those in detail. Suffice it to say that no matter what file system you use, either locally or on a network, chances are that its performance and stability have been improved. For a high-level view of these changes, see the Kernel Newbies 2.6.30 reference page.
I will mention one issue though simply because, as Jonathan Corbet, Linux kernel developer and journalist put it, "Long, highly-technical, and animated discussion threads are certainly not unheard of on the linux-kernel mailing list. Even by linux-kernel standards, though, the thread that followed the 2.6.29 announcement was impressive." You can say that again.
The argument... ah, discussion was over how file systems and block I/O (input/output) using the fsync() function should work in Linux. The really simple version is that fsync() has defaulted to forcing the file system journal and related file data to be written to disk immediately. Most I/O schedulers, though, prioritize reads over writes. On a non-journaling file system that's not a big deal, but a journal write has to go through immediately, and it can take a lot of time while it's doing it.
On Ext3, probably the most widely used Linux file system, the result is that the file system is very stable, because it makes sure those journal writes go through, but at the same time it's very slow, once more because of those journal writes. You can argue almost endlessly over how to handle this problem, or even argue that Ext3's fsync() behavior is perfectly fine. Linus Torvalds, however, finally came down on the side of making the writes faster.
The arguments over how to handle fsync() continue, though, along with side discussions on how to handle file reads, writes, and creation. For users, most of this doesn't matter; developers who get down and dirty with file systems, though, should continue to pay close attention.
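To get a feel for the cost being argued about, you can compare a plain buffered write with one that forces the data to disk. This is only a rough illustration using GNU dd's conv=fsync flag (which makes dd call fsync() on the output file before exiting); the file names are throwaways.

```shell
# Plain buffered write: the data may still sit in the page cache
# when dd exits, so this returns quickly.
dd if=/dev/zero of=/tmp/nofsync.img bs=4k count=256 2>/dev/null

# Same write, but conv=fsync forces an fsync() on the output file
# before dd exits, so the data (and any journal updates) must hit
# the disk first.
dd if=/dev/zero of=/tmp/fsync.img bs=4k count=256 conv=fsync 2>/dev/null

ls -l /tmp/nofsync.img /tmp/fsync.img
```

On an Ext3 file system with default mount options, the second command typically takes noticeably longer, which is exactly the behavior the mailing-list thread was arguing over.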
3. Ext4 tuning. Linux's new Ext4 file system has been in the works for several years now. It's now being used in major Linux distributions like Ubuntu 9.04, and it's working well. That said, Ext4 has gotten numerous minor changes to improve its stability and performance.
I've been switching my Linux systems to Ext4 over the last few months. If you've been considering making the switch, wait until your distribution adopts the 2.6.30 kernel, and give it a try. I think you'll be pleased.
4. Kernel Integrity Management. Linux is more secure than most other operating systems. Notice, though, that I say it's more secure. I don't say, and I'd be an idiot if I did, that it's completely secure. Nothing in this world is. The operating system took a big step forward in making it harder for any would-be cracker to break it, though, with the introduction of Integrity Management.
This is an old idea that's finally made it into the kernel. What it boils down to is that the kernel checks the integrity of files and their metadata when they're called by the operating system, using EVM (extended verification module) code. If a file appears to have been tampered with, the system can lock down its use and notify the administrator that mischief is afoot.
While SELinux (Security-Enhanced Linux) is far more useful for protecting most users, I can see Integrity Management being very handy for Linux devices that don't get a lot of maintenance, such as Wi-Fi routers. Attacks on such devices are beginning to happen, and a simple way to lock them down if their files have been changed strikes me as a really handy feature.
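The kernel-level mechanism is transparent once enabled, but you can get a feel for the underlying idea with plain userspace tools. The sketch below builds a checksum manifest for a directory and verifies it later; this is the same detect-tampering principle, applied by hand, and the paths and file contents are only examples.

```shell
# Build a manifest of checksums for the files we want to watch.
mkdir -p /tmp/watched
echo "config-v1" > /tmp/watched/app.conf
sha256sum /tmp/watched/* > /tmp/manifest.sha256

# Later: verify. sha256sum -c exits non-zero if any file changed,
# which is the point at which an integrity-checking system would
# deny access and raise an alert.
sha256sum -c /tmp/manifest.sha256
```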
5. Network file system caching. How do you speed up a hard drive, or anything else with a file system on it for that matter? You use a cache. Now, with the adoption of FS-Cache, you can use caching with networked file systems.
Right now it only works with NFS (Network File System) and AFS (Andrew File System). These network file systems tend to be used in Unix and Linux-only shops, but there's no reason why you can't use FS-Cache on top of any file system that's network accessible.
I tend to be suspicious of network caching since it's all too easy to lose a network connection, which means you can be left with a real mess between what the server thinks has been changed, added, and saved and what your local cache thinks has been saved. FS-Cache addresses this problem of cache coherency by using journaling on the cache so you can bring the local and remote file systems back into agreement.
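In practice, turning FS-Cache on for an NFS mount mostly comes down to running the cachefilesd daemon and adding the fsc mount option. The fragment below is a sketch rather than a real deployment; the server name and export path are placeholders.

```
# /etc/fstab — note the 'fsc' option, which tells the NFS client to
# cache file data through FS-Cache (requires cachefilesd to be running,
# with its cache directory configured in /etc/cachefilesd.conf):
nfsserver:/export/home  /home  nfs  rw,hard,intr,fsc  0  0
```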
While 2.6.30 may not be the most exciting Linux kernel release, it does include several very solid and important improvements. Personally, I plan on switching my servers over to 2.6.30-based distributions as soon as they become available. If your concerns are mostly with the Linux desktop, though, I wouldn't be in that much of a hurry; most of the updates matter more to server administrators than to desktop users.
Thursday, June 11, 2009
Squid Error : Name error: the domain name does not exist
Problem (example):
The requested URL could not be retrieved
While trying to retrieve the URL: http://intranet/
The following error was encountered:
Unable to determine IP address from host name for http://intranet
The dnsserver returned:
Name Error: The domain name does not exist.
This means that:
The cache was not able to resolve the hostname presented in the URL.
Check if the address is correct.
Solution:
append_domain : This directive helps Squid turn single-component hostnames into fully qualified domain names. For example, http://www/ becomes www.example.com/. This is especially important if you are participating in a cache hierarchy.
Add the following directive to your squid.conf file to solve the above problem:
append_domain .domainname.com
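For example, if your internal domain were example.com (a placeholder here), the change would look like this. The commands below work on a copy of the file in /tmp purely for illustration; on a live system you would edit the real squid.conf (the path varies by distribution, /etc/squid/squid.conf is common).

```shell
# Work on a demonstration copy of squid.conf; fall back to an empty
# file if Squid isn't installed on this machine.
cp /etc/squid/squid.conf /tmp/squid.conf 2>/dev/null || touch /tmp/squid.conf

# Append the directive so single-component hostnames get the domain added.
echo "append_domain .example.com" >> /tmp/squid.conf

# Confirm the directive is present:
grep '^append_domain' /tmp/squid.conf
```

After changing the real squid.conf, run `squid -k reconfigure` so the proxy re-reads its configuration without a restart.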
Tuesday, June 9, 2009
Setting the SUID/SGID bits
When the SUID (set-user-ID) bit is set on an executable, the program runs with the permissions of the file's owner rather than those of the user who launches it. That is, if you own an executable and another person runs it, it runs with your permissions, not his. The default is that a program runs with the permissions of the person executing the binary.
The SGID (set-group-ID) bit works the same way as SUID, except that the program runs with the permissions of the file's group. It can also be set on directories, making files or subdirectories created inside the SGID directory inherit a common group ownership.
Note: Making SUID and SGID programs completely safe is very difficult (or maybe impossible), so if you are a system administrator, it is best to consult a security professional before granting access rights to root-owned applications by setting the SUID bit. As a home user (where you are both the normal user and the superuser), the SUID bit helps you do a lot of things easily without having to log in as the superuser every now and then.
Setting the SUID bit on a file:
Suppose I have an executable called "killprocess" and I need to set the SUID bit on it. Go to the command prompt and issue: chmod u+s killprocess
Now check the permissions on the file with ls -l killprocess and observe the "s" that has been added for the SUID bit:
-rwsr-xr-x 1 root root 6 Jun 7 12:16 killprocess
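The same result can be had with numeric modes, where SUID is the 4 in the leading digit. A quick self-contained demo, using a throwaway file in /tmp instead of a real binary:

```shell
# Create a dummy file to stand in for an executable.
touch /tmp/killprocess
chmod 0755 /tmp/killprocess

# Symbolic form: add the SUID bit. The numeric equivalent
# would be: chmod 4755 /tmp/killprocess
chmod u+s /tmp/killprocess

# Show both the symbolic and octal permissions.
stat -c '%A %a' /tmp/killprocess    # -rwsr-xr-x 4755
```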
Setting the SGID bit on the file:
Go to the command prompt and issue: chmod g+s killprocess
This will set the SGID bit on the same file. Check the permissions with: ls -l killprocess
-rwsr-sr-x 1 root root 6 Jun 7 12:16 killprocess
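The shared-group-ownership behavior mentioned above is easy to demonstrate on a directory. In this sketch the directory name is made up; on a real system you would also chgrp the directory to an existing project group so new files inherit it.

```shell
# Create a shared directory and set the SGID bit (the 2 in 2755).
mkdir -p /tmp/shared
chmod 2755 /tmp/shared

# Files created inside now inherit the directory's group
# instead of the creator's primary group.
touch /tmp/shared/report.txt

# Note the 's' in the group position.
stat -c '%A' /tmp/shared    # drwxr-sr-x
```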
Wednesday, June 3, 2009
Information Security Incident Rating Categories
Category I: Unauthorized Root/Administrator Access
A Category I event occurs when an unauthorized party gains 'root' or 'administrator' control of a client computer. Unauthorized parties include human adversaries and automated malicious code, such as a worm. On UNIX-like systems, the 'root' account is the 'super-user,' generally capable of taking any action desired by the unauthorized party. (Note that so-called 'Trusted' operating systems (OS), like Sun Microsystems' 'Trusted Solaris,' divide the powers of the root account among various operators. Compromise of any one of these accounts on a 'Trusted' OS constitutes a category I incident.) On Windows systems, the 'administrator' has near complete control of the computer, although some powers remain with the 'SYSTEM' account used internally by the OS itself. (Compromise of the SYSTEM account is considered a category I event as well.) Category I incidents are potentially the most damaging type of event.
Category II: Unauthorized User Access
A Category II event occurs when an unauthorized party gains control of any non-root or non-administrator account on a client computer. User accounts include those held by people as well as applications. For example, services may be configured to run or interact with various non-root or non-administrator accounts, such as 'apache' for the Apache web server or 'IUSR_machinename' for Microsoft's Internet Information Services (IIS). Category II incidents are treated as though they will quickly escalate to Category I events. Skilled attackers will elevate their privileges once they acquire user status on the victim machine.
Category III: Attempted Unauthorized Access
A Category III event occurs when an unauthorized party attempts to gain root/administrator or user level access on a client computer. The exploitation attempt fails for one of several reasons. First, the target may be properly patched to reject the attack. Second, the attacker may find a vulnerable machine, but he may not be sufficiently skilled to execute the attack. Third, the target may be vulnerable to the attack, but its configuration prevents compromise. (For example, an IIS web server may be vulnerable to an exploit employed by a worm, but the default locations of critical files have been altered.)
Category IV: Successful Denial of Service Attack
A Category IV event occurs when an adversary takes damaging action against the resources or processes of a target machine or network. Denial of service attacks may consume CPU cycles, bandwidth, hard drive space, user's time, and many other resources.
Category V: Poor Security Practice or Policy Violation
A Category V event occurs when an analyst detects a condition which exposes the network and/or systems on the network to an unnecessary risk of exploitation. For example, should an analyst discover that a domain name system server allows zone transfers to all Internet users, he would classify the incident as a category V event. (Zone transfers provide complete information on the host names and IP addresses of client machines.) Violations of a client's security policy also constitute a category V incident. Should a client forbid the use of peer-to-peer file sharing applications, detections of Napster or Gnutella traffic will be reported as category V events.
Category VI: Reconnaissance/Probes/Scans
A Category VI event occurs when an adversary attempts to learn about a target system or network, with the presumed intent to later compromise that system or network. Reconnaissance events include port scans, enumeration of NetBIOS shares on Windows systems, inquiries concerning the version of applications on servers, unauthorized zone transfers, and similar activity. Category VI activity also includes limited attempts to guess user names and passwords. Sustained, intense guessing of user names and passwords would be considered category III events if unsuccessful.
Category VII: Virus Infection
A Category VII event occurs when a client system becomes infected by a virus. Note the emphasis here is on the term virus, as opposed to a worm. Viruses depend on one or both of the following conditions: (1) human interaction is required to propagate the virus; (2) the virus must attach itself to a 'host' file, such as an email message, Word document, or web page. Worms, on the other hand, are capable of propagating themselves without human interaction or host files. A compromise caused by a worm would qualify as a category I or II event.
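As a toy illustration of how these categories might be applied mechanically, the script below maps a few hypothetical log keywords to category labels. The log entries and keywords are invented for this sketch; real incident classification requires analyst judgment, not keyword matching.

```shell
# Three hypothetical one-line event log entries (invented for this sketch).
printf '%s\n' \
  "root shell obtained on host-a" \
  "port scan from 10.0.0.5" \
  "virus signature match in mail attachment" > /tmp/events.log

# Map keywords to incident categories and save the labeled output.
while read -r line; do
  case "$line" in
    *"root shell"*) echo "Category I: $line" ;;
    *"port scan"*)  echo "Category VI: $line" ;;
    *virus*)        echo "Category VII: $line" ;;
    *)              echo "Unclassified: $line" ;;
  esac
done < /tmp/events.log > /tmp/classified.log

cat /tmp/classified.log
```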