Monday, June 30, 2008
Default password list in the system/Application
1. Generic System Passwords
The following resources provide information about default passwords for many types of systems:
http://www.phenoelit-us.org/dpl/dpl.html
http://www.cirt.net/passwords
http://www.dopeman.org/default_passwords.html
http://www.redoracle.com/index.php?option=com_password&task=rlist
http://www.virus.org/default-password/
2. Network Devices Specific (Routers, Firewalls, IPS Systems)
http://www.routerpasswords.com/
http://www.governmentsecurity.org/
3. System Specific
Oracle Specific
http://www.petefinnigan.com/default/default_password_list.htm
4. SAP Specific
http://www.petefinnigan.com/default/sap_default_users.htm
5. Cisco Specific
http://www.cisco.com/warp/public/707/cisco-sa-20040407-username.shtml
Making your web server more secure
- Don’t install any unnecessary components on the server – more code means more vulnerabilities for crackers to exploit.
- Sign up for your operating system's security notifications.
- Patch the operating system and all applications with official security fixes.
- Run up-to-date antivirus software on the web server, regardless of which operating system you are using.
IIS users
Don’t enable directory browsing unless you really need it.
Disable any FrontPage server extensions that are not being used.
Apache Users
Deny “all resources” by default and only allow the necessary functionality to each specific resource.
Log all web requests to allow you to spot suspicious activity.
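The deny-by-default pattern above might be sketched in Apache 2.2-era syntax like this (the document root path is an assumption for illustration):

```apache
# Deny everything by default
<Directory />
    Options None
    AllowOverride None
    Order deny,allow
    Deny from all
</Directory>

# Then explicitly allow only the directories you actually serve
<Directory /var/www/html>
    Order allow,deny
    Allow from all
</Directory>
```

Each additional resource then gets its own explicit grant, so nothing outside the document root is ever served by accident.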
Writing safer code
Always initialize global variables (avoiding the danger of them being initialized by a fake GET or POST request).
Turn off error reporting and log to file instead (making it more difficult for crackers to get the information they need).
Never trust any user input or output; use filter functions to strip out special SQL characters and escape sequences.
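The last point is the most important one. A minimal sketch in Python (table name, data, and the injection payload are all hypothetical) shows why parameterized queries beat string concatenation:

```python
import sqlite3

# Demo in-memory database with a single user (hypothetical schema)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # DANGEROUS: user input concatenated straight into the SQL string
    return conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # Parameterized query: the driver handles special characters for us
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

# A classic injection payload leaks every row through the unsafe version
payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # leaks [('s3cret',)]
print(lookup_safe(payload))    # returns [] -- no user has that literal name
```

The same principle applies in PHP or any other language: let the database driver do the escaping rather than building query strings by hand.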
Friday, June 27, 2008
Advantages of Open Source Backup Approach
Open Source does not use proprietary tools and data layouts when backing up the data. Open Source uses tar, dump, or star (Schily tar) as backup tools. These are readily available, industry-standard tools. Their specifications and data layouts have been stable for many years, with the promise that this will continue.
One example of Open Source backup software is Amanda, which adds value to these tools by providing several features:
- Amanda keeps an index of files for ease of recovery.
- Amanda provides its own toolset for recovering files. These tools provide ease of use and index navigation.
- It automates the running of the standard tools.
- Amanda can send the output of the standard tools across the network to a centralized backup server.
- The output of several backup clients, using the standardized tools, can be written to tape on the backup server.
- Additionally, data from Amanda dumps can be restored without the use of any Amanda commands.
Wednesday, June 25, 2008
Drawbacks of URL-based Filtering
The major drawback with such software is that it can only effectively filter content that has already been screened by the maintainers of the database. If a site is not in the database, that is not necessarily because its content is acceptable; the software is simply unable to make any useful classification of it.
Given that the Internet consists of tens of billions of pages with millions more added every day, such a system can realistically only cover a tiny percentage of existing sites, and will always be fighting a losing battle against creators of undesirable content. Some particularly naïve URL filters can also be very easily bypassed.
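The bypass problem is easy to demonstrate. A sketch in Python (the blocked hostname is hypothetical): a filter that matches the raw URL string is defeated by simple percent-encoding, while normalizing the URL first closes that particular hole:

```python
from urllib.parse import unquote

BLOCKLIST = {"badsite.example"}  # hypothetical blocked host

def naive_filter(url):
    # Naive: substring match against the raw URL string
    return any(bad in url for bad in BLOCKLIST)

def better_filter(url):
    # Decode percent-encoding and lowercase before matching
    decoded = unquote(url).lower()
    return any(bad in decoded for bad in BLOCKLIST)

raw = "http://badsite.example/page"
encoded = "http://%62adsite.example/page"  # the 'b' percent-encoded

print(naive_filter(raw))       # True  -- caught
print(naive_filter(encoded))   # False -- trivially bypassed
print(better_filter(encoded))  # True  -- normalization catches it
```

Real filters face many more evasion tricks (alternate hostnames, redirectors, IP-literal URLs), which is part of why the losing-battle point above holds.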
Thursday, June 19, 2008
Securing SSH
First, disable direct root logins over SSH in /etc/ssh/sshd_config:
PermitRootLogin no
Also ensure that privilege separation is enabled, where the daemon is split into two parts: a small part of the code runs as root and the rest runs in a chroot jail environment. Note that on older RHEL systems this feature can break some functionality.
UsePrivilegeSeparation yes
Since SSH protocol version 1 is not as secure, you may want to limit the protocol to version 2 only:
Protocol 2
You may also want to prevent SSH from setting up TCP port and X11 forwarding if you don't need it:
AllowTcpForwarding no
X11Forwarding no
Ensure the StrictModes directive is enabled which checks file permissions and ownerships of some important files in the user's home directory like ~/.ssh, ~/.ssh/authorized_keys etc. If any checks fail, the user won't be able to login.
StrictModes yes
Ensure that all host-based authentication methods are disabled; they should be avoided as a primary means of authentication.
IgnoreRhosts yes
HostbasedAuthentication no
RhostsRSAAuthentication no
Disable sftp if it's not needed:
#Subsystem sftp /usr/lib/misc/sftp-server
After changing any directives make sure to restart the sshd daemon:
/etc/init.d/sshd restart
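For reference, the directives discussed above can be collected into a single /etc/ssh/sshd_config fragment:

```
PermitRootLogin no
UsePrivilegeSeparation yes
Protocol 2
AllowTcpForwarding no
X11Forwarding no
StrictModes yes
IgnoreRhosts yes
HostbasedAuthentication no
RhostsRSAAuthentication no
```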
Wednesday, June 11, 2008
Configuring Squid as Reverse Proxy (http accelerator)
The squid.conf file, which is used to set and configure all the different options for the Squid proxy server, is usually found in /usr/local/squid/etc when Squid is installed directly from source code, or in /etc/squid when pre-installed on Red Hat Linux systems. As root, open squid.conf in your favorite text editor. If the real web server runs on a separate machine from the Squid reverse proxy, edit the following options in the squid.conf file:
http_port 80 # Port of Squid proxy
httpd_accel_host 172.16.1.115 # IP address of web server
httpd_accel_port 80 # Port of web server
httpd_accel_single_host on # Forward uncached requests to single host
httpd_accel_with_proxy on # Act as both accelerator and proxy
httpd_accel_uses_host_header off
If the web server runs on the same machine as Squid, the web server daemon must be set to run on port 81 (or any port other than 80). With the Apache web server, this can be done by changing the line "Port 80" to "Port 81" in its httpd.conf file. The squid.conf file must also be modified to redirect missed requests to port 81 of the local machine:
http_port 80 # Port of Squid proxy
httpd_accel_host localhost # IP address of web server
httpd_accel_port 81 # Port of web server
httpd_accel_single_host on # Forward uncached requests to single host
httpd_accel_with_proxy on # Act as both accelerator and proxy
httpd_accel_uses_host_header off
These options are described in greater detail below.
http_port 80
The http_port option specifies the port number where Squid will listen for HTTP client requests. If this option is set to port 80, the client will have the illusion of being connected to the actual web server. In reverse proxy mode this option should always be set to port 80.
httpd_accel_host 172.16.1.115 and httpd_accel_port 80
The options httpd_accel_host and httpd_accel_port specify the IP address and port number of the real HTTP Server, such as Apache. In our configuration, the real HTTP Web Server is on the IP address 172.16.1.115 and on port 80.
If we are using the reverse proxy for more than one web server, then we must use the word virtual as the httpd_accel_host. Uncached requests can only be forwarded to one port. There is no table that associates accelerated hosts and a destination port. When the web server is running on the same machine as Squid, set the web server to listen for connections on a different port (8000, for example), and set the httpd_accel_port option to the same value.
httpd_accel_single_host on
To run Squid with a single back-end web server, set the httpd_accel_single_host option to on. Squid will forward all uncached requests to this web server regardless of what any redirectors or Host headers say. If the Squid reverse proxy must support multiple back-end web servers, set this option to off, and use a redirector (or host table, or private DNS) to map requests to the appropriate back-end servers. Note that the mapping needs to be one-to-one between requested and back-end (redirector-returned) domain names, or caching will fail, since caching is performed using the URL returned from the redirector.
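A redirector for the multiple-back-end case can be sketched in a few lines of Python. Squid-era redirectors read one request per line on stdin (`URL client_ip/fqdn ident method`) and write the possibly rewritten URL back on stdout; the hostnames, backend IPs, and the script path in the comment are all hypothetical:

```python
import sys

# One-to-one map of accelerated domain -> backend origin server
# (hypothetical hosts and addresses)
BACKENDS = {
    "www.example.com": "172.16.1.115",
    "shop.example.com": "172.16.1.116",
}

def rewrite(url):
    for host, backend in BACKENDS.items():
        prefix = "http://" + host
        if url.startswith(prefix):
            return "http://" + backend + url[len(prefix):]
    return url  # unknown host: pass through unchanged

def main():
    # Squid feeds requests one per line; echo back the rewritten URL
    for line in sys.stdin:
        parts = line.split()
        print(rewrite(parts[0]) if parts else "")
        sys.stdout.flush()

# When run under Squid, call main(); wire it up in squid.conf with
# something like (hypothetical path):
#   redirect_program /usr/local/bin/accel_redirector.py
```

Because the map is one-to-one, the URL Squid caches under is unambiguous, satisfying the caching constraint noted above.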
httpd_accel_with_proxy on
If one wants to use Squid as both an httpd accelerator and as a proxy for local client machines, set the httpd_accel_with_proxy to on. By default, it is off. Note however that your proxy users may have trouble reaching the accelerated domains, unless their browsers are configured not to use the Squid proxy for those domains. The no_proxy option can be used to direct clients not to use the proxy for certain domains.
httpd_accel_uses_host_header off
Requests in HTTP version 1.1 include a Host header specifying the host name (or IP address) of the URL. This option should remain off in reverse proxy mode; the only time it must be set to on is when Squid is configured as a transparent proxy.
It is important to note that ACLs (access control lists) are checked before this translation takes place, so you must combine this option with strict source-address checks; for this reason you cannot use this option to accelerate multiple back-end servers.
Monday, June 9, 2008
What is Reverse Proxy Cache?
By deploying a reverse proxy server alongside web servers, sites will:
• Avoid the capital expense of purchasing additional web servers by increasing the capacity of existing servers.
• Serve more requests for static content from web servers.
• Serve more requests for dynamic content from web servers.
• Increase profitability of the business by reducing operating expenses including the cost of bandwidth required to serve content.
• Accelerate web response times and page download times, delivering a faster, better experience to site visitors.
When planning a reverse proxy implementation, the origin server's content should be written with the proxy server in mind, i.e. it should be "cache friendly". If the origin server's content is not cache aware, it will not be able to take full advantage of the reverse proxy cache.

In reverse proxy mode, the proxy server functions more like a web server with respect to the clients it services. Unlike internal clients, external clients are not reconfigured to access the proxy server. Instead, the site URL routes the client to the proxy as if it were a web server. Replicated content is delivered from the proxy cache to the external client without exposing the origin server or the private network residing safely behind the firewall. Multiple reverse proxy servers can be used to balance the load on an overtaxed web server in much the same way.

The objective of this white paper is to explain the implementation of Squid as a reverse proxy, also known as a web server accelerator. The basic concept of caching is explained, followed by the actual implementation and testing of the reverse proxy mode of Squid.
Squid is an open source, high-performance caching proxy server designed to run on Unix systems. The Squid project is funded by the National Science Foundation, and Squid is deployed at numerous ISPs and corporations around the globe. Squid can do much more than most other proxy servers can.
Reverse Proxy compared with other Proxy caches
There are three main ways that proxy caches can be configured on a network:
Standard Proxy Cache
A standard proxy cache is used to cache static web pages (HTML and images) on a machine on the local network. When a page is requested a second time, the browser returns the data from the local proxy instead of the origin web server. The browser is explicitly configured to direct all HTTP requests to the proxy cache, rather than the target web server. The cache then either satisfies the request itself or passes the request on to the target server.
Transparent Cache
A transparent cache achieves the same goal as a standard proxy cache, but operates transparently to the browser. The browser does not need to be explicitly configured to access the cache. Instead, the transparent cache intercepts network traffic, filters HTTP traffic (on port 80), and handles the request if the item is in the cache. If the item is not in the cache, the packets are forwarded to the origin web server. For Linux, the transparent cache uses iptables or ipchains to intercept and filter the network traffic. Transparent caches are especially useful to ISPs, because they require no browser setup modification. Transparent caches are also the simplest way to use a cache internally on a network (at peering hand off points between an ISP and a larger network, for example), because they don't require explicit coordination with other caches.
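On Linux, the interception step described above typically comes down to a single NAT rule; a sketch (the interface name and the default Squid port 3128 are assumptions):

```
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
```

All inbound traffic to port 80 on that interface is silently redirected to the local Squid listener, with no change to any client browser.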
Reverse Proxy Cache
A reverse proxy cache differs from standard and transparent caches in that it reduces load on the origin web server, rather than reducing upstream network bandwidth on the client side. Reverse proxy caches offload client requests for static content from the web server, preventing unforeseen traffic surges from overloading the origin server. The proxy server sits between the Internet and the web site and handles all traffic before it can reach the web server. A reverse proxy server intercepts requests to the web server and instead responds to them out of a store of cached pages. This method improves performance by reducing the number of pages actually generated "fresh" by the web server.
How reverse proxy caches work.
When a client browser makes an HTTP request, DNS routes the request to the reverse proxy machine, not the actual web server. The reverse proxy checks its cache to see if it contains the requested item. If not, it connects to the real web server and downloads the requested item to its disk cache. The reverse proxy can only serve cacheable URLs (such as HTML pages and images).
Dynamic content such as CGI scripts and Active Server Pages cannot be cached. The proxy caches static pages based on HTTP header tags returned by the web server.
The four most important header tags are:
Last-Modified: Tells the proxy when the page was last modified.
Expires: Tells the proxy when to drop the page from the cache.
Cache-Control: Tells the proxy if the page should be cached.
Pragma: Also tells the proxy if the page should be cached.
For example, by default all Active Server Pages return "Cache-Control: private". Therefore, no Active Server Pages will be cached on a reverse proxy server.
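The decision logic driven by these four headers can be sketched in Python (a simplified model; real proxies implement far more of the HTTP/1.1 caching rules):

```python
def is_cacheable(headers):
    """Simplified cacheability check based on the four header tags above."""
    cache_control = headers.get("Cache-Control", "").lower()
    pragma = headers.get("Pragma", "").lower()
    # Explicit "don't cache" signals win
    if "private" in cache_control or "no-store" in cache_control:
        return False
    if "no-cache" in cache_control or "no-cache" in pragma:
        return False
    # Pages carrying a validator or an explicit lifetime are candidates
    return any(h in headers for h in ("Last-Modified", "Expires", "Cache-Control"))

static_page = {"Last-Modified": "Mon, 02 Jun 2008 10:00:00 GMT",
               "Expires": "Tue, 03 Jun 2008 10:00:00 GMT"}
asp_page = {"Cache-Control": "private"}  # the ASP default described above

print(is_cacheable(static_page))  # True
print(is_cacheable(asp_page))     # False
```

This is why the static page is served from the reverse proxy after the first hit, while every ASP request still goes to the origin server.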
Monday, June 2, 2008
Current DNS Attack Vectors
1. MITM
· Spoofing data is trivial
· Single UDP packet request/response
· Exists at every point along the resolution chain
2. ID Guessing
· Guess the 16-bit nonce and the possibly randomly selected source port
· Works on recursive resolvers and stubs
3. Birthday Attack
· Subset of ID guessing
· Send multiple requests for the same name to the target's recursive resolver, all targeting the same authoritative server.
· Send your poisoning replies, which can match any of the outstanding queries.
· 50% success with about 300 packets, whereas conventional poisoning needs about 32K packets for 50% success.
· Mitigated in later BIND 9 releases by aggregating identical outstanding queries.
· Made much more difficult by query source port randomization in djbdns.
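The packet counts in the bullets above can be sanity-checked with the standard birthday approximation over the 16-bit (65,536-value) transaction-ID space. In this simple model the birthday figure matches almost exactly, while blind conventional guessing comes out slightly under 50% at 32K packets:

```python
import math

IDS = 65536  # 16-bit DNS transaction-ID space

def birthday_success(k):
    # Probability that k spoofed replies collide with any of k
    # outstanding queries for the same name (birthday approximation)
    return 1 - math.exp(-k * k / (2.0 * IDS))

def conventional_success(n):
    # Probability that n blind guesses hit one fixed transaction ID
    return 1 - (1 - 1.0 / IDS) ** n

print(round(birthday_success(300), 2))        # ~0.5 with only 300 packets
print(round(conventional_success(32768), 2))  # ~0.39 after 32K blind guesses
```

Randomizing the source port (as djbdns does) multiplies the effective ID space far beyond 65,536, which is why it makes both attacks dramatically harder.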
4. Name Chaining
· Cache poisoning attack only, doesn’t affect stub resolvers.
· Must use one of the former methods to insert it.
· Differs from conventional poisoning attacks in that only requested information is returned but with falsified answers.
5. Rogue DNS Servers
· DNS servers usually assigned by DHCP
· Surveys have found rogue DNS servers that attempt to poison clients by returning bogus information.
6. DOS attacks
· Attacks against the DNS servers themselves.
· Attacking other systems via DNS amplification.
· Both of these attacks are made easier by DNSSEC.
7. Information Removal
· Special case of MITM problem.
· Mitigated by DNS denial of existence.