Archive for the ‘Testing’ Category

Linux File Systems: Ext2 vs Ext3 vs Ext4

ext2, ext3 and ext4 are all filesystems created for Linux. This article explains the following:

  • High level difference between these filesystems.
  • How to create these filesystems.
  • How to convert from one filesystem type to another (a quick command sketch follows below).

Follow the link to read the whole article at http://www.thegeekstuff.com
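
In the meantime, here is a minimal sketch of the commands involved. The device name /dev/sdb1 is a placeholder; these commands destroy or modify data, so run them only on an unmounted, disposable partition, and see the full article for the complete procedure.

    # Create each filesystem type (overwrites whatever is on the partition)
    mkfs.ext2 /dev/sdb1
    mkfs.ext3 /dev/sdb1
    mkfs.ext4 /dev/sdb1

    # Convert ext2 to ext3 by adding a journal
    tune2fs -j /dev/sdb1

    # Convert ext3 to ext4 by enabling the newer on-disk features,
    # then let e2fsck bring the metadata in line with them
    tune2fs -O extents,uninit_bg,dir_index /dev/sdb1
    e2fsck -fp /dev/sdb1

After an ext3-to-ext4 conversion, remember to change the filesystem type in /etc/fstab before remounting the partition.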

Nikto2 – comprehensive web server scanner

Nikto2 | CIRT.net.

Nikto is an Open Source (GPL) web server scanner which performs comprehensive tests against web servers for multiple items, including over 6400 potentially dangerous files/CGIs, checks for outdated versions of over 1000 servers, and version-specific problems on over 270 servers. It also checks for server configuration items such as the presence of multiple index files and HTTP server options, and will attempt to identify installed web servers and software. Scan items and plugins are frequently updated and can be automatically updated.
Nikto is not designed as an overly stealthy tool. It will test a web server in the quickest time possible, and is fairly obvious in log files. However, there is support for LibWhisker’s anti-IDS methods in case you want to give them a try (or test your IDS).
Not every check is a security problem, though most are. Some items are “info only” checks that flag things which may not be security flaws, but which the webmaster or security engineer may not know are present on the server. These items are usually marked appropriately in the information printed. There are also some checks for unknown items which have been observed being scanned for in log files.
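
As a quick, hedged illustration, a basic scan and an anti-IDS run might look like the following. The hostname is a placeholder, and flag behavior can vary between releases, so verify against your installed version with nikto -H:

    # Basic scan of a single host on port 80
    nikto -h example.com -p 80

    # Write an HTML report and enable one of LibWhisker's IDS-evasion techniques
    nikto -h example.com -output report.html -Format htm -evasion 1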

Nikto is written by Chris Sullo and David Lodge.

Open source web-crawler: Web-Harvest

Web-Harvest Project Home Page.

Web-Harvest is an Open Source Web Data Extraction tool written in Java. It offers a way to collect desired Web pages and extract useful data from them. To do that, it leverages well-established techniques and technologies for text/XML manipulation such as XSLT, XQuery and Regular Expressions. Web-Harvest mainly focuses on HTML/XML-based web sites, which still make up the vast majority of Web content. At the same time, it can easily be supplemented by custom Java libraries to augment its extraction capabilities.
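
As a rough sketch of how the tool is driven: the distribution ships as a runnable jar that takes an XML configuration describing the extraction pipeline. The jar name, file names, and parameter syntax below are assumptions that vary by release, so check the project home page for the exact form:

    # Run a scraping pipeline described in scrape.xml (names are placeholders;
    # the config=/workdir= parameter style depends on the Web-Harvest release)
    java -jar webharvest.jar config=scrape.xml workdir=./output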

The process of extracting data from Web pages is also referred to as Web Scraping or Web Data Mining. The World Wide Web, as the largest database, often contains various data that we would like to consume for our needs. The problem is that this data is in most cases mixed together with formatting code, which makes the content human-friendly but not machine-friendly. Manual copy-and-paste is error-prone, tedious and sometimes even impossible. Web software designers usually discuss how to make a clean separation between content and style, using various frameworks and design patterns to achieve that. Either way, some kind of merge usually occurs on the server side, so that a bundle of HTML is delivered to the web client.
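
To make that concrete, here is a minimal shell sketch (deliberately much simpler than Web-Harvest itself) that fetches a page and strips away the formatting, keeping a single piece of data. The URL and XPath expression are placeholders:

    # Fetch a page and extract only the <title> text, discarding the markup
    # (2>/dev/null hides the HTML parser's warnings about malformed markup)
    curl -s http://example.com/ | xmllint --html --xpath '//title/text()' - 2>/dev/null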