Monday, January 29, 2007

Who Killed the Webmaster?

Back in the frontier days of the web–when flaming skulls, scrolling marquees, and rainbow divider lines dominated the landscape–"Webmaster" was a vaunted, almost mythical, title. The Webmaster was a techno-shaman versed in the black arts needed to make words and images appear on this new-fangled Information Superhighway. With the rise of the Webmaster coinciding with the explosive growth of the web, everyone predicted the birth of a new, well-paying, and in-demand profession. Yet in 2007, this person has somehow vanished; even the term is scarcely mentioned. What happened? A decade later I'm left wondering "Who killed the Webmaster?"

Suspect #1: The march of technology


By 2000, I think every person in the developed world had a brother-in-law who created websites on the side. Armed with FrontPage and a pirated copy of Photoshop, he'd charge a reasonable fee per page (though posting more than three images cost extra).

Eventually the web hit equilibrium and just having a website didn't make a company hip and cutting-edge. Now management demanded that their website look better than the site ranked immediately above theirs in search results. And as expensive as the sites were, ought they not "do something" too? Companies increasingly wanted an exceptional website requiring a sophisticated combination of talents to pull off. HTML and FTP skills, as useful as they had been, were no longer sharp enough tools in the Webmaster's toolbox. Technologies such as CSS and multi-tier web application development rapidly made WYSIWYG editors useless for all but ordinary websites. And with the explosion of competition and possibilities on the Internet, few businesses were willing to pay for "ordinary".

In 1995, the "professional web design firm" was a single, talented person working from home. Today it's a diverse team of back-end developers, front-end developers, graphic artists, UI designers, database and systems administrators, search engine marketing experts, analytics specialists, copywriters, editors, and project managers. The industry has simply grown so specialized, so quickly, that one person can hardly be a master of more than a single strand in the web.

Suspect #2: Is it the economy, stupid?


Then again, perhaps the disappearance of the Webmaster can better be explained by an underwhelming economy rather than overwhelming technology. Riding high on the bull market of the late '90s, companies were increasingly willing to assume more risk to reach potential customers. This was especially true of small businesses, which traditionally have minuscule advertising and marketing budgets. Everyone wanted a piece of the Internet pie and each turned to the Webmaster to deliver. More than just a few Webmasters made a respectable living by cranking out a couple of $500 websites every week.

Once the bubble burst in early 2000, the dot-com hangover left many small businesses clutching their heads and checking their wallets. As companies braced themselves to merely maintain what they already had, the first cut inevitably was to marketing and advertising. In-house Webmasters were summarily let go, their duties hastily transferred to an already overworked office manager. Freelance Webmasters were hit even harder as business owners struggled to first take care of their own. The gold rush had crumbled to fools' gold even faster than it had started.

While a few Webmasters were able to weather the storm—mostly those with either extraordinary skills or a gainfully employed spouse—the majority were forced to abandon their budding profession and return to the world of the mundane.

Suspect #3: The rise of Web 2.0


Another strong possibility is that the Internet has simply evolved beyond the Webmaster. "Web 2.0" is the naked emperor of technological neologisms; we all nod our heads at the term but then stammer when pressed for a definition. As far as I can tell, Web 2.0 is mostly about rounded corners, low-contrast pastel colors, and domain names with missing vowels. But it also seems to be about an emphasis on social collaboration. This may seem like a no-brainer given the connectedness of the Internet itself; however, thinking back to Web 1.0, there was a distinct lack of this philosophy. Web 1.0 was more an arms race to build "mindshare" and "eyeballs" in order to make it to the top of the hill with the most venture capital. Even the Web 1.0 term "portal" conjures up an image of Lewis Carroll's Alice tumbling down a hole and into an experience wholly managed by the resident experts–the Webmasters. Despite its power and its promise to be so much more, the web wasn't much different from network television or print. Even the most interesting and successful business models of the Web 1.0 era could have been accomplished years prior with an automated telephone system.

It wasn't until after the failure of the initial experiment that people began to rethink the entire concept of the Internet. Was the Webmaster as gatekeeper really necessary? If we all have a story to share, why can't everyone contribute to the collective experience? Perhaps it was the overabundance of Herman Miller chairs, but Web 1.0 was inarguably about style over substance. Yet, as anyone who's ever visited MySpace can attest, today content is king. With all of us simultaneously contributing and consuming on blogs, MySpace, YouTube, Flickr, Digg, and Second Life, who needs a Webmaster anymore?

Sing and Search the Internet

Search engines are growing more and more powerful, letting you search for all kinds of information including jobs, blog posts, videos, news, and even products to buy. At this time, Google is the best-known search engine on the internet, continuously challenged by Yahoo, Ask, and Live Search. From time to time, other companies try to design innovative services meant to compete with those provided by the giant firms.

Midomi is one of the more interesting of these services, designed to deliver a new search technology that is currently unavailable on the best-known search engines.

"midomi is the ultimate music search tool because it is powered by your voice. Sing, hum, or whistle to instantly find your favorite music and connect with a community that shares your musical interests. Give it a try. It's truly amazing! Our mission is to build the most comprehensive database of searchable music. You can contribute to the database by singing in midomi's online recording studio in any language or genre. The next time anyone searches for a song, your performance might be the top result," says the description of the service.

The idea behind midomi is simple: go to the service's official website, press the Start Voice Search button, and sing your song. The service records your voice, looks for songs that match your query, and returns the results. Midomi is not only a search technology but also a large community of users who continuously post new recordings of songs from all around the world, letting you search against predefined or prerecorded sounds.
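
midomi's matching engine is proprietary, but the general idea of query-by-humming is easy to sketch: reduce both the hummed query and each stored recording to a pitch contour (the sequence of up/down/repeat moves between notes) and rank songs by edit distance between contours. Everything below, from the contour alphabet to the toy song database, is invented for illustration and is not midomi's algorithm:

# Toy query-by-humming matcher: compare melodies by pitch contour.
# Purely illustrative; midomi's real matching technology is proprietary.

def contour(notes):
    """Reduce a note sequence (MIDI numbers) to Up/Down/Repeat moves."""
    return "".join(
        "U" if b > a else "D" if b < a else "R"
        for a, b in zip(notes, notes[1:])
    )

def edit_distance(s, t):
    """Classic Levenshtein distance between two contour strings."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                 # delete from s
                           cur[j - 1] + 1,              # insert into s
                           prev[j - 1] + (cs != ct)))   # substitute
        prev = cur
    return prev[-1]

# A tiny hypothetical "database" of melodies as MIDI note numbers.
songs = {
    "Twinkle Twinkle": [60, 60, 67, 67, 69, 69, 67],
    "Ode to Joy":      [64, 64, 65, 67, 67, 65, 64, 62],
}

def search(hummed_notes):
    q = contour(hummed_notes)
    return sorted(songs, key=lambda name: edit_distance(q, contour(songs[name])))

# An off-key hum still matches, because only the contour is compared.
print(search([50, 50, 57, 57, 59, 59, 57]))  # ['Twinkle Twinkle', 'Ode to Joy']

Real systems also have to handle rhythm, tempo, and noisy pitch tracking, but the contour trick captures why even an out-of-tune hum can find the right song.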

If you want to test this exciting search engine, you should follow this link and try not to blame your singing talent if the service doesn't return any relevant results.

What is Better than Sex?

I guess you would do anything to find out what it is that makes people forget about sex, wouldn't you? I felt the same, but after finding out what that thing was, I was left feeling somewhat misled. I suppose you will feel the same but, what the heck, here it goes.

According to a survey conducted by the UK mobile phone retailer Dial-a-Phone, 30 percent of men and an even bigger share of women (approximately 42.5 percent) would answer their cellphone while having sex.

If that is not a clear sign that they would rather be talking with their friends, family members, or who knows what other individuals, then what is? And if the survey is accurate, things seem to be heading down the same road the dinosaurs took :). Where will we be, and how many of us will still think of sex, if the mobile phone manufacturers keep releasing better and better handsets every day?

Giving credit to the guys who conducted this study, you should also know that 24 percent of the participating women also declared they would rather give up sex than their handsets for a whole month. I suppose this would be a pretty good way for monasteries to make sure the nuns keep their celibacy vows: give a nun a couple of hundred cellphones to keep her busy and the danger of her wanting some male attention drops dramatically :).

Flic Everett, the self-described relationship expert from Dial-a-Phone, expressed total disagreement with the whole thing, advising: "never ever answer your phone during sex...There's a time and a place for mobile phones! Turning them off occasionally or even switching them to silent will make your loved-one feel as though they have your attention."

Furthermore, besides being something way better than sex in the minds of some troubled humans from the United Kingdom, phones also play an important part for some of us when starting a new relationship, or even when wanting to end one without having to deal with our "worse" half's discontent.

As the relationship expert from Dial-a-Phone put it: "singletons consider their mobile phone their most valuable dating weapon - arranging dates and getting to know prospective partners through text messaging (sending on average 12 before they meet up) to relaying the success of the date during the event to their mates (four out of five contact a mate during a date)."

To conclude all of the above, I guess one single word sums up very well where we are heading if cellphones keep getting better and things follow the pattern discovered by Dial-a-Phone: EXTINCTION! (You remembered the dinosaurs, didn't you? :) They probably also had cellphones, or at least something very similar.)

Inside the Lucasfilm datacenter

"Where can you find a (rhetorical) 11.38 petabits per second bandwidth? It appears to be inside the Lucasfilm Datacenter. At least, that is the headline figure mentioned in this report on a tour of the datacenter. The story is a bit light on the down-and-dirty details, but mentions a 10 gig ethernet backbone (adding up the bandwidth of a load of network connections seems to be how they derived the 11.38 petabits p/s figure. In that case, I have a 45 gig network at home.) Power utilization is a key differentiator when buying hardware, a "legacy" cycle of a couple of months, and 300TB of storage in a 10.000 square foot datacenter. To me, the story comes across as somewhat hyped up — "look at us, we have a large datacenter" kind of thing, "look how cool we are". Over the last couple of years, I have been in many datacenters, for banks, pharma and large enterprise to name a few, that have somewhat larger and more complex setups."

Debian Gets Win32 Installer

"Debian hacker Robert Millan has just announced the availability of a Debian-Installer Loader for win32. The program, inspired by Ubuntu's similar project, features 64-bit CPU auto-detection, download of linux/initrd netboot images, and chainloading into Debian-Installer via grub4dos. The frontend site goodbye-microsoft.com/ has been set up for advocacy purposes. Here are some screenshots."

Google TV - An Elaborate Hoax

A heavily produced YouTube video from Mark Erickson at "Infinite Solutions" shows users how to get in on the super-secret (and non-existent) Google TV beta. It involves sending yourself an email and then logging in and out of Gmail multiple times until a TV icon appears in the Gmail logo. In the comments to the video, some users report trying to log in and out of Gmail hundreds of times without it working. This is almost certainly a fake, as Google Blogoscoped reports. Erickson then posted a second video to prove its authenticity, saying that Google had increased the login requirements "substantially". A+ for effort and originality. Both videos are below.


Sunday, January 28, 2007

Intel, IBM Announce Chip Breakthrough

Intel announced a major breakthrough in microprocessor design Friday that will allow it to keep on the curve of Moore's Law a while longer. IBM, working with AMD, rushed out a press release announcing essentially equivalent advances. Both companies said they will be using alloys of hafnium as insulating layers, replacing the silicon dioxide that has been used for more than 40 years. The New York Times story (and coverage from the AP and others) features he-said, she-said commentary from dueling analysts. If there is a consensus, it's that Intel is 6 or more months ahead for the next generation. IBM vigorously disputes this, saying that they and AMD are simply working in a different part of the processor market — concentrating on the high-end server space, as opposed to the portable, low-power end.

Intel announces 45nm breakthrough


1/27/2007 2:17:45 PM, by Jon Stokes



It's a shame that Intel happened to pick a Saturday when I'm trying to move to announce major news about their upcoming 45nm process. This means that I can't do more than quickly summarize what was announced, but I can point you to two good articles that can take you further if you want to know more.

In a nutshell, Intel has announced a pair of advances in their 45nm process that will cut down drastically on leakage current (see below for more), enabling the company to make the transistors on their next generation of chips much smaller without worrying so much about current bleeding through when the transistor is in the "off" position. The first of these advances is the use of a high-k gate dielectric, a first in commercial semiconductor production. The dielectric is essentially an insulator that can now be made very thin without allowing electrical current to seep through (due to quantum tunneling) when the transistor is in the "off" position.

To complement this high-k dielectric, Intel has also moved to a metal gate electrode. This metal gate electrode is more compatible with the new hafnium-based dielectric than the polysilicon electrode used in previous process steps.

The new 45nm process will be used for Intel's forthcoming Penryn microarchitecture, which is basically just a die shrink of Woodcrest with more cache.

According to David Kanter at RealWorldTech, IBM and AMD don't plan to move to a similar high-k dielectric until the 32nm process node, a decision that may put them at a disadvantage versus Intel at 45 nanometers. Kanter summarizes the situation as follows:
The high-k dielectrics and metal gates will give Intel an advantage on their 45nm process. However, this transistor level advantage will not directly translate to microprocessor performance, without corresponding advances or clever engineering to address wire delay. It will be up to Intel's MPU designers and marketers to make the most of these benefits, by increasing clock speed or reducing power. The real question is whether the combination of high-k dielectrics and metal gates will shut the window of opportunity for AMD, when they introduce their own 45nm process in mid to late 2008, and only time will tell where the chips will fall.

For an in-depth look at the new announcements, be sure to head over to RWT and read David's article. If you want a more high-level overview with more background and big-picture perspective than I've provided here, John Markoff at the New York Times has a good piece on it that's worth checking out. Also, Robert Scoble has a video tour of the new fab if you're interested in seeing where all the magic happens.

Leakage current and clockspeed: a primer


As you read up on this announcement, many of you will probably want a refresher on the relationship between feature size, leakage current, power dissipation, and clockspeed. To help you out, I'm going to paste in a short discussion of power density from one version of Chapter 12 of my book, Inside the Machine. (I'm not sure if this is the final copy that's in the book or not, though, since I'd have to hunt around and compare this text with what's in the proofs.)

Power Density


The amount of power that a chip dissipates per unit area is called its power density, and there are two types of power density that concern processor architects: dynamic power density and static power density.

Dynamic Power Density


Each transistor on a chip dissipates a small amount of power when it is switched, and transistors that are switched rapidly dissipate more power than transistors that are switched slowly. The total amount of power dissipated per unit area due to switching of a chip's transistors is called dynamic power density. There are two factors that work together to cause an increase in dynamic power density: clockspeed and transistor density.

Increasing a processor's clockspeed involves switching its transistors more rapidly, and as I just mentioned, transistors that are switched more rapidly dissipate more power. Therefore, as a processor's clockspeed rises, so does its dynamic power density, because each of those rapidly switching transistors contributes more to the device's total power dissipation. You can also increase a chip's dynamic power density by cramming more transistors into the same amount of surface area.

Figure 12-1 illustrates how transistor density and clockspeed work together to increase dynamic power density. As the clockspeed of the device and the number of transistors per unit area rise, so does the overall dynamic power density.
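
In symbols, the relationship described here is the standard first-order CMOS approximation P_dyn ≈ α·C·V²·f: activity factor times switched capacitance times supply voltage squared times clock frequency. Below is a minimal sketch of that relation; the formula is textbook CMOS, but every number in it is invented for illustration and none come from the book:

# First-order CMOS dynamic power model: P_dyn ~ alpha * C * V^2 * f.
# All numeric values below are illustrative, not taken from the book.

def dynamic_power(alpha, c_switched, v_supply, f_clock):
    """Dynamic power (watts) from activity factor, switched capacitance
    (farads), supply voltage (volts), and clock frequency (hertz)."""
    return alpha * c_switched * v_supply**2 * f_clock

# Doubling the clock doubles dynamic power...
base = dynamic_power(alpha=0.1, c_switched=1e-9, v_supply=1.2, f_clock=2e9)
fast = dynamic_power(alpha=0.1, c_switched=1e-9, v_supply=1.2, f_clock=4e9)
print(fast / base)   # 2.0

# ...while lowering the supply voltage pays off quadratically.
low_v = dynamic_power(alpha=0.1, c_switched=1e-9, v_supply=1.0, f_clock=2e9)
print(low_v / base)  # ~0.69, i.e. (1.0/1.2)**2

The quadratic voltage term is why process shrinks historically bought both speed and power headroom: smaller transistors could run at lower supply voltages.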


Figure 12-1: Dynamic power density

Static Power Density

In addition to clockspeed-related increases in dynamic power density, chip designers must also contend with the fact that even transistors that aren't switching will still leak current during idle periods, much like how a faucet that is shut off can still leak water if the water pressure behind it is high enough. This leakage current causes an idle transistor to constantly dissipate a trace amount of power. The amount of power dissipated per unit area due to leakage current is called static power density.

Transistors leak more current as they get smaller, and consequently static power densities begin to rise across the chip when more transistors are crammed into the same amount of space. Thus even relatively low clockspeed devices with very small transistor sizes are still subject to increases in power density if leakage current is not controlled. If a silicon device's overall power density gets high enough, it will begin to overheat and will eventually fail entirely. Thus it's critical that designers of highly integrated devices like modern x86 processors take power efficiency into account when designing a new microarchitecture.
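
Leakage adds a second, clock-independent term on top of the dynamic one: total power is roughly P ≈ P_dyn + V·I_leak·N, where I_leak is the per-transistor leakage current and N is the transistor count. Here is a companion sketch, again with invented numbers, of why a die shrink pushes static power density up even at a fixed clock:

# Toy static-power model: total leakage grows with per-transistor leakage
# and with transistor count. Every number here is invented for illustration.

def static_power(v_supply, i_leak_per_transistor, n_transistors):
    """Static power (watts) dissipated by idle, leaking transistors."""
    return v_supply * i_leak_per_transistor * n_transistors

# A die shrink: twice the transistors in the same area, each leaking more.
old = static_power(v_supply=1.2, i_leak_per_transistor=20e-9,
                   n_transistors=200e6)
new = static_power(v_supply=1.2, i_leak_per_transistor=30e-9,
                   n_transistors=400e6)
print(old, new)  # roughly 4.8 W -> 14.4 W, at the same clockspeed

This rising static term is exactly what Intel's high-k dielectric attacks: cut the per-transistor leakage so that the shrink doesn't blow the power budget.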

Inside the Windows Vista Kernel

Reader trparky recommends an article on Technet (which, be warned, is rather chaotically formatted). Mark Russinovich, whose company Winternals Software was recently bought by Microsoft, has published the first of a series of articles on what's new in the Vista kernel. Russinovich writes: "In this issue, I'll look at changes in the areas of processes and threads, and in I/O. Future installments will cover memory management, startup and shutdown, reliability and recovery, and security. The scope of this article comprises changes to the Windows Vista kernel only, specifically Ntoskrnl.exe and its closely associated components. Please remember that there are many other significant changes in Windows Vista that fall outside the kernel proper and therefore won't be covered."

Saturday, January 27, 2007

Windows Vista Home Basic, Home Premium, Business, Enterprise and Ultimate – Comparison

With Windows Vista just two days away, I thought I would provide you with a detailed comparison of the various editions of Windows Vista. And as the saying goes... one picture is worth a thousand words; the images at the bottom illustrate all the features of the operating system according to edition.

But of course, you will also be able to judge the differences in your own home. Buy a Windows Vista DVD with a license for Home Basic. Although the license is just for Home Basic, you will be able to install and test all the editions of the operating system, with the exception of Enterprise, which is available only via volume licensing.

However, the single Vista DVD will permit you to install Home Basic, Home Premium, Business and Ultimate and to test drive each edition for free for 30 days. How? Well... during the installation process you will be asked to enter the license key. The license key determines the edition of Windows Vista that will be deployed. However, you can choose not to enter any license key, install whichever edition you prefer, and test it. As I've said above, the operating system delivers a 30-day initial grace period with full functionality. You will then be able to upgrade to either Home Premium, Business or Ultimate via Windows Vista Anytime Upgrade.

This is a method that will keep you from spending $399 for Windows Vista Ultimate, when the $239 Vista Home Premium is more than enough for your needs.

Oscillations In the Sun's Magnetic Field Cause Ice Ages on Earth

What makes the Earth pass through Ice Ages?

Robert Ehrlich of George Mason University in Fairfax, Virginia thinks that the sun has cycles of rise and fall on timescales of around 100,000 years. He made a computer model depicting how temperature fluctuates in the sun's interior.

The standard model says the temperature of the sun's interior is held constant by the balance between gravity and nuclear fusion.

Ehrlich based his supposition that slight variations should nonetheless be possible on the research of Attila Grandpierre of the Konkoly Observatory of the Hungarian Academy of Sciences, who in 2005 found that magnetic fields in the sun's core could generate small instabilities in the solar plasma, correlated with local oscillations in temperature.

The computer model reveals that some oscillations could reinforce one another and become long-lived temperature variations. The sun's interior temperature would oscillate around its mean temperature of 13.6 million kelvin in cycles of 100,000 or 41,000 years.

These timescales coincide with Earth's glaciations: in the past two million years, ice ages have set in approximately every 100,000 years, and before that their rhythm was every 41,000 years.

The most accepted idea is that the glaciations are provoked by subtle changes in the Earth's orbit, known as the Milankovitch cycles: Earth's orbit gradually shifts from a circle to a slight ellipse and back again roughly every 100,000 years, changing the amount of solar heat the Earth receives.

But Milankovitch cycles cannot explain why the glaciations shifted frequency a million years ago. "In Milankovitch, there is certainly no good idea why the frequency should change from one to another," says Neil Edwards, a climatologist at the Open University in Milton Keynes, UK.

And the temperature shifts provoked by Milankovitch cycles seem too small to induce glaciations on their own; they must be reinforced by feedback mechanisms on Earth, such as an alteration of the carbon dioxide cycle by the ice, which weakens the greenhouse effect. "If you add their effects together, there is more than enough feedback to make Milankovitch work," he says. "Milankovitch cycles give us ice ages roughly when we observe them to happen. We can calculate where we are in the cycle and compare it with observation," he says. "I can't see any way of testing [Ehrlich's] idea to see where we are in the temperature oscillation."

Ehrlich agrees that his theory is hard to prove, as variation over 41,000 to 100,000 years is too slow to be observed. "If there is a way to test this theory on the sun, I can't think of one that is practical," he said. There would be one way: red dwarfs, stars much smaller than the sun and consequently with oscillation periods short enough to be watched.

Image credit: NASA

The Oldest Person in the World

The oldest person in the world is now Emma Faust Tillman, 114, born near Greensboro, North Carolina, on November 22, 1892. She takes the title following the deaths last week of Emiliano Mercado del Toro, at his home in Puerto Rico at age 115, and of 115-year-old Julie Winnifred Bertrand of Canada.

Emma lives in Hartford, Connecticut. Her parents were former slaves who raised their family in the decades following the U.S. Civil War.

Guinness World Records has confirmed this. “Emma's family is characterized by longevity: Though none of her 23 siblings have matched her 114 years, three sisters and a brother lived past 100,” said her great-nephew John Stewart Jr.

Tillman graduated in 1909 as the only black student in her high school and later worked as a cook, maid, party caterer and caretaker for children of several wealthy families.

She also worked as a household servant for the actress Katharine Hepburn. "At 114, she's lived a good, honorable, straight life," said Stewart, who is 76. "Her comment is always, 'If you want to know about longevity and why I lived so long, ask the man upstairs.'"

"Sometimes, she doesn't feel like talking," Stewart said. "But when you're 114, you can call your own shots."

Tillman never smoked, drank, or wore eyeglasses, Stewart said. Until a few months ago she spent much of her time caring for an ailing roommate more than 20 years her junior, who has since died, said Karen Chadderton, administrator of the Riverside Health and Rehabilitation Center, where Tillman lives.

"About a month ago, she started feeling less energetic," said Chadderton. "During the morning she has energy, she's up and about, in a wheelchair, but in the afternoon, once she goes to sleep, she doesn't want to be bothered."

The International Committee on Supercentenarians says there are currently 86 people aged 110 or older in the world, of whom 80 are women. The world's next-oldest person is Yone Minagawa of Japan, born in 1893. "Tillman is the youngest title holder in six years," said Robert D. Young, senior consultant for gerontology for Guinness World Records. "Her ascent to the top position was particularly speedy. The average time for a person to be the world's oldest was about eight months," Young said.

IBM to Open Source Novel Identity Protection Software

coondoggie handed us a link to a Network World article reporting that IBM plans to open source the project 'Identity Mixer'. Developed by a Zurich-based research lab for the company, Identity Mixer is a novel approach to protecting user identities online. The project, which is a piece of XML-based software, uses a type of digital certificate to control who has access to identity information in a web browser. IBM is enthusiastic about widespread adoption of this technology, and so plans to open source the project through the Eclipse Open Source Foundation. The company hopes this tactic will see the software's use in commercial, medical, and governmental settings.

Thursday, January 25, 2007

Google Web Search Now Integrates Blog Results

Type a phrase into Google.com, add the word "blog" at the end of your query, and you'll get not only web results but more up-to-date gems from Google Blog Search too. That's the word from the Google Operating System weblog. This has been running as a test since November.

Capstone Mobile Selects GE864-Quad for Fleet Tracking Application

Telit Wireless Solutions, Inc., the US-based m2m mobile technology arm of Telit Communications, today announced that Capstone Mobile has signed a supply agreement with Telit. Telit’s module will be used in Capstone Mobile’s fleet tracking devices to enable mobile monitoring and tracking of high value assets.

Capstone Mobile has developed a system that allows customers to manage their vehicle fleets and view tracking information on portable devices such as laptops and PDAs. Beyond simply locating vehicles, the application monitors fleets for critical factors such as temperature and humidity changes and breaches in containers, and, based on variances, recommends environmental adjustments.

“Telit’s modules possess the ability to be both backwards compatible and easily programmable,” said Scott Williamson, Vice President of Capstone Mobile. “Telit is at the top of a launching industry. The need for critical data has never been greater, and Telit has provided the ideal solution for our needs.”

Thanks to its small external dimensions of 30 x 30 x 2.8 mm and light weight of only seven grams, the GE864 is ideal for applications requiring sub-compact form factors. With the GE864, Telit is the world's first and only module manufacturer to offer a GSM / GPRS module with a ball grid array (BGA) installation concept.

BGA is based on tiny solder balls placed on the underside of a module allowing for direct mounting to the application circuit board, without the need for plugs, cables, or connectors. The module can now be assembled using an automated pick-and-place assembly for standard SMD components. This not only reduces material costs, but also installation time and assembly costs. The board-to-board BGA mounting is extremely stable and reliable. Together, the compact shape and reduced assembly costs are crucial advantages for use in cost-sensitive applications, such as those for the fleet management and consumer markets. The GE864 is the market’s only module viable for very large scale production in these categories.

“Capstone Mobile has a proven, well thought out approach to the application of wireless technology, and we welcome them as the first U.S.-based customer for Telit Wireless Solutions,” said Roger Dewey, President and CEO of Telit Wireless Solutions, Inc. “Their applications are at the cutting edge of the wireless revolution, and together we will ensure they stay there.”

Telit's approach to m2m is unique—their products are divided into families, each addressing the demands of various vertical-market application groups according to size, production scale, and so on. Within these families, products have the same form factor and functionality irrespective of their wireless technology (GSM, CDMA). The advantage for customers is immediately apparent because all modules within a family are interchangeable, due to uniformity in size, shape, connectors and software interface. Customers can easily replace any module with its successor because there is little or no change required to the application.

There are at least ten times more machines, equipment, vehicles and robots than there are humans in the world, creating a critical need to transfer information efficiently between machines or from machines to humans. The relatively new m2m industry delivers increased efficiency, time savings, improved customer orientation and greater flexibility.

BBC To Host Multi-OS Debate

"BBC is currently seeking submissions from all you Microsoft Windows, Mac and Linux devotees "in 100 words or less, why you are such a supporter of your chosen operating system and what features you love about it". They will then select one user of each platform to go head to head in a debate that will be part of the BBC's Microsoft Vista launch coverage on January 30th."

Street Fighting Robot Challenge

"There's no better way to assure the eventual destruction of mankind then by the event sponsored by Singapore's Defence Science and Technology Agency. Newscientist has a good writeup of the robot challenge, which is to build a robot that can operate autonomously in urban warfare conditions, moving in and out of buildings to search and destroy targets like a human soldier."

Wednesday, January 24, 2007

Blu-ray Protection Bypassed

ReluctantRefactorer writes with an article in the Register reporting that Blu-ray copy-protection technology has been sidestepped by muslix64, the same hacker who bypassed the DRM technology of rival HD DVD discs last month. From the article: "muslix64's work has effectively sparked off a [cat]-and-mouse game between hackers and the entertainment industry, where consumers are likely to face compatibility problems while footing the bill for the entertainment industry's insistence on pushing ultimately flawed DRM technology on an unwilling public." WesleyTech also covers the crack and links the doom9 forum page where BackupBluRayv021 was announced.

Linspire's CNR Goes Multi-Distro

S3Indiana writes with news that Linspire is opening its Click 'N Run installation software to other Linux distributions. After 5 years of development on CNR, the new site cnr.com will be a single source repository for Linux users. Distributions to be supported initially during 2007 are (alphabetically): Debian, Fedora, Freespire, Linspire, OpenSUSE, and Ubuntu; other distributions will follow. See the FAQ and the screenshots for more details.
Linspire announced today that it plans to expand its CNR ("Click 'N Run") digital download and software management service to support multiple desktop Linux distributions beyond Linspire and Freespire, initially adding Debian, Fedora, OpenSUSE, and Ubuntu, using both .deb and .rpm packages. And, the standard CNR service will remain free.

CNR was developed by Linspire in 2002 to allow desktop Linux users to find, install, uninstall, manage, and update thousands of software programs on their Linspire-based Linux computers.

Previously available only for Linspire and Freespire desktop Linux users, the CNR Service will begin providing users of other desktop Linux distributions a free and easy way to access more than 20,000 desktop Linux products, packages and libraries, a Linspire spokesperson said.

Support for different Linux distributions will begin in the second quarter of 2007 via a new website, CNR.com. Debian, Fedora, OpenSUSE, and Ubuntu will be the first supported, with others planned to follow.

Even as the Linux desktop has made strong advances in usability and capabilities, the difficulties of finding, installing, and updating software -- with each distribution requiring its own installation process -- have remained one of the most commonly cited complaints among desktop Linux users. With more than five years of development behind CNR, Linspire CEO Kevin Carmony hopes that it will now normalize these tasks for the most popular Debian- and RPM-package-based distributions.

Carmony stated, "The CNR Service was designed to solve the complexity of finding and installing desktop Linux applications, as well as educating the world about all the quality Linux software available. It only made sense to expand our successful CNR Service to additional desktop Linux distributions and their users. CNR will normalize the process of installing software across most of the popular distributions, something Linux really needs to gain mainstream adoption."

CNR, which became a free service last August, will remain so for all the distributions supported, Carmony added.

Linspire also said in August that it would release the CNR client under an open source license later in the year. Since then, according to Carmony, Linspire has completely redesigned, updated, and enhanced the CNR technology to support multiple distributions -- both Debian- and RPM-based, Carmony said.

At CNR.com, users of supported distributions will be able to search for applications by title, popularity, user rating, category, function, and so on. An open-sourced plugin for each supported distribution will provide the one-click installation functionality. Not only will the new, multi-distribution CNR system support different distributions, it will also support various versions within each supported distribution, Carmony said.

According to a Linspire spokesperson, the new multi-distribution CNR technology will standardize the installation process for users of multiple Linux distributions without requiring a new or altered packaging system. CNR uses standard .deb and .rpm files, while shielding the user from the complexity of these packaging systems. Application developers can continue using their same packaging methods (.deb or .rpm), and various supported distributions can continue with their normal release management practices.

By building CNR around existing packaging systems, tens of thousands of existing Linux applications become immediately available via the CNR system. According to Linspire, CNR adds both server- and client-side intelligence that overcomes the traditional dependency challenges presented by current packaging systems, but without the need to alter these ubiquitous systems.
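
Linspire hasn't published CNR's internals, but the behavior described here (one catalog entry fanning out to the right native package on each supported distribution, with the native tool still doing the dependency work) can be sketched as a thin dispatch layer. All of the names, mappings, and commands below are invented for illustration:

# Toy sketch of a CNR-style dispatch layer. One catalog entry maps to the
# native package name and install tool for each supported distribution.
# Invented for illustration; Linspire has not published CNR's internals.
import subprocess

CATALOG = {
    # app id -> {distro: native package name}
    "firefox": {
        "debian": "firefox",   # shipped as a .deb
        "ubuntu": "firefox",
        "fedora": "firefox",   # shipped as an .rpm
    },
}

INSTALLERS = {
    "debian": ["apt-get", "install", "-y"],
    "ubuntu": ["apt-get", "install", "-y"],
    "fedora": ["yum", "install", "-y"],
}

def one_click_install(app_id, distro):
    """Resolve a catalog entry to the distro's native package, then hand it
    to the native tool, which performs the real dependency resolution."""
    package = CATALOG[app_id][distro]
    subprocess.run(INSTALLERS[distro] + [package], check=True)

# one_click_install("firefox", "fedora")  # would run: yum install -y firefox

The point of the sketch is the shielding the article describes: the user clicks once, the dispatch layer picks the right native package, and the distribution's own packaging system keeps doing what it already does well.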

CNR.com will also give users of multiple distros the opportunity to purchase commercial products and services, such as "legally-licensed" DVD players, Sun's StarOffice, Win4Lin Pro, CodeWeavers's CrossOver, and TransGaming's Cedega. CNR's commercial software offerings currently span various categories, including media playback, personal and business productivity, finances, virtualization, development tools, and games.

The spokesperson also said that CNR would allow users quick access to multimedia codecs and hardware drivers, bringing one-click support for MP3, Windows Media, Quick Time, Java, Flash, ATI and nVidia graphics, and so on.

The new CNR.com web site is now active with an informational placeholder where users can learn more about the plans for the multi-distribution CNR.

--Chris Preimesberger

Tuesday, January 23, 2007

MSI GeForce 8800 GTX (NX8800GTX-T2D768E)


As enthusiasts, we are always anxiously awaiting new GPU architectures, with additional pipelines and more memory. All of these aspects usually go into creating the next must-have flagship video card. Like us, you've probably been wondering when single GPU cards would sport a full gigabyte of memory. Dual-GPU cards, like NVIDIA's GeForce 7950 GX2, already boast 1GB of high-speed GDDR3. And as exciting as that may sound, it isn't quite as geek-tastic as having 1GB of memory coupled to a single GPU.

We all know it's coming; it's only a matter of time. NVIDIA's latest line-up, the GeForce 8 Series, got us one step closer as the 8800 GTX features 768MB of fast GDDR3 memory. New architecture, new features, DirectX 10 support, 768MB of GDDR3 RAM and a 384-bit memory interface; the 8800 GTX has a lot of exciting things going for it. And at over ten and a half inches in length, the 8800 GTX is definitely a monster.

Today, we have an MSI 8800 GTX (specifically, the NX8800GTX-T2D768E) on the test bench. In addition to comparing it to a GeForce 7950 GX2 and Radeon X1900 XTX, we'll hook it up to a 65-inch 1080p TV for some big-screen gaming.

MSI 8800 GTX: Features & Specs
Model Number: NX8800GTX-T2D768E

Memory: 768MB DDR3

Video Output Function:
- TV-out + HDTV Support
- Two Dual-link DVI Connectors
- SLI Bridge

384-Bit Memory Interface
Clocks:
- GPU: 575 MHz
- Memory: 1.8 GHz (effective)

Performance:
- Memory Bandwidth (GB/sec): 86.4
- Fill Rate (Billion pixels/sec): 36.8
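
As a sanity check, both performance figures fall straight out of the numbers above. The bandwidth arithmetic is exact; the 64-unit factor behind the fill-rate figure is my inference about how the marketing number is derived, not anything MSI publishes:

# Sanity-check the spec sheet's performance figures from its own numbers.

bus_width_bits = 384
effective_mem_clock_hz = 1.8e9  # "effective" already folds in the double data rate

# Bandwidth = bytes per transfer * transfers per second.
bandwidth_gb_s = (bus_width_bits / 8) * effective_mem_clock_hz / 1e9
print(bandwidth_gb_s)  # 86.4 -- matches the sheet exactly

# The 36.8 billion/sec fill rate works out to core clock * 64 units;
# that unit count is an inference, not an MSI-published breakdown.
core_clock_hz = 575e6
print(core_clock_hz * 64 / 1e9)  # 36.8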
NVIDIA unified architecture:
Fully unified shader core dynamically allocates processing power to geometry, vertex, physics, or pixel shading operations, delivering up to 2x the gaming performance of prior generation GPUs.
GigaThread Technology:
Massively multi-threaded architecture supports thousands of independent, simultaneous threads, providing extreme processing efficiency in advanced, next generation shader programs.

Full Microsoft DirectX 10 Support:
World's first DirectX 10 GPU with full Shader Model 4.0 support delivers unparalleled levels of graphics realism and film-quality effects.

NVIDIA SLI Technology:
Delivers up to 2x the performance of a single graphics card configuration for unequaled gaming experiences by allowing two cards to run in parallel. The must-have feature for performance PCI Express graphics, SLI dramatically scales performance on today's hottest games.

NVIDIA Lumenex Engine:
Delivers stunning image quality and floating point accuracy at ultra-fast frame rates.
16x Anti-aliasing: Lightning fast, high-quality anti-aliasing at up to 16x sample rates obliterates jagged edges.


128-bit floating point High Dynamic-Range (HDR):
Twice the precision of prior generations for incredibly realistic lighting effects - now with support for anti-aliasing.

NVIDIA Quantum Effects Technology:
Advanced shader processors architected for physics computation enable a new level of physics effects to be simulated and rendered on the GPU - all while freeing the CPU to run the game engine and AI.

NVIDIA ForceWare Unified Driver Architecture (UDA):
Delivers a proven record of compatibility, reliability, and stability with the widest range of games and applications. ForceWare provides the best out-of-box experience and delivers continuous performance and feature updates over the life of NVIDIA GeForce GPUs.

OpenGL 2.0 Optimizations and Support:
Ensures top-notch compatibility and performance for OpenGL applications.

NVIDIA nView Multi-Display Technology:
Advanced technology provides the ultimate in viewing flexibility and control for multiple monitors.

PCI Express Support:
Designed to run perfectly with the PCI Express bus architecture, which doubles the bandwidth of AGP 8X to deliver over 4 GB/sec. in both upstream and downstream data transfers.

Dual 400MHz RAMDACs:
Blazing-fast RAMDACs support dual QXGA displays with ultra-high, ergonomic refresh rates - up to 2048x1536@85Hz.

Dual Dual-link DVI Support:
Able to drive the industry's largest and highest resolution flat-panel displays up to 2560x1600.
Built for Microsoft Windows Vista:
NVIDIA's fourth-generation GPU architecture built for Windows Vista gives users the best possible experience with the Windows Aero 3D graphical user interface.

NVIDIA PureVideo HD Technology:
The combination of high-definition video decode acceleration and post-processing that delivers unprecedented picture clarity, smooth video, accurate color, and precise image scaling for movies and video.

Discrete, Programmable Video Processor:
NVIDIA PureVideo HD is a discrete programmable processing core in NVIDIA GPUs that provides superb picture quality and ultra-smooth movies with low CPU utilization and power.

Hardware Decode Acceleration:
Provides ultra-smooth playback of H.264, VC-1, WMV and MPEG-2 HD and SD movies.

HDCP Capable:
Designed to meet the output protection management (HDCP) and security specifications of the Blu-ray Disc and HD DVD formats, allowing the playback of encrypted movie content on PCs when connected to HDCP-compliant displays.

Spatial-Temporal De-Interlacing:
Sharpens HD and standard definition interlaced content on progressive displays, delivering a crisp, clear picture that rivals high-end home-theater systems.

High-Quality Scaling:
Enlarges lower resolution movies and videos to HDTV resolutions, up to 1080i, while maintaining a clear, clean image. Also provides downscaling of videos, including high-definition, while preserving image detail.

Inverse Telecine (3:2 & 2:2 Pulldown Correction):
Recovers original film images from films-converted-to-video (DVDs, 1080i HD content), providing more accurate movie playback and superior picture quality.

Bad Edit Correction:
When videos are edited after they have been converted from 24 to 25 or 30 frames, the edits can disrupt the normal 3:2 or 2:2 pulldown cadences. PureVideo HD uses advanced processing techniques to detect poor edits, recover the original content, and display perfect picture detail frame after frame for smooth, natural looking video.
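
If the cadence talk is unfamiliar: 3:2 pulldown spreads four 24 fps film frames across ten 60 Hz video fields, and inverse telecine undoes it by collapsing the repeats. Here is a bare-bones sketch of the idea (it ignores top/bottom field interleaving, and is in no way NVIDIA's actual implementation):

# 3:2 pulldown: four film frames (24 fps) become ten video fields (60 Hz).
# Simplified illustration only; real pulldown alternates top/bottom fields,
# and PureVideo HD's cadence detection is NVIDIA's proprietary code.

def pulldown_3_2(film_frames):
    """Emit fields in the classic 3-2 cadence: alternate frames contribute
    three fields, then two."""
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    return fields

def inverse_telecine(fields):
    """Recover the original frames by collapsing runs of repeated fields."""
    frames = []
    for f in fields:
        if not frames or frames[-1] != f:
            frames.append(f)
    return frames

film = ["A", "B", "C", "D"]             # 4 film frames...
video = pulldown_3_2(film)              # ...become 10 fields
print(video)                            # ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
print(inverse_telecine(video) == film)  # True: cadence recovered

A "bad edit" that cuts mid-cadence breaks the 3-2 pattern, which is why the filter has to re-detect the cadence after every edit point, exactly the job the Bad Edit Correction feature above describes.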

Video Color Correction:
NVIDIA's Color Correction Controls, such as Brightness, Contrast and Gamma Correction let you compensate for the different color characteristics of various RGB monitors and TVs ensuring movies are not too dark, overly bright, or washed out regardless of the video format or display type.

Integrated SD and HD TV Output:
Provides world-class TV-out functionality via Composite, S-Video, Component, or DVI connections. Supports resolutions up to 1080p depending on connection type and TV capability.

Noise Reduction:
Improves movie image quality by removing unwanted artifacts.

Edge Enhancement:
Sharpens movie images by providing higher contrast around lines and objects.




Like the last MSI card we reviewed, the MSI 8800 GTX is packed in a box sporting an angelic woman and a carrying handle. The back of the box of course boasts some of the main features of the card. Once we opened the box, we were glad to see the $600 card surrounded by styrofoam.


We were also glad to see a decent bundle included with this flagship card. MSI includes a handful of accessories: two VGA-to-DVI adapters, two PCI Express power cables, one S-video cable and one TV-out cable (with S-video and component video connectors). In addition to the accessories, the bundle includes a bunch of software, including Serious Sam II, CyberLink PowerCinema and Power2Go, and all the utilities listed below.

MSI-developed software
.VGA Driver
.MSI Live Update Series (Live VGA BIOS & Live VGA Driver)
-Automatically downloads and updates the VGA BIOS and drivers online, reducing the risk of getting the wrong files and saving you the trouble of searching the web.
.StarOSD
-StarOSD can monitor system information, adjust monitor configuration, and overclock the system.
.Dual Core Center
.GoodMen
-Automatically releases system memory, reducing the risk of system hang-ups.
.LockBox
-Instantly enters data-lock mode when you must leave your system for a while.
.WMIinfor
-Automatically lists the detailed system configuration; helpful for engineering service people.
.MSI VIVID
-VIVID brings the easiest way to optimize graphics quality: colorize your vision when browsing photos, sharpen character edges, and enhance contrast when playing games.
.MSI Live
-Includes all the real-time information you need, such as live MSI product news, daily information, a personal schedule manager, live search, and more.
.MSI Secure DOC
.E-Color
.MediaRing
.ShowShift
.ThinSoft Be Twin
.Adobe Acrobat Reader
.Norton Internet Security 2005
.Microsoft DirectX 9.0

Internet Multitasking Disorder -- And How We Read the News

On any given day, Lindsay, a 24-year-old office worker in New York, can be found with several windows open in her Web browser. When she is not sending e-mails or browsing MySpace, she is chatting with her friends or co-workers through AIM – sometimes with multiple conversations going at once.


The New York Times is her homepage, giving her the latest news every morning, and recently she downloaded Firefox, where she checks RSS feeds for the latest headlines.

This is how Lindsay stays connected to the world. But at times the overflow of information can cause distractions. In an age of instant communication, people are accustomed to getting news from multiple sources quickly and constantly, but with so many sources of information on her plate, it's easy for Lindsay to just click away – another case of Internet Multitasking Syndrome.

"I think for the most part I'm accustomed to reading the news online as opposed to hardcopy and people like me are the type of person that will be persuaded to click somewhere else," said Lindsay.


Although Internet Multitasking Syndrome is not a known medical disorder (I just made it up five minutes ago), it is not uncommon for people to become so immersed in their online activities that their cognitive abilities wane. After hours of staring at a screen, flipping between web pages and information outlets, people can develop feelings of anxiety and stress and a decrease in mental performance, said John Suler, author of The Psychology of Cyberspace. "There are limits to how much information one person can process," he continued.

These are symptoms akin to sensory overload, and while it is rare for a person to become so addicted to the Internet that it damages their relationships with friends and family or their performance at work, Suler said that the juggling act we perform online can affect the way we read the news. "You are getting a cursory understanding of several different sources of information at one time, it's a delicate balance – do you want to get a shallow understanding of lots of different things or a deep grasp of one topic," asked Suler.

Online, most people opt for the quick glance. The average Internet attention span is roughly 10 seconds according to some statistics, allowing humans to just barely beat out goldfish in terms of staying on topic. One major cause of this decrease in attention is hyperlinks, which are built into Web pages, allowing people to jump from one digital location to the next.

Blue highlighted words inserted into our text have become commonplace on the Internet, but no one has made an attempt to study what effect they have on our digital culture, said Joseph Turow, a professor at Penn's Annenberg School for Communication.

"We haven't really taken the time to ask what it means when we have an approach to the world where we think of connections in this way. What are the hidden assumptions that places like Google, Yahoo and MSM create for you, to see the world one way or another," said Turow.

Knowing that readers have such a short attention span and that they can click away at any moment, journalists have to approach writing for the Web in a different manner.

"When you write for the Web… you don't beat around the bush, you get to the point quickly, because you don't have the luxury of putting the reader in the mood or creating the intellectual framework for your argument," said Jack Shafer, author of Press Box a column at Slate Magazine.

But writing for the Web doesn't just mimic the wire stories of old. While the speed at which information comes across is similar, the nature of the information is somewhat different. Shafer uses links only as a referencing tool -- a way to cite primary sources that back his arguments -- but often bloggers and even journalists use them as a means to highlight conversations that share their political sentiments.

In these circles people are called "ditto-heads," groups of writers and readers who only link to and read views that echo their own beliefs. While this might make for a "good" read – and can even capture a reader's attention – it fails to address one of journalism's main components: to create a healthy civic discussion.

It leads one to wonder whether the Internet is a good place to get the news at all. But many journalists see a great potential for online news to inform readers on important issues in their lives. Saul Hansell, a writer for the New York Times, said that newspapers have always known people don't read to the end of a story. With news online this becomes apparent, but it at least provides a breadth of material that is easy to access.

Readers always have something in their peripheral vision, and while the Internet allows them to investigate these issues on a shallow level, it also gives them the tools to focus intimately on other matters, said Hansell. "I would take any of these problems over having too little information or information that is too hard to get at."

Lindsay prefers to get her news online. But as she sifts between news feeds, social networking sites, gossip blogs and chats, the lines between reading the news and just plain reading are blurred and one has to wonder if the news suffers.