Tuesday, October 28, 2008

Happy Diwali!

Here’s wishing a very happy Diwali to you and your family!



Deepavali, or Diwali, is a major Indian holiday, and a significant festival in Hinduism, Sikhism, Buddhism, and Jainism. Many legends are associated with Diwali. Today it is celebrated by Hindus, Jains and Sikhs across the globe as the "Festival of Lights," where the lights or lamps signify the victory of good over the evil within every human being. Diwali is celebrated on the fifteenth day of the month of Kartika.

In many parts of India, Diwali marks the homecoming of King Rama of Ayodhya after a 14-year exile in the forest, during which he defeated the evil Ravana. The people of Ayodhya (the capital of his kingdom) welcomed Rama by lighting rows (avali) of lamps (deepa), thus its name: Deepavali. This word, in due course, became Diwali in Hindi. But in South Indian languages the word did not undergo any change, and hence the festival is called Deepavali in southern India. There are many different observances of the holiday across India.

In India, Diwali is now considered to be a national festival, and the aesthetic aspect of the festival is enjoyed by most Indians regardless of faith.

Monday, October 27, 2008

Virtual worlds and their evolution…

As humankind progresses, new avenues for social interaction become possible. The basic, intrinsic need of human beings is to interact. Above the basic needs of food and nourishment arises the need to interact with fellow beings, or with fellow intelligences…

The need to interact with other intelligent or sentient beings can be seen in our ongoing search for intelligences beyond our own world. The search for extra-terrestrial life is an extension of humankind's desire to seek out intelligence. In a manner, it is a direct result of our deep-seated, almost primal fear of being alone.

Earlier, the speed of communication was as fast as one could run, or as far as one could shout and carry one's own voice – of course I am not counting telepathy here, since the majority of human beings are denied this facility. This changed with the technological revolution (not OUR technological revolution) in the form of the mastery of fire. This afforded, among other things, the ability to send out signals through fire and smoke which could be seen over large distances. The domestication of horses also helped to a large extent. The speed of communication became the speed of the fastest horse!

Things have advanced considerably since then. Today, we have reached a stage where almost instantaneous communication with anyone on the planet is possible. First the telegraph, then the radio and telephone brought about an unprecedented revolution in the way people communicated.

These, coupled with email (which came considerably later), were in fact the 'killer applications' of the 19th and 20th centuries!

The development and literal explosion of the computing age, coupled with networking capabilities, have taken this to the next level. Now we have computers talking to other computers almost instantaneously, and the social consequences could not be far behind.

With the arrival of the gaming industry on the computer platform, we have witnessed another social revolution of a sort, in the form of virtual worlds.

A virtual world is a computer-based simulated environment intended for its users to inhabit and interact via avatars. These avatars are usually depicted as textual, two-dimensional, or three-dimensional graphical representations, although other forms are possible (auditory and touch sensations for example). Some, but not all, virtual worlds allow for multiple users.

According to an article on Wikipedia – “The concept of virtual worlds predates computers and could be traced in some sense to Pliny. The mechanical-based 1962 Sensorama machine used the senses of vision, sound, balance, smells and touch (via wind) to simulate its world. Among the earliest virtual worlds to be implemented by computers were not games but generic virtual reality simulators, such as Ivan Sutherland's 1968 virtual reality device. This form of virtual reality is characterized by bulky headsets and other types of sensory input simulation. Contemporary virtual worlds, multi-user online virtual environments, emerged mostly independently of this virtual reality technology research, fueled instead by the gaming industry but drawing on similar inspiration. While classic sensory-imitating virtual reality relies on tricking the perceptual system into experiencing an immersive environment, virtual worlds typically rely on mentally and emotionally engaging content which gives rise to an immersive experience.”

I won’t go much into the detailed history of virtual worlds – there is sufficient literature already available in the article quoted above. But the core philosophy behind virtual worlds is that the user accesses a computer-simulated world which presents perceptual stimuli to the user, who in turn can manipulate elements of the modeled world and thus experiences a degree of telepresence.

As "virtual world" is a fairly vague and inclusive term, such worlds can generally be placed along a spectrum ranging from:

- massively multiplayer online role-playing games or MMORPGs where the user playing a specific character is a main feature of the game (World Of Warcraft for example).

- massively multiplayer online real-life/rogue-like games or MMORLGs, where the user can edit and alter their avatar at will, allowing them to play a more dynamic role, or multiple roles.

Some would argue that the MMO versions of RTS and FPS games are also virtual worlds if the world editors allow for open editing of the terrain, or if the "source file" for the terrain is shared. Emerging concepts include basing the terrain of such games on real satellite photos, such as those available through the Google Maps API, or through a simple virtual geocaching of "easter eggs" on WikiMapia or similar mashups, where permitted.

Such modeled worlds may appear similar to the real world or instead depict fantasy worlds. The model world may simulate rules based on the real world or some hybrid fantasy world. Example rules are gravity, topography, locomotion, real-time actions, and communication. Communication between users has ranged from text, graphical icons, visual gesture, sound, and rarely, forms using touch and balance senses.

There are many different types of virtual worlds; however, there are six features all of them have in common (a toy code sketch follows the list):

1. Shared Space: the world allows many users to participate at once.

2. Graphical User Interface: the world depicts space visually, ranging in style from 2D "cartoon" imagery to more immersive 3D environments.

3. Immediacy: interaction takes place in real time.

4. Interactivity: the world allows users to alter, develop, build, or submit customized content.

5. Persistence: the world's existence continues regardless of whether individual users are logged in.

6. Socialization/Community: the world allows and encourages the formation of in-world social groups like teams, guilds, clubs, cliques, housemates, neighborhoods, etc.
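Purely as an illustration, here is a toy sketch (in Python) of how some of these features might map onto code – a shared space for many avatars, user-built content, in-world social groups, and persistence to disk. A graphical interface and real-time networking are left out, and every name in it is made up.

```python
# A toy, purely illustrative sketch of a virtual world's server-side state.
import json

class ToyWorld:
    def __init__(self, save_file="world.json"):
        self.save_file = save_file
        self.avatars = {}    # shared space: every user sees the same state
        self.objects = []    # interactivity: user-built content
        self.guilds = {}     # socialization: in-world groups

    def join(self, name):
        self.avatars[name] = {"x": 0, "y": 0}

    def move(self, name, dx, dy):
        self.avatars[name]["x"] += dx
        self.avatars[name]["y"] += dy

    def build(self, name, description):
        self.objects.append({"builder": name, "what": description})

    def form_guild(self, guild, members):
        self.guilds[guild] = list(members)

    def save(self):
        # persistence: the world outlives any single login session
        with open(self.save_file, "w") as f:
            json.dump({"avatars": self.avatars,
                       "objects": self.objects,
                       "guilds": self.guilds}, f)

world = ToyWorld()
world.join("asha"); world.join("ravi")
world.move("asha", 3, 4)
world.build("ravi", "a small lamp-lit courtyard")
world.form_guild("explorers", ["asha", "ravi"])
world.save()
```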

There is a virtual plethora of information available on virtual worlds at the virtual worlds review.

The use of virtual worlds in the training arena is picking up fast, including in military training. Simulators for various types of aircraft are, at the end of the day, a kind of virtual world themselves.

If we speculate further, the kind of social revolution that has come about due to virtual worlds is tremendous. People can meet over media that are more immersive than ever, and I think I have seen the future of virtual worlds as well – for those who are followers of Star Trek, the holodeck is the ultimate expression of a virtual world!

Personally, I eagerly await the development of virtual world technology to holodeck level. It would allow those who are infirm, or otherwise unable to travel due to some kind of disability, to visit people, cultures and places they would otherwise never have dreamt of…

Sunday, October 26, 2008

Back to the moon…

India, with its launch of an unmanned moon probe, has entered the race to return to the Moon – Earth's only, and somewhat abnormally large, natural satellite.

Abnormal? What is so abnormal about our Moon – which has inspired so many different emotions in humankind throughout the ages, from awe and superstition to much more tender emotions such as love? Well, as it happens, the Moon is quite large compared to the parent body around which it revolves; in fact, it is the fifth largest natural satellite in the entire Solar System. Take, for example, the other large moons of the solar system such as Triton, Titan, Io, Europa and Charon (what!!! – you didn't know that Pluto had a moon too?). They are all huge, but pale when compared to their "parent" bodies. In fact, Jupiter and Saturn are so big (and so full of gas) that their moons are very small indeed when compared to their own mass or size. The Earth's Moon, in comparison, is very large indeed – in terms of diameter, a little more than a quarter that of the Earth. This means that the Moon's volume is about 2 percent that of Earth, and the pull of gravity at its surface is about 17 percent that of the Earth. In fact, astronomers find it quite astonishing to have such a large body orbiting a relatively small planet; indeed, the Earth-Moon system is sometimes referred to as a double-planet system.
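As a quick back-of-the-envelope check of those two figures (the size and mass values below are standard published numbers that I am assuming here, not taken from the text above): volume scales with the cube of the diameter, and surface gravity goes as mass divided by radius squared.

```python
# Rough check of the "2% volume, 17% gravity" figures.
moon_diameter_km = 3474.8
earth_diameter_km = 12742.0
moon_mass_kg = 7.35e22
earth_mass_kg = 5.97e24

d_ratio = moon_diameter_km / earth_diameter_km                   # ~0.27, "a quarter"
volume_ratio = d_ratio ** 3                                      # volume scales with the cube
gravity_ratio = (moon_mass_kg / earth_mass_kg) / d_ratio ** 2    # g = GM / r^2

print(f"diameter ratio: {d_ratio:.2f}")
print(f"volume ratio:   {volume_ratio:.1%}")   # ~2%
print(f"gravity ratio:  {gravity_ratio:.1%}")  # ~17%
```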

By the middle of the 17th century, Galileo and other early astronomers made telescopic observations, noting an almost endless overlapping of craters. It has also been known for more than a century that the Moon is less dense than the Earth. Although a certain amount of information was ascertained about the Moon before the space age, this new era has revealed many secrets barely imaginable before that time. Current knowledge of the Moon is greater than for any other solar system object except Earth.

Various facts, especially the NASA photographs of Apollo missions are lucidly presented in this article by Rosanna L. Hamilton.

But really, how much do we know about our own galactic backyard?

The Moon makes a complete orbit around the Earth every 27.3 days (the orbital period), and the periodic variations in the geometry of the Earth–Moon–Sun system are responsible for the lunar phases that repeat every 29.5 days (the synodic period). The Moon is in synchronous rotation, meaning that it keeps nearly the same face turned towards the Earth at all times. Early in the Moon's history, its rotation slowed and became locked in this configuration as a result of frictional effects associated with tidal deformations caused by the Earth. The far side had never been seen by humans until the launch of lunar probes in the late 1950s.
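As an aside, those two numbers are consistent with each other: because the Earth–Moon pair is also moving around the Sun, the phases repeat slightly more slowly than the orbit itself. A minimal sketch of the arithmetic:

```python
# Phases (synodic month) vs. orbit (sidereal month):
# 1/T_synodic = 1/T_sidereal - 1/T_year
sidereal_month_days = 27.32
year_days = 365.25

synodic_month_days = 1 / (1 / sidereal_month_days - 1 / year_days)
print(f"synodic month: {synodic_month_days:.2f} days")  # ~29.53 days
```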

You can see the Virtual Reality Moon Phase Pictures here.

The Moon is the only celestial body to which humans have travelled and upon which humans have landed. The first artificial object to escape Earth's gravity and pass near the Moon was the Soviet Union's Luna 1, the first artificial object to impact the lunar surface was Luna 2, and the first photographs of the normally occluded far side of the Moon were made by Luna 3, all in 1959. The first spacecraft to perform a successful lunar soft landing was Luna 9, and the first unmanned vehicle to orbit the Moon was Luna 10, both in 1966. The United States (U.S.) Apollo program achieved the only manned missions to date, resulting in six landings between 1969 and 1972. Human exploration of the Moon ceased with the conclusion of the Apollo program, although several countries have announced plans to send people or robotic spacecraft to the Moon – well – India and China are amongst the nations now raring to literally reach for the moon!

Okay, back to the Moon :)

One distinguishing feature of the far side is its almost complete lack of maria. The dark and relatively featureless lunar plains which can clearly be seen with the naked eye are called maria (singular mare), Latin for seas, since they were believed by ancient astronomers to be filled with water. These are now known to be vast solidified pools of ancient basaltic lava. The majority of these lavas erupted or flowed into the depressions associated with impact basins that formed by the collisions of meteors and comets with the lunar surface. Maria are found almost exclusively on the near side of the Moon, with the far side having only a few scattered patches covering only about 2% of its surface compared with about 31% on the near side.

The lighter-colored regions of the Moon are called terrae, or more commonly just highlands, since they are higher than most maria. Several prominent mountain ranges on the near side are found along the periphery of the giant impact basins, many of which have been filled by mare basalt. These are believed to be the surviving remnants of the impact basin's outer rims. In contrast to the Earth, no major lunar mountains are believed to have formed as a result of tectonic events.

The Moon's surface shows obvious evidence of having been affected by impact cratering. Impact craters form when asteroids and comets collide with the lunar surface, and globally about half a million craters with diameters greater than 1 km can be found. Since impact craters accumulate at a nearly constant rate, the number of craters per unit area superposed on a geologic unit can be used to estimate the age of the surface (see crater counting). The lack of an atmosphere, weather and recent geological processes ensures that many of these craters have remained relatively well preserved in comparison to those found on Earth. The largest crater on the Moon, which also has the distinction of being one of the largest known craters in the Solar System, is the South Pole-Aitken basin. This impact basin is located on the far side, between the South Pole and equator, and is some 2,240 km in diameter and 13 km in depth.

Blanketed atop the Moon's crust is a highly comminuted (broken into ever smaller particles) and "impact gardened" surface layer called regolith. Since the regolith forms by impact processes, the regolith of older surfaces is generally thicker than that of younger surfaces. In particular, it has been estimated that the regolith varies in thickness from about 3–5 m in the maria to about 10–20 m in the highlands. In other words, the soil is thick and slick…

But why the rush back to the Moon, and why now?

In a historic feat, India launched Chandrayaan-1 on October 22, 2008, from the Satish Dhawan Space Centre in Sriharikota. The successful launch of India's maiden unmanned moon mission Chandrayaan-1 has catapulted the country into the league of a select group of nations. One of the prime reasons is national pride; the other is the possibilities it affords. With the western economy in decline and the ascendancy of India and China in this century, it was only a matter of time before these two nations realized the importance of breaking the bounds of earth's puny gravity well and soaring beyond.

But beyond the political hype and all the aspirations of becoming a superpower, there is a much more practical aspect to the race towards the Moon.

The Moon holds several minerals and elements not found or easily manufactured on earth – Helium-3, for example. Then there are the spin-off benefits from the technology that must be developed to reach the Moon. The sophistication and cost effectiveness of the journey outwards ultimately hold the key to cheap and profitable exploration of outer space.

India's love affair with deep space has only just begun. It could well become a force to reckon with, giving the established space agencies a run for their money: the Indian moon mission is the cheapest of all moon missions in this century to date, yet it also sets a world record by carrying the largest suite of scientific instruments ever sent to the Moon.

And frankly, I would want to see a difference from NASA – whose every mission seems to cost a billion dollars and then explode either while leaving or while entering earth's gravity well. I've got nothing against NASA – a bunch of great guys (a lot of Indians there in fact, if I heard it right), but everything they do – why DOES IT HAVE TO COST SO MUCH?

If the technology wasn’t ready to allow human exploration of space in a safe way, why send humans? Robotic vehicles can also operate with a certain amount of efficiency, and the money could have been better utilized in developing technologies which would have ultimately resulted in cheaper access to outer space.

And now NASA’s mantra has indeed become – faster, better, cheaper (FBC) – evident in the Mars probes.

In fact, if the price tag wasn’t so high, space exploration would have proceeded at a much quicker pace than what happened in the aftermath of Apollo missions.

We keep on saying that man has landed on the moon, and yet we are not exploring our own cosmic backyard. But, without the intention of belittling our (humankind's) achievements so far, what we have done till now is slingshot a few missions to the moon (with humans in them), put a space station in orbit (with the ever present danger of it falling down on our heads – e.g. Mir) and sent some probes to nearby planets. Voyager 1 and 2 and the Pioneer missions were an exception. They were real value for the money because of the wealth of information that they afforded humanity about the outer reaches of our solar system.
I will be closely watching India's, and humanity's collective, progress back to the Moon, on to Mars, and ultimately out into the solar system.

Monday, October 20, 2008

Extra-terrestrial origins of life?

The concept of life being a cosmic phenomenon is rapidly gaining support, with new evidence from space science, geology and biology. In this picture life on Earth resulted from the introduction of bacteria from comets, and the subsequent evolution of life required the continuing input of genes from comets.

Fred Hoyle was an important scientist who worked at the frontiers of astronomy and theoretical physics. In 1983 he published a well-illustrated popular book for nonscientists in which he attacked the whole idea that life originated and evolved on Earth and replaced it by 'intelligent cosmic control'.

Although Hoyle has been accused of siding with creationism, I personally disagree with this assessment. He was in favor of a cosmic connection to, and control of, the origin of life on earth and elsewhere in the universe, and was not particularly hinting at any divine control.

In an interview in Frontline, N. Chandra Wickramasinghe – one of the foremost authorities on the idea of life from outer space, and a student and collaborator of Sir Fred Hoyle – mentions that two recent experiments in the United States have once again drawn the attention of scientists to the theory of panspermia.

Fred Hoyle and Chandra Wickramasinghe demonstrated correctly that interstellar clouds contain some organic molecules, but their subsequent proposals for the extraterrestrial origin of life on earth, and for the ability of microbes to survive in space, were not substantiated by hard evidence at the time. However, with such evidence now coming in from other quarters, the theory is gaining ground and acceptance in mainstream scientific thinking.

Panspermia – which literally means "seeds everywhere" – is the hypothesis that the (biological) stuff of life did not have its origins in terrestrial resources but in interstellar space.

The theory maintains that life on the earth was seeded from space, and that life's evolution to higher forms depends on complex genes (including those of viruses and diseases) that the earth receives from space from time to time.

The two experiments discussed included:

In one experiment reported in October, environmental biologists Russell H. Vreeland and William D. Rosenzweig claimed to have discovered the longest-surviving (250 million years) bacterial spores, locked inside a salt crystal formation in New Mexico, which could be revived. This was considered evidence that life – even one-celled micro-organisms – could survive in suspended animation for eons and float on comets to faraway planets.

In another experiment reported, a team of scientists from the California Institute of Technology, Vanderbilt and McGill Universities discovered that small pieces of space rock could be transferred from Mars to earth without their interiors getting excessively heated, thus enabling living organisms to ride in them.

The renewed interest in panspermia also comes in the wake of space-based discoveries that include recent findings of some simple amino acids and sugars in interstellar space; the announcement by the National Aeronautics and Space Administration (NASA) in August 1996 of evidence of fossilized ancient life in a meteorite from Mars; evidence in the same year by geneticists that many genes are much older than what the fossil record would indicate; the discovery by a Russian microbiologist in 1998 that a micro-fossil in a meteorite was a previously unknown bacterium; and the announcement by NASA in April this year of the detection of very large organic molecules in space by its Stardust mission launched in February – the non-biological origins of such large molecules not being known.


Early history of panspermia

Until the late 19th century, panspermia meant the passage of organisms through Earth’s own atmosphere, not an arrival from outside Earth. In this form it seems to have been used first by the Abbé Lazzaro Spallanzani (1729–99). But almost a century before that, Francesco Redi had carried out what can be seen as a classic experiment in the subject. He had shown that maggots appear in decaying meat only when the meat is exposed to air, inferring that whatever it was that gave rise to the maggots must have travelled to the meat through the air.

A very long wait until the 1860s then ensued, until Louis Pasteur’s experiments on the souring of milk and the fermentation of wine showed that similar results occurred when the air-borne agents were bacteria, replicating as bacteria but not producing a visible organism like maggots. The world then permitted Pasteur to get away with a huge generalization, and honored him greatly both at the time and in history for it, because by then the world was anxious to be done with the old Aristotelian concept of life emerging from the mixing of warm earth and morning dew. The same old concept was to arise again in the mid-twenties of the past century, however, but with a different name. Instead of Aristotle’s warm earth and morning dew it became “a warm organic soup.”

Pasteur’s far-ranging generalization implied that each generation of every plant or animal is preceded by a generation of the same plant or animal. This view was taken up enthusiastically by others, particularly by physicists, among them John Tyndall, who lectured frequently on the London scene. The editorial columns of the newly established Nature (e.g., the issue of January 27, 1870) objected with some passion to Tyndall’s Friday evening discourse at the Royal Institution on January 21, 1870. Behind the objection was the realization that, were Pasteur’s paradigm taken to be strictly true, the origin of life would need to be external to Earth. For if life had no spontaneous origin, it would be possible to follow any animal generation-by-generation back to a time before Earth existed, the origin being therefore required outside Earth.

This was put in remarkably clear terms in 1874 by the German physicist Hermann von Helmholtz:

It appears to me to be a fully correct scientific procedure, if all our attempts fail to cause the production of organisms from non-living matter, to raise the question whether life has ever arisen, whether it is not just as old as matter itself, and whether seeds have not been carried from one planet to another and have developed everywhere where they have fallen on fertile soil….

In his presidential address to the 1881 meeting of the British Association, Lord Kelvin drew a remarkable picture:

When two great masses come into collision in space, it is certain that a large part of each is melted, but it seems also quite certain that in many cases a large quantity of debris must be shot forth in all directions, much of which may have experienced no greater violence than individual pieces of rock experience in a landslip or in blasting by gunpowder. Should the time when this earth comes into collision with another body, comparable in dimensions to itself, be when it is still clothed as at present with vegetation, many great and small fragments carrying seeds of living plants and animals would undoubtedly be scattered through space. Hence, and because we all confidently believe that there are at present, and have been from time immemorial, many worlds of life besides our own, we must regard it as probable in the highest degree that there are countless seed-bearing meteoric stones moving about through space. If at the present instant no life existed upon Earth, one such stone falling upon it might, by what we blindly call natural causes, lead to its becoming covered with vegetation.”

Essentially, what Kelvin was suggesting at that time was that it is possible for seeds of life to be carried between planetary or cosmic bodies. Thus, almost 120 years ago, the ideas that have recently come to the forefront of scientific discussion were already well known. Unfortunately there was no way at that date, 1881, whereby observation or experiment could be brought seriously to bear on Kelvin’s formulation of panspermia, and the world had to wait 120 years before some sort of concrete experimental and observational proof started trickling in.

It has been known for quite some time that bacteria and other microorganisms are extremely hardy. Some have been found living in the most unlikely places, where conventional thinking would not expect life to survive – such as in the heavy water of nuclear reactors or in highly acidic and hot volcanic areas. These are the extremophiles: microbes that live in conditions that would kill other creatures. It was not until the 1970s that such creatures were recognized, but the more researchers look, the more they discover that most archaea, some bacteria and a few protists can survive in the harshest and strangest of environments.

There is scarcely any set of conditions prevailing on Earth, no matter how extreme, that is incapable of harboring some type of microbial life. Under space conditions, microorganisms are very easily protected against ultraviolet damage.

One may find it very difficult to believe, but there is evidence that some bacteria actually survived almost two years in the Moon's harsh environment. What apparently happened was that during the sealing of a camera that was to be sent to the Moon, somebody might have sneezed and so left microbes inside the camera body. The camera was then delivered to the Moon's surface as part of a probe sent to find the best locations for landing. Over two years later, when the Apollo astronauts retrieved this camera and brought it back to earth, this fact was discovered. The Moon has almost no atmosphere and its range of temperature is very wide indeed. If microbes can survive two years of near-vacuum and such harsh differences in temperature, is it really far-fetched to think that it would be possible for them to travel interplanetary or even interstellar distances and seed life on earth as well?

Couple this with the fact that today an impressive array of interstellar molecules has been detected, and among the list are a host of hydrocarbons, polyaromatic hydrocarbons, the amino acid glycine, vinegar and the sugar glycolaldehyde. Such organic molecules that pervade interstellar clouds make up a considerable fraction of the available galactic carbon.

Actually, theories of how interstellar organic molecules might form via non-biological processes are still in their infancy and, in terms of explaining the available facts, they leave much to be desired.

N Chandra Wickramasinghe further speculates – “The overwhelming bulk of organic matter on Earth is indisputably derived from biology, much of it being degradation products of biology. Might not the same processes operate in the case of interstellar organic molecules? The polyaromatic hydrocarbons that are so abundant in the cosmos could have a similar origin to the organic pollutants that choke us in our major cities - products of degradation of biology, biologically generated fossil fuels in the urban case, cosmic microbiology in the interstellar clouds. The theory of cosmic panspermia that we have proposed leads us to argue that interstellar space could be a graveyard of cosmic life as well as its cradle. Only the minutest fraction (less than one part in a trillion) of the interstellar bacteria needs to retain viability, in dense shielded cloudlets of space, for panspermia to hold sway. Common sense dictates that this survival rate is unavoidable.”

So where does this lead us? Actually, to me this is a partial answer at best. Even if it were indisputably established that life did not independently evolve here on earth and was in fact seeded from the stars, that would still leave us clueless about the origins of life itself, wherever it might have evolved or originated.

In this sense, we are all aliens on this planet. Something to ponder on…. I will be writing more on this topic in coming days.

Sunday, October 19, 2008

Worldwide financial meltdown and what it means for the environment…

The worldwide financial meltdown is expected to have a dampening effect on demand for energy and goods. As wallets lighten, people will go out less and spend less on travel and luxury… at least that's what sound economics tells us.

Earlier this July (2008), when crude oil prices had started heading north and peaked around $147 per barrel, there was widespread hope in the clean energy lobby that, perhaps finally now, the world had a financial justification to vigorously pursue the development of clean energy alternatives.

The energy shock galvanized everyone into finding alternatives, especially from renewable sources and perhaps at a cheaper price point.

However, the financial meltdown poses a grave threat to efforts to turn back greenhouse gas emissions -- but it doesn't have to be that way, says the head of the international science panel that has authoritatively outlined the challenge for government policymakers.

It's crucial that carbon dioxide emissions that are on the increase be sent in the other direction by 2015, so the way to go is to use the current economic troubles to launch a recharged effort to rein in climate change, led by the next U.S. president, said R.K. Pachauri, chairman of the Intergovernmental Panel on Climate Change. Read more about his comments here.

Ramesh Jaura also writes on the same theme – enter the global financial crisis, exit action on climate change? According to him, that lingering apprehension is not shared by Pamela Cox, the World Bank's vice-president for Latin America and the Caribbean.

Adam Stein discusses the same issue. According to him – “Although it is very difficult to make predictions about the direction of the economy, it appears likely the current downturn will continue for some time. Which is bad for the climate, mainly because of the way that a weak economy interacts with the other items on the list. For example, slow growth saps the political will for dramatic action on climate change”.

A weak economy could at least temporarily bring fossil fuel prices down. We continue to believe that the long-term trend in fossil fuel prices is up, up, up, but, as mentioned, volatility will muddy the investment picture for clean energy.

Margaret Kriz talks about how the financial crisis is dimming hopes for U.S. climate legislation. “Environmentalists had been looking to a new president and a new Congress to pass legislation dealing with global warming next year. But with tough economic times looming, the passage of a sweeping climate change bill now appears far less likely”.

However, the jury is still out on this issue.

Clean and green technologies may end up a big winner in the current global financial crisis, say some investment professionals.

Billions of dollars in new investments have been made in clean/green tech such as renewable energy and energy efficiency in recent years. And, despite fears of a major recession in the U.S., nearly all investment professionals and institutions reported plans to introduce new investment opportunities before the end of 2009, according to a new survey of the 500-member Social Investment Forum (SIF), an association for socially and environmentally responsible investment firms.

"In the last two years the growth in the green economy has been tremendous," said Jack Robinson, president of Winslow Management Company in Boston.

"But the huge win for the green economy is the U.S. bank bailout programme," Robinson, a green investment expert, told IPS.

It turns out the near collapse of the U.S. financial system has a silver lining for the long-cash-starved alternative energy sector.

Another reason for green investor optimism is the virtual certainty that the U.S. will have a carbon cap and trade system by 2010 at the latest.

"The new Congress will regulate carbon emissions. The costs of fossil fuel will finally begin to reflect the costs of climate change," said Adam Seitchik, lead portfolio manager of Green Century Balanced Fund, and chief investment officer of Trillium Asset Management, Boston.

"Despite the credit crisis, the fundamentals of clean energy are so strong, they will find financing," Steitchik said in an interview.

Companies producing solar products have seen their revenues grow 60 to 140 percent this year and expect to reach 45 to 200 percent in 2009. One company, SunPower Corp., will see 2 billion dollars in sales in 2009, he said.

Although these companies are solidly profitable, their stock prices have plummeted just like all the others on Wall St. However, that means they are terrific investment opportunities even if their earnings decline due to slowing economies and the credit crisis, he said.

"As energy prices continue to fluctuate and the need to address climate change becomes ever more urgent, many investors want to blaze a trail for clean energy solutions that meet demand and respond to the impacts of climate change," said Lisa Woll, chief executive officer of the Washington, D.C.-based SIF.

But if every dark cloud has a silver lining, there's one over here as well... although this one looks green.

Tighter purse strings could force (and already are forcing) the world to save energy and look at alternative options to stretch the dollar. Bjorn Lomborg, Danish author of 'The Skeptical Environmentalist', echoes, "Hopefully the crisis will make us smarter in spending our money."

There could be a perceptible shift in investments towards energy efficiency. More small-scale and down-to-earth projects may reap the benefits. Consultants McKinsey & Co. opine that emissions-cutting measures such as better building insulation, fuel efficiency in vehicles, and more efficient lighting and air conditioning end up paying for themselves via lower energy bills. More optimistically, if a bad economy fuels a grassroots green movement, then that itself would be a big shift towards greener attitudes.

Some are still on the path of continuous investments. Sven Teske, renewable energy director for environmental group Greenpeace, said investments still made sense. He said that the wind energy market totaled $37 billion in 2007 and added more than 19 gigawatts to the grid.

The U.N. Climate Panel has estimated the costs of slowing climate change at only 0.12 percent of world gross domestic product to 2030, with vast benefits in avoiding human suffering. That is small change, but the world is feeling an instant economic pinch right now. Are we losing sight of the forest by looking at the trees? Hopefully not – otherwise the battle to save the planet could turn into a war fought too late.

Saturday, October 18, 2008

The Web, then and now and beyond… Web 2.0!

Following commercialization and the introduction of privately run Internet Service Providers in the 1980s, and its expansion into popular use in the 1990s, the Internet has had a drastic impact on culture and commerce. The Internet has touched lives profoundly all over the world. It has led to what is almost a revolution in the way the world operates. And it has brought humanity closer.

In the 1950s and early 1960s, prior to the widespread inter-networking that led to the Internet, most communication networks were limited by their nature to only allow communications between the stations on the network. Some networks had gateways or bridges between them, but these bridges were often limited or built specifically for a single use. One prevalent computer networking method was based on the central mainframe method, simply allowing its terminals to be connected via long leased lines. This method was used in the 1950s by Project RAND to support researchers such as Herbert Simon, in Pittsburgh, Pennsylvania, when collaborating across the continent with researchers in Sullivan, Illinois, on automated theorem proving and artificial intelligence.

I am not going to rant about the history of internet here. This is covered in a very comprehensive and lucid way in this excellent article at Wikipedia. Another great article, starting a little earlier than the 1950s… Roads and Crossroads of Internet History.

Walt Howe’s discourse about the history of internet is also very interesting to read.

Another interesting aspect is Al Gore’s contribution to the growth of internet.

The commercialization of the Internet brought extraordinary, almost to the point of being irrational, wealth to a few individuals. The euphoria created by sudden connectivity, unleashing the power of individual creativity and bringing it onto a world stage with very little cost was also paralleled in an irrational exuberance in the stock markets. This is also commonly known as the Dot Com Bubble.

It all started during the mid-1990s. The stock market soared on technology and Internet stocks, IPOs were all the rage, and the sky was the limit for stock prices. The masses believed there was a new world upon us, and the internet was to become the future of business. Then reality set in when the hype didn’t live up to its promises and the stock market crashed. If you take all of this for only its face value, all you see is what happens when a stock market gets overvalued and crashes, but if you look deeper you can find plenty of timeless lessons that every investor should learn.

Ian Peter’s History of the Internet - the Dotcom bubble nicely summarizes the Internet and Dot Com Bubble.

Web 2.0 is a buzzword that exploded in popularity sometime in 2005. In 2004, software guru Tim O'Reilly founded the Web 2.0 Conference, held annually in San Francisco. It has since expanded from a conference into a way of thinking, a new approach to doing business on the Internet. There is no standard definition for web 2.0, as it is a cluster of ideas rather than anything clear-cut. However, O'Reilly's comments on the topic are seen as having special authority, and rank among the top Google search results for the term.

According to Tim O’Reilly – “Web 2.0 is the term used, in the technology field, to refer to some exciting developments that promise to bring us to a new and improved internet experience. For developers this means learning new technologies and frameworks to provide better, faster and more sophisticated features”.

The concept of "Web 2.0" began with a conference brainstorming session between O'Reilly and MediaLive International. Dale Dougherty, web pioneer and O'Reilly VP, noted that far from having "crashed", the web was more important than ever, with exciting new applications and sites popping up with surprising regularity. What's more, the companies that had survived the collapse seemed to have some things in common. Could it be that the dot-com collapse marked some kind of turning point for the web, such that a call to action such as "Web 2.0" might make sense? We agreed that it did, and so the Web 2.0 Conference was born.

Paul Graham describes Web 2.0 as an idea that was meaningless when started, but has acquired a meaning during the intervening period. He then goes on to describe the various components that contribute to the new revolution.

According to Wikipedia - Web 2.0 is a term describing changing trends in the use of World Wide Web technology and web design that aims to enhance creativity, secure information sharing, collaboration and functionality of the web. Web 2.0 concepts have led to the development and evolution of web-based communities and its hosted services, such as social-networking sites, video sharing sites, wikis, blogs, and folksonomies. Although the term suggests a new version of the World Wide Web, it does not refer to an update to any technical specifications, but to changes in the ways software developers and end-users utilize the Web.

The first premise of web 2.0 is leveraging the power of the user. For example, fluid user tagging of content would be used instead of a centralized taxonomy. Web 2.0 entrepreneurs often consider the Long Tail, which is basically an observation that the vast majority of the attention market is based on niche content. Web 2.0 is radically decentralized, as in the case of BitTorrent, a collaborative downloading co-op that consumes a serious portion of all Internet traffic.

Blogs are considered web 2.0. Instead of centralized "personal home pages", blogs let people easily post as much or as little as they want as rarely or as frequently as they want. Feed aggregators ensure that people only need to visit a single site to see all the feeds they subscribe to. Comments are enabled everywhere, allowing people to participate rather than passively consume content.
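To make the feed-aggregator idea a little more concrete, here is a minimal sketch of what such a tool does under the hood: fetch a few RSS feeds and merge their items into one list. The URLs are placeholders, and a real aggregator would also handle Atom feeds, caching, encodings and error recovery.

```python
# A minimal, illustrative feed aggregator for simple RSS 2.0 feeds.
import urllib.request
import xml.etree.ElementTree as ET

FEEDS = [
    "https://example.com/blog-one/rss.xml",   # hypothetical feed URLs
    "https://example.com/blog-two/rss.xml",
]

def fetch_items(url):
    """Yield (title, link) pairs from a simple RSS 2.0 feed."""
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    for item in root.iter("item"):
        yield item.findtext("title", ""), item.findtext("link", "")

all_items = []
for feed_url in FEEDS:
    all_items.extend(fetch_items(feed_url))

# One place to see everything you subscribe to.
for title, link in all_items:
    print(f"{title} -> {link}")
```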

Web 2.0 technology encourages lightweight business models enabled by syndication of content and of service and by ease of picking-up by early adopters.

O'Reilly provided examples of companies or products that embody these principles in his description of his four levels in the hierarchy of Web 2.0 sites:

- Level-3 applications, the most "Web 2.0"-oriented, exist only on the Internet, deriving their effectiveness from the inter-human connections and from the network effects that Web 2.0 makes possible and growing in effectiveness in proportion as people make more use of them. O'Reilly gave eBay, Craigslist, Wikipedia, del.icio.us, Skype, dodgeball, and AdSense as examples.

- Level-2 applications can operate offline but gain advantages from going online. O'Reilly cited Flickr, which benefits from its shared photo-database and from its community-generated tag database.

- Level-1 applications operate offline but gain features online. O'Reilly pointed to Writely (now Google Docs & Spreadsheets) and iTunes (because of its music-store portion).

- Level-0 applications work as well offline as online. O'Reilly gave the examples of MapQuest, Yahoo! Local, and Google Maps (mapping-applications using contributions from users to advantage could rank as "level 2").

Non-web applications like email, instant-messaging clients, and the telephone fall outside the above hierarchy.

Web 2.0 websites allow users to do more than just retrieve information. They can build on the interactive facilities of "Web 1.0" to provide "Network as platform" computing, allowing users to run software-applications entirely through a browser. Users can own the data on a Web 2.0 site and exercise control over that data. These sites may have an "Architecture of participation" that encourages users to add value to the application as they use it. This stands in contrast to very old traditional websites, the sort which limited visitors to viewing and whose content only the site's owner could modify. Web 2.0 sites often feature a rich, user-friendly interface based on Ajax, OpenLaszlo, Flex or similar rich media.

Second Life and similar platforms provide an immersive environment where users can interact with each other in a far richer way than was possible through Web 1.0 tools such as chats, emails, etc.

In my next post, I’ll be talking more on how this new revolution is touching lives and what may be in store for humanity with this medium… almost bordering on the science fiction!

Wednesday, October 08, 2008

Grid computing…

In my previous post, I had mentioned how grid computing is helping in processing of petabytes of data generated by LHC. Today, let’s explore the concept of grid computing in a little more depth.

According to Wikipedia - Grid computing is a form of distributed computing whereby a "super and virtual computer" is composed of a cluster of networked, loosely-coupled computers, acting in concert to perform very large tasks. This technology has been applied to computationally-intensive scientific, mathematical, and academic problems through volunteer computing, and it is used in commercial enterprises for such diverse applications as drug discovery, economic forecasting, seismic analysis, and back-office data processing in support of e-commerce and web services.

The term grid computing originated in the early 1990s as a metaphor for making computer power as easy to access as an electric power grid, in Ian Foster and Carl Kesselman's seminal work, "The Grid: Blueprint for a New Computing Infrastructure".

An excellent resource on grid computing is available at - http://www.gridcomputing.com/

Later on, CPU scavenging and volunteer computing were popularized beginning in 1997 by distributed.net and later in 1999 by SETI@home to harness the power of networked PCs worldwide, in order to solve CPU-intensive research problems.

One of the most famous cycle-scavenging networks is SETI@home, which was using more than 3 million computers to achieve 23.37 sustained teraflops (979 lifetime teraflops) as of September 2001. Being deeply interested in the question of extra-terrestrial life, I myself participated in the SETI@home project with my old x486!

Grid computing requires the use of software that can divide and farm out pieces of a program to as many as several thousand computers. Grid computing can be thought of as distributed and large-scale cluster computing and as a form of network-distributed parallel processing. It can be confined to the network of computer workstations within a corporation or it can be a public collaboration (in which case it is also sometimes known as a form of peer-to-peer computing).
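To make the "divide and farm out" idea concrete, here is a minimal single-machine sketch: a large job is cut into independent pieces and handed to a pool of workers, and the partial results are combined at the end. A real grid would ship the pieces to other machines through middleware rather than to local processes, and the work function here is just an illustrative stand-in.

```python
# Divide a big job into independent pieces and farm them out to workers.
from multiprocessing import Pool

def process_piece(piece):
    """Pretend 'work unit': sum the squares of one slice of the data."""
    return sum(x * x for x in piece)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunk = 100_000
    pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]

    with Pool() as pool:                       # one worker per CPU core
        partial_results = pool.map(process_piece, pieces)

    print("total:", sum(partial_results))      # combine the partial results
```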

A number of corporations, professional groups, university consortiums, and other groups have developed or are developing frameworks and software for managing grid computing projects. The European Community (EU) is sponsoring a project for a grid for high-energy physics, earth observation, and biology applications. In the United States, the National Technology Grid is prototyping a computational grid for infrastructure and an access grid for people. Sun Microsystems offers Grid Engine software. Described as a distributed resource management (DRM) tool, Grid Engine allows engineers at companies like Sony and Synopsys to pool the computer cycles on up to 80 workstations at a time. (At this scale, grid computing can be seen as a more extreme case of load balancing.)

What distinguishes grid computing from typical cluster computing systems is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed. Also, while a computing grid may be dedicated to a specialized application, it is often constructed with the aid of general purpose grid software libraries and middleware.

"Distributed" or "grid" computing in general is a special type of parallel computing which relies on complete computers (with onboard CPU, storage, power supply, network interface, etc.) connected to a network (private, public or the Internet) by a conventional network interface, such as Ethernet. This is in contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-speed computer bus.

The primary advantage of distributed computing is that each node can be purchased as commodity hardware, which when combined can produce similar computing resources to a multiprocessor supercomputer, but at lower cost. This is due to the economies of scale of producing commodity hardware, compared to the lower efficiency of designing and constructing a small number of custom supercomputers. The primary performance disadvantage is that the various processors and local storage areas do not have high-speed connections. This arrangement is thus well-suited to applications in which multiple parallel computations can take place independently, without the need to communicate intermediate results between processors.

The high-end scalability of geographically dispersed grids is generally favorable, due to the low need for connectivity between nodes relative to the capacity of the public Internet.

There are also some differences in programming and deployment. It can be costly and difficult to write programs so that they can be run in the environment of a supercomputer, which may have a custom operating system, or require the program to address concurrency issues. If a problem can be adequately parallelized, a "thin" layer of "grid" infrastructure can allow conventional, standalone programs to run on multiple machines (but each given a different part of the same problem). This makes it possible to write and debug on a single conventional machine, and eliminates complications due to multiple instances of the same program running in the same shared memory and storage space at the same time.
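A small sketch of that "thin layer" idea: the very same standalone program runs unchanged on every machine, and the grid layer simply tells each copy which slice of the problem is its own – here via command-line arguments. The computation and the launch command below are hypothetical stand-ins.

```python
# worker.py - the same standalone program runs on every machine;
# the grid layer only decides which work unit each copy gets.
import sys

def run_work_unit(unit_index, total_units):
    # Each copy handles only its own slice of the full input range.
    total_items = 10_000_000
    slice_size = total_items // total_units
    start = unit_index * slice_size
    stop = start + slice_size
    result = sum(x % 7 for x in range(start, stop))   # stand-in computation
    # Results would normally be written where the coordinating node can collect them.
    print(f"unit {unit_index}/{total_units}: {result}")

if __name__ == "__main__":
    # e.g. the grid layer might launch:  python worker.py 3 100   on some machine
    run_work_unit(int(sys.argv[1]), int(sys.argv[2]))
```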

By the way, Nortel had also joined the Global Grid Forum (GGF) as far back as April 2004. The charter of the forum's GHPN (Grid High-Performance Networking) group is to establish a rich two-way communication between the community of Grid application developers and the networking communities (in both academia and industry).

Another well-known project is the World Community Grid. The World Community Grid's mission is to create the largest public computing grid benefiting humanity. This work is built on the belief that technological innovation combined with visionary scientific research and large-scale volunteerism can change our world for the better. IBM Corporation has donated the hardware, software, technical services and expertise to build the infrastructure for World Community Grid and provides free hosting, maintenance and support.

During 2007 the term cloud computing came into popularity, which is conceptually similar to the canonical Foster definition of grid computing (in terms of computing resources being consumed as electricity is from the power grid). Indeed grid computing is often (but not always) associated with the delivery of cloud computing systems.

All the major corporations of the world involved with computing industry in one way or the other are working towards this area. Microsoft is joining the cloud-computing trend, with CEO Steve Ballmer saying a "Windows Cloud" OS will be launched at Microsoft's Professional Developers Conference. Ballmer said Microsoft's "Windows Cloud" is aimed at developers creating cloud-computing apps. Microsoft, IBM, Intel and Oracle are all getting involved in cloud computing.

For example, Oracle is shifting its Grid Focus to the Application. "Think of WebLogic Application Grid as similar to a service-oriented architecture," said Mike Piech, an Oracle senior director of product marketing, during a recent briefing. "It's not a single product, not a single technology, but an infrastructure with a certain set of characteristics to provide on-demand behavior. Our approach is to have all the foundation-level middleware technologies play into that basic idea of the grid: pooling and sharing resources, using them more efficiently, but also providing a higher quality of service."

IBM is also not being left behind. Relevant resources from IBM can be located via this link.

Grid computing appears to be a promising trend for three reasons:

(1) Its ability to make more cost-effective use of a given amount of computer resources,

(2) As a way to solve problems that can't be approached without an enormous amount of computing power, and

(3) Because it suggests that the resources of many computers can be cooperatively and perhaps synergistically harnessed and managed as collaboration toward a common objective.

In some grid computing systems, the computers may collaborate rather than being directed by one managing computer. One likely area for the use of grid computing will be pervasive computing applications – those in which computers pervade our environment without our necessary awareness.

Tuesday, October 07, 2008

Climate Change and civilizations…

In my previous post on the topic of global warming and its impact on planet’s climate, I had delved into the causes behind global warming and varying opinions of different authority figures. This time, I would like to trace the effects of climate change on the evolution of humanity.

I came across an interesting article by Dr Nachiketa Das – where he discusses the topic of climate change and its effect on river Ganges.

He is of the opinion – “Global warming, now in 2008, is real, and upon us. How will global warming affect the rivers in India; will they all dry up? Can the holy Ganges, the river that has shaped and sustained Indian civilization through the ages, who we Indians revere as the life-giving mother, run dry! Many climate experts and environmentalists, in the last ten years, have been making dire predictions of the Ganges becoming seasonal. Some doomsayers have even gone to the extent of boldly predicting the river to be ephemeral by the year 2035, which is barely a generation away! Is it really possible that the Ganges will run dry by 2035! Is this calamity an inevitability that should be accepted as fait accompli, or is there anything we, the people of India, collectively can do to save the holy Mother Ganges from extinction”.

The concerns are valid. The Ganges originates from the Gangotri glacier, one of the largest valley glaciers in the western Himalayas. The 30.2 km long and 0.5 to 2.5 km wide Gangotri lies at altitudes between 4,120 and 7,000 m above sea level. The total area occupied by the glacier complex that feeds the Ganges was 260 square km in 2001, and it contained 40 cubic km of ice in 1999. During the 60-year period between 1936 and 1996, Gangotri receded by as much as 1,147 m, 850 m of which happened during the 25-year period between 1971 and 1996. In the three-year period between 1996 and 1999, Gangotri retreated by 76 m. When these figures are contrasted with the 2,000 m retreat over the last 200 years, the significantly accelerated rate of retreat becomes obvious.
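Converting those figures into average retreat rates makes the acceleration easier to see; a trivial calculation, shown here only for clarity:

```python
# Average retreat rate (metres per year) for each period quoted above.
periods = [
    ("last 200 years", 2000, 200),
    ("1936-1996",      1147,  60),
    ("1971-1996",       850,  25),
    ("1996-1999",        76,   3),
]

for label, metres, years in periods:
    print(f"{label:>15}: {metres / years:5.1f} m/year")
# roughly 10, 19, 34 and 25 m/year respectively
```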

Also, global warming does not mean a uniform amount of warming at each and every place on the globe. Although the vast majority of places on this earth will become hotter due to global warming, certain parts, however strange it may seem, will in fact become cooler.

Why? Well, it has to do with how the weather and heat transfer systems of our planet work. Unlike the frozen wastes of Mars or the lead-melting surface of Venus, our planet is blessed with a very complex, yet delicate, ecosystem.

The Gulf Stream is a vast oceanic current that carries warm waters from the tropics to the temperate regions of northern Europe and North America. This ocean current originates in the Gulf of Mexico, flows past the east coast of the USA and Newfoundland in Canada, and then crosses the Atlantic Ocean. It then branches into two, with the northern stream moving to northern Europe. The Gulf Stream is an 80 to 150 km wide and 1,000 m deep river of sea that transports 1.4 petawatts (1 petawatt is 1,000 million megawatts) of heat, which is equivalent to almost 100 times the current energy demand of the entire world. Around Cape Hatteras on the coast of North Carolina in the US, the Gulf Stream transports water at the rate of 80 million cubic meters per second, and is much bigger than any river system of the world; in fact the combined discharge of all the rivers flowing into the Atlantic is only 0.6 million cubic meters per second.
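Two quick sanity checks on those numbers – note that the roughly 15 terawatt figure for current world energy demand is my own assumed round number, not something from the article:

```python
# Sanity checks on the Gulf Stream figures quoted above.
heat_transport_w = 1.4e15          # 1.4 petawatts of heat
world_energy_demand_w = 15e12      # ~15 terawatts (assumed round number)
print(f"heat transport vs world demand: {heat_transport_w / world_energy_demand_w:.0f}x")   # ~90-100x

gulf_stream_flow = 80e6            # cubic metres per second at Cape Hatteras
atlantic_rivers_flow = 0.6e6       # combined flow of all rivers into the Atlantic
print(f"Gulf Stream vs all Atlantic rivers: {gulf_stream_flow / atlantic_rivers_flow:.0f}x") # ~130x
```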

The Gulf Stream has significant localized effects on the climate of the east coasts of Florida and Massachusetts in the US, and on the west coast of Britain, which is a good few degrees warmer than the east coast. The warming effect of the Gulf Stream is most dramatic in the western islands of Scotland, so much so that the small township of Plockton (latitude 57.33°N), located east of the Isle of Skye, has a mild climate that allows sub-tropical cabbage palm trees to grow. The local climate in Plockton, in the absence of the Gulf Stream, would be freezing cold, as latitudinally it lies further north than Moscow (latitude 55.45°N) by almost two degrees.

Due to global warming, there is every possibility that the Gulf Stream may change course or lose its strength. In fact, in November 2004 it completely stopped for a full ten days, and there are reports saying that over the last 50 years (since 1957) its deep return flow has weakened by as much as 30%. Any change in the characteristics of the Gulf Stream would cause significant localized cooling in Scandinavia and Britain. At a time of global warming, the western islands of Scotland would experience substantial cooling.

The effects on human population and civilization can be imagined.

This, however, is not the first time that climate change has affected humanity on such a scale.

The river Sarasvati, for example, is widely considered to have supported the Harappan culture, and the movement and ultimate decline of that culture are often attributed to climate change and its effect on the river. Some Rigvedic verses (6.61.2-13) indicate that the Sarasvati originated in the hills or mountains (giri), where she "burst with her strong waves the ridges of the hills (giri)". It is a matter of interpretation whether this refers merely to the Himalayan foothills, where the present-day Sarasvati (Sarsuti) river rises, or to the high Himalayas proper. The Sarasvati is described as a river swollen (pinvamānā) by other rivers (sindhubhih). Another reference to the Sarasvati is in the geographical enumeration of the rivers in the late Rigvedic Nadistuti sukta (10.75.5, a verse that enumerates all the important rivers from the Ganges in the east to the Indus in the west in strict geographical order): in the list "Ganga, Yamuna, Sarasvati, Shutudri", the Sarasvati is placed between the Yamuna and the Sutlej, consistent with its identification with the Ghaggar. It is clear, therefore, that even though she had unmistakably lost much of her former prominence, the Sarasvati remains characterized as a river goddess throughout the Rigveda, being the home river of the Puru and, later on, the Kuru tribe.

While the Sarasvati River may now survive largely in memory, its influence on Indian history cannot be discounted. Nor can its decline be overlooked.

However, climate change need not always be so bad for human culture. After all, we present-day humans ascended to our current position, in evolutionary terms, thanks to the ending of an ICE AGE.

The Sahara, the largest hot desert on our planet, used to be a very lush and green place before a change in climate led to its present state and forced many human tribes into the valley of the Nile, leading up to its fabulous civilization and myriad dynasties.

Sustenance played a crucial role in the founding of Egyptian civilization, and the Nile was an unending source of it. The Nile made the land surrounding it extremely fertile when it flooded annually. The Egyptians were able to cultivate wheat and other crops along the Nile, providing food for the general population. The Nile’s waters also attracted game such as water buffalo and, after the Persians introduced them in the 7th century BC, camels. These animals could be killed for meat, or could be captured, tamed and used for ploughing or, in the camels' case, for travelling. Water was vital to both people and livestock, and the Nile was also a convenient and efficient means of transporting people and goods. The river thus played a major role in politics and social life.

But at the end of the day, it was climate change which led to the rise and fall of these civilizations….

So what are we worried about?

It’s just that we humans are far more numerous now, and spread over more of the planet, than at any time in our history. Climate change this time turns out to be very inconvenient indeed; it is because it might lead to the decline of our current global civilization and ultimately bring misery to untold billions that we are so concerned.

We humans seem to be standing too much in the way of nature to be left unscathed by the fury that will be unleashed by the current spell of climate change, arguably induced by our own actions…

Monday, October 06, 2008

LHC and grid computing…

Hello Folks! We are back to my favorite theme of LHC!

The Large Hadron Collider (LHC) at CERN near Geneva is the largest scientific instrument on the planet. When it begins operations, it will produce roughly 15 petabytes (15 million gigabytes) of data annually, which thousands of scientists around the world will access and analyze. The first phase of the grid actually went online on 29 September 2003.

The world's largest computing grid is all set to tackle the biggest-ever data challenge from the most powerful accelerator, the Large Hadron Collider (LHC). Three weeks after the first particle beams were injected into the LHC, the Worldwide LHC Computing Grid combines the power of more than 140 computer centres in 33 countries to analyze and manage more than 15 million gigabytes of LHC data every year. The mission of the Worldwide LHC Computing Grid (LCG) project is to build and maintain a data storage and analysis infrastructure for the entire high-energy physics community that will use the LHC.
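To get a feel for what 15 petabytes a year means as a sustained data rate, here is a rough Python sketch (the conversion is mine; decimal units, 1 PB = 10^15 bytes, are assumed):

# What ~15 petabytes per year means as a sustained, year-round data rate.
annual_bytes = 15e15               # 15 PB, using decimal units
seconds_per_year = 365 * 24 * 3600

avg_bytes_per_s = annual_bytes / seconds_per_year
print(f"Average rate: {avg_bytes_per_s / 1e6:.0f} MB/s")
print(f"Per day:      {avg_bytes_per_s * 86400 / 1e12:.1f} TB")

That is roughly 470 MB every second, around the clock, or on the order of 40 TB a day averaged over a year; instantaneous rates during data taking are of course higher.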

Just to refresh your knowledge of CERN and the LHC - CERN is the European Laboratory for Particle Physics, one of the world's most prestigious centres for fundamental research. The laboratory has built the Large Hadron Collider, the most ambitious scientific undertaking the world has yet seen. The LHC will collide tiny fragments of matter head-on to unravel the fundamental laws of nature. Having seen its first beams in September 2008, it will be used to answer some of the most fundamental questions of science by some 7,000 scientists from universities and laboratories all around the world.

"Particle physics projects such as the LHC have been a driving force for the development of worldwide computing grids," said Ed Seidel, director of the NSF Office of Cyber infrastructure. "The benefits from these grids are now being reaped in areas as diverse as mathematical modeling and drug discovery."

"Open Science Grid members have put an incredible amount of time and effort in developing a nationwide (US) computing system that is already at work supporting America's 1,200 LHC physicists and their colleagues from other sciences," said OSG executive director Ruth Pordes from DOE's Fermi National Accelerator Lab.

The data from the LHC experiments will be distributed around the globe according to a four-tiered model. A primary backup will be recorded on tape at CERN, the “Tier-0” centre of the LCG. After initial processing, this data will be distributed to a series of Tier-1 centres, large computer centres with sufficient storage capacity and with round-the-clock support for the Grid. The Tier-1 centres will make data available to Tier-2 centres, each consisting of one or several collaborating computing facilities, which can store sufficient data and provide adequate computing power for specific analysis tasks. Dedicated optical-fibre networks distribute LHC data from CERN in Geneva, Switzerland to 11 major Tier-1 computer centres in Europe, North America and Asia, including those at DOE's Brookhaven National Laboratory in New York and Fermi National Accelerator Laboratory in Illinois. From these, data is dispatched to more than 140 Tier-2 centres around the world, including 12 in the US. Individual scientists will access these facilities through Tier-3 computing resources, which can consist of local clusters in a university department or even individual PCs, and which may be allocated to the LCG on a regular basis.
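As a purely illustrative sketch of that fan-out (not the actual Grid middleware; the site names and the one-branch hierarchy below are placeholders of mine), the tiered distribution can be pictured like this:

# A toy model of the four-tier distribution described above. The site names
# and fan-out are illustrative placeholders, not the real LCG site list.
from dataclasses import dataclass, field

@dataclass
class Site:
    name: str
    tier: int
    children: list = field(default_factory=list)

def distribute(dataset, site, depth=0):
    # Recursively "ship" a dataset from a site to every centre below it.
    print("  " * depth + f"Tier-{site.tier} {site.name}: received {dataset}")
    for child in site.children:
        distribute(dataset, child, depth + 1)

# Tier-0 at CERN feeds a Tier-1 centre, which feeds a Tier-2 centre,
# which in turn serves a Tier-3 resource such as a university cluster.
tier3 = Site("department cluster (example Tier-3)", 3)
tier2 = Site("university computing facility (example Tier-2)", 2, [tier3])
tier1 = Site("national lab centre (example Tier-1)", 1, [tier2])
tier0 = Site("CERN", 0, [tier1])

distribute("processed collision dataset", tier0)

Running it simply prints the path a dataset takes from CERN down to a local cluster.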

"Our ability to manage data at this scale is the product of several years of intense testing," said Ian Bird, leader of the Worldwide LHC Computing Grid project.

"Today's result demonstrates the excellent and successful collaboration we have enjoyed with countries all over the world. Without these international partnerships, such an achievement would be impossible," he said.

"When the LHC starts running at full speed, it will produce enough data to fill about six CDs per second," said Michael Ernst, director of Brookhaven National Laboratory's Tier-1 Computing Centre.

"As the first point of contact for LHC data in the US, the computing centres at Brookhaven and Fermilab are responsible for storing and distributing a great amount of this data for use by scientists around the country. We've spent years ramping up to this point, and now, we're excited to help uncover some of the numerous secrets nature is still hiding from us," informed Ernst.

Physicists in the US and around the world will sift through the LHC data torrent in search of tiny signals that will lead to discoveries about the nature of the physical universe. Through their distributed computing infrastructures, these physicists also help other scientific researchers increase their use of computing and storage for broader discovery.

"Grid computing allows university research groups at home and abroad to fully participate in the LHC project while fostering positive collaboration across different scientific departments on many campuses," said Ken Bloom from the University of Nebraska-Lincoln, manager for seven Tier-2 sites in the US.

Saturday, October 04, 2008

US Financial Earthquake; bailout rejected, what’s next?

Stocks suffered their worst decline ever on a point basis on Monday afternoon, as the failure of the House of Representatives to pass the Wall Street bailout legislation led to widespread selling of stocks.

If you haven’t noticed already, US politicians are now claiming their decisions do not constitute a “bailout” but rather an “insurance program” aimed at protecting the consumer from taking on close to a trillion dollars of new debt. The reason for the change in terminology is that the politicians don’t want to be associated with a “bailout”; the term “insurance” appears more dignified and safe. Plus, it creates great spin for the media, and the average consumer will simply equate this legislation with “safety in the markets”, which the politicians will take credit for, standing proud for protecting American consumers and the global financial markets in general.

I think we are watching economic history unfold.

No, not the kind that most analysts are whining about: the excess debt that will make the prophecy of a US meltdown a reality sooner rather than later.

Everyone takes some risk. Some take more risk than others. Gambling in Las Vegas has risk. Starting a business has risk. Those who take risk hope to receive the reward for it. Calculating risk and providing protection against it is the business of insurance companies.

As Ajit Dayal puts it - Hank Paulson may go down in history as the man who took the world from its path of greed and destruction to a more “socialist” path. And he may help reduce the US debt by selling the companies he is buying at a huge profit.

Hundreds of billions of dollars of profit.

Enough money to stay another 436 days in Iraq and Afghanistan.

Enough profit to cover - for 269 days – the oil required to fill all those 50 million cars driving across USA.

Hank’s deals may not save the USA, but they will postpone the inevitable.

The Secretary of the Treasury, Hank Paulson, is not a typical government bureaucrat or a “do-gooder”. Nor is he an academic who made it to the top job because of some theoretical paper on “inflation targeting” or “interest rates and cost of money” or “public ownership of private goods”.

Oh, no.
No, no.

Remember: Mr. Paulson worked at Goldman Sachs. He was a key member responsible for the firm’s global success. This man is a walking PowerPoint on building businesses – successfully. He is a turn-around artist.

I think he knows what he is doing: there is method in this madness. Or is there?

AIG, one of the world’s largest insurers, took risk and did not calculate the potential impact of the risk it took. Subsequently the US Government decided to bail AIG out to the tune of $85 billion, in addition to the roughly $600 billion of other financial risk gone bad at banks and Wall Street firms.

The current U.S. Government bailout program represents a significant shift for financial markets, and for individuals and businesses alike, in that the US government has now become the largest insurer in the world. However, the government’s insurance program is unlike any insurance offered by the private sector. It basically says to the business markets: don’t worry, if you fail we’ll back you up. What a deal for big business, but what about small businesses and individuals? If a small business or a family fails financially, will the government come to our rescue as well?

The problem with the proposed plan, and with the attitude of US politicians (indeed any politician, for that matter), is that both are fundamentally flawed in their assumptions. Risk management is founded on a set of choices about risk. Individually and institutionally, there are five choices one makes relative to managing risk:

- Avoid the risk: Don’t participate in activities or endeavors that increase risk
- Spread (share) the risk: In the insurance world this is known as reinsurance
- Transfer the risk: Move the consequences of the risk for others to assume
- Assume the risk: Accept the risk and bear its consequences yourself
- Mitigate the risk: Take steps to reduce the likelihood or impact of the risk

The Wall Street firms, banks and AIG first chose to assume the risk they took, which then fueled failure. By declaring bankruptcy they in essence transferred the risk for someone else to assume. The U.S. Government, and our beloved politicians, looked at the situation that their own previous legislation had created and decided that the risk of a global economic meltdown needed to be mitigated by the US government assuming the risk of those that had failed to manage the risk to begin with.

The problem with this thinking and the related decisions is that by assuming the risk the US government is actually increasing, rather than mitigating, the risk. How many other banks, insurance companies and Wall Street firms will fail and expect the government to bail them out? How many “big businesses” (the auto industry has already begun) will ask the government to help them mitigate the risk of their own decisions?

The proposed “bailout” is nothing more than a band-aid on a dam of rising expectations, created and enhanced by those who think their past and current decisions can be covered up or protected from risk. The truth is that their decisions are fueling the risk and the related cost, which will eventually fall onto the back of every consumer.

The government, any government, is funded by the people, and any assumption of risk falls onto “we the people”. Ultimately, we the people have to pay for any bailout or insurance program created by those whom “we the people” elect to make these decisions for us.