Showing posts with label Humanity. Show all posts

Monday, November 02, 2015

“Who are we? We find that we live on an insignificant planet of a humdrum star lost in a galaxy tucked away in some forgotten corner of a universe in which there are far more galaxies than people.” – Carl Sagan

I return to my favourite theme of the Universe and all of creation. It is somewhat grand, with a hint of vagueness, to restart my outpouring of thoughts by focusing my consciousness on the grandest stage of them all. How did material complexity, then single-cell life, then animals and consciousness emerge from chaos? We still don’t know how many universes there are. Is everything the result of a meaningless cosmic sneeze, or of an intentional First Cause?

As Robert Lanza puts it – “You’re not an object — you are your consciousness. You’re a unified being, not just your wriggling arm or foot, but part of a larger equation that includes all the colors, sensations and objects you perceive. If you divorce one side of the equation from the other you cease to exist. Indeed, experiments confirm that particles only exist with real properties if they’re observed. Until the mind sets the scaffolding of things in place, they can’t be thought of as having any real existence — neither duration nor position in space.”

As the great physicist John Wheeler said, “No phenomenon is a real phenomenon until it is an observed phenomenon.” That’s why in real experiments, not just the properties of matter — but space and time themselves — depend on the observer. Our consciousness isn’t just part of the equation — the equation is us (I guess).


I am greatly moved, almost emotionally, by one of Post-Impressionist Paul Gauguin's most famous paintings – “Where Do We Come From? What Are We? Where Are We Going?” Our quest to understand who we are is in part a quest to understand the destiny and purpose of our origin and existence. For the forthcoming few weeks, I shall be focusing my attention on these aspects before moving on to more humdrum issues requiring focused thought.

Sunday, December 06, 2009

The mystery of human mortality

Living forever is just for Hollywood. But why do humans age? We are born with a robust toolbox full of mechanisms to fight disease and injury, which we might think should arm us against stiff joints and other ailments. But as we age, the body's repair mechanisms get out of shape. In effect, our resilience to physical injury and stress declines. Theories for why people age can be divided into two categories:

1) Like other human characteristics, aging could just be a part of human genetics and is somehow beneficial.

2) In the less optimistic view, aging has no purpose and results from cellular damage that occurs over a person's lifetime.

A handful of researchers, however, think science will ultimately delay aging at least long enough to double life spans.

Thursday, October 15, 2009

Some pearls of wisdom

ONE. Give people more than they expect and do it cheerfully.

TWO. Marry a man/woman you love to talk to. As you get older, their conversational skills will be as important as any other.

THREE. Don't believe all you hear, spend all you have or sleep all you want.

FOUR. When you say, 'I love you,' mean it.

FIVE. When you say, 'I'm sorry,' look the person in the eye.

SIX. Be engaged at least six months before you get married.

SEVEN. Believe in love at first sight.

EIGHT. Never laugh at anyone's dreams. People who don't have dreams don't have much.

NINE. Love deeply and passionately. You might get hurt but it's the only way to live life completely.

TEN. In disagreements, fight fairly. No name calling.

ELEVEN. Don't judge people by their relatives.

TWELVE. Talk slowly but think quickly.

THIRTEEN. When someone asks you a question you don't want to answer, smile and ask, 'Why do you want to know?'

FOURTEEN. Remember that great love and great achievements involve great risk.

FIFTEEN. Say 'bless you' when you hear someone sneeze.

SIXTEEN. When you lose, don't lose the lesson.

SEVENTEEN. Remember the three R's: Respect for self; Respect for others; and Responsibility for all your actions.

EIGHTEEN. Don't let a little dispute injure a great friendship.

NINETEEN. When you realize you've made a mistake, take immediate steps to correct it.

TWENTY. Smile when picking up the phone. The caller will hear it in your voice.

TWENTY-ONE. Spend some time alone.

Wednesday, September 30, 2009

Amazing Art…

I found this amazing form/expression of art and wanted to share it with you all...

This stunning crop art has sprung up across rice fields in Japan. But this is no alien creation - the designs have been cleverly planted.

Farmers creating the huge displays use no ink or dye. Instead, different colors of rice plants have been precisely and strategically arranged and grown in the paddy fields. As summer progresses and the plants shoot up, the detailed artwork begins to emerge.



A Sengoku warrior on horseback has been created from hundreds of thousands of rice plants, the colours created by using different varieties, in the village of Inakadate in Japan. The largest and finest work is grown in the Aomori village of Inakadate, 600 miles north of Tokyo, where the tradition began in 1993. The village has now earned a reputation for its agricultural artistry, and this year the enormous pictures of Napoleon and a Sengoku-period warrior, both on horseback, are visible in a pair of fields adjacent to the town hall. More than 150,000 visitors come to Inakadate, where just 8,700 people live, every summer to see the extraordinary murals. Each year hundreds of volunteers and villagers plant four different varieties of rice in late May across huge swathes of paddy fields.

Napoleon on horseback can be seen from the skies, created by precision planting and months of planning between villagers and farmers in Inakadate.

Fictional warrior Naoe Kanetsugu and his wife Osen appear in fields in the town of Yonezawa, Japan. And over the past few years, other villages have joined in with the plant designs. Various artwork has popped up in other rice-farming areas of Japan this year, including designs of deer dancers.

Smaller works of crop art can be seen in other rice-farming areas of Japan such as this image of Doraemon and deer dancers. The farmers create the murals by planting little purple and yellow-leafed kodaimai rice along with their local green-leafed tsugaru roman variety to create the coloured patterns between planting and harvesting in September. The murals in Inakadate cover 15,000 square metres of paddy fields. From ground level, the designs are invisible, and viewers have to climb the mock castle tower of the village office to get a glimpse of the work. Rice-paddy art was started there in 1993 as a local revitalization project, an idea that grew out of meetings of the village committee.
Closer to the image, the careful placement of thousands of rice plants in the paddy fields can be seen.


The different varieties of rice plant grow alongside each other to create the masterpieces. In the first nine years, the village office workers and local farmers grew a simple design of Mount Iwaki every year. But their ideas grew more complicated and attracted more attention. In 2005, agreements between landowners allowed the creation of enormous rice paddy art. A year later, organisers used computers to precisely plot the planting of the four differently colored rice varieties that bring the images to life.

Monday, September 21, 2009

The clutter in our lives

As I sat quietly in my living room this afternoon, a thought occurred to me: why would anyone want to carry around needless burdens? That's what clutter is. It drains one's energy, slows one's progress, and eats away at our limited time and space. Left unabated, it spreads all over one's life, becoming emotional, mental, and physical clutter.

Have you gone through several iterations of de-cluttering, only to feel like you really didn’t make much progress? The likely reason for this is that our lives are filled with clutter, rather than us just having a few areas of clutter.

Our lives tend to accumulate clutter in every corner – on our desks, in our drawers, on our shelves at home, in our closets, in our computers, even in the activities that we do and our relationships!

We start out in life unfulfilled, with nothing, and we start acquiring stuff. At some point, we peak (the top of the curve) when we have enough. That’s the magic thing that we’re always looking for: enough.

I wonder if this has anything to do with the endowment effect.

The endowment effect (also known as divestiture aversion) is a hypothesis that people value a good or service more once their property right to it has been established. In other words, people place a higher value on objects they own than objects that they do not. In one experiment, people demanded a higher price for a coffee mug that had been given to them but put a lower price on one they did not yet own. The endowment effect was described as inconsistent with standard economic theory which asserts that a person's willingness to pay (WTP) for a good should be equal to their willingness to accept (WTA) compensation to be deprived of the good.

The effect was first theorized by Richard Thaler. It is a specific form, linked to ownership, of status quo bias. Although it differs from loss aversion, a prospect theory concept, the two biases reinforce each other in cases where the asset price has fallen compared to the owner's buying price. This bias also has a few similarities with commitment and attachment.

Loss aversion, by the way, was first proposed as an explanation for the endowment effect - the fact that people place a higher value on a good that they own than on an identical good that they do not own - by Kahneman, Knetsch, and Thaler (1990). Loss aversion and the endowment effect lead to a violation of the Coase theorem - that "the allocation of resources will be independent of the assignment of property rights when costless trades are possible".
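As a toy illustration of that WTA/WTP gap, one can sketch the effect in a few lines of Python. The valuations and the loss-aversion coefficient below are illustrative assumptions, not data from the actual mug experiments:

```python
import random

random.seed(0)

LAMBDA = 2.0  # assumed loss-aversion coefficient: losses loom roughly twice as large

def wtp(base_value):
    """Buyer's willingness to pay: just the underlying consumption value."""
    return base_value

def wta(base_value, lam=LAMBDA):
    """Owner's willingness to accept: parting with the mug is coded as a loss,
    so the base value is scaled up by the loss-aversion coefficient."""
    return lam * base_value

# A population that all share the same distribution of underlying mug valuations
values = [random.uniform(2.0, 6.0) for _ in range(1000)]

mean_wtp = sum(wtp(v) for v in values) / len(values)
mean_wta = sum(wta(v) for v in values) / len(values)

print(f"mean WTP = {mean_wtp:.2f}, mean WTA = {mean_wta:.2f}")
print(f"WTA/WTP ratio = {mean_wta / mean_wtp:.1f}")  # ratio equals LAMBDA by construction
```

The point of the sketch is only that identical underlying valuations produce a systematic selling-price premium once ownership is established - exactly the inconsistency with WTP = WTA that standard theory predicts.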

Sigh… here I go again, getting too deep into the technicalities of what and why we clutter our lives with stuff we don’t often need or need to hang on to… we start acquiring from the day we are able to and that age is pretty low for humans…

But, of course, we don’t stop acquiring. We enter the zone where we begin having more than enough, and therefore begin accumulating clutter. And most of us accumulate it all our lives. The sad thing is, we don’t just have more stuff than we need; we now have stuff we don’t need that demands our attention in some way: we have to maintain it, fix it, continue to make payments on it, store it, and so on.

So we start becoming less fulfilled instead of more so.

I can give numerous examples, but the most relevant ones seem to be the thousands of books, some going back to my college days over 10 years ago, which I still hoard, or my wife’s jewelry: enough to start a store. Closets full of clothes I haven’t worn in a long while, and so on.

Clutter is usually thought of as things we acquire or accumulate. And, when you stop and think about it, the clutter goes beyond purchases: we also clutter our lives with activities that are of no real value to us - like watching serials which serve no practical purpose except to pass the time. Most of us, I guess, end up doing it unconsciously. The things we don't do, but should do, clutter our minds with apprehension and stress. Unwritten letters, unpaid bills, unanswered phone calls, and unattended tasks and obligations take their toll on our lives. They create a slow energy drain and are as distracting as an endless humming in our heads. We can free ourselves from such needless headaches by taking the time to do whatever needs to be done. We can't do everything, but we should do the essentials.

So, what do we do? We can do some de-cluttering. It seems to me, though, that what we most need to work on is our constant desire to fill our lives with more “stuff,” be that unnecessary purchases or activities that are of no value. I only suggest it as something we should be aware of and work on to the extent that we can on avoiding it or getting out of the trap that we lay for ourselves.

I don’t have any advice for you on how to do that, at least not at this time, but I did come across a good article by Chuck Gallozzi – and while there is no mystical answer to anything there, it does offer some practical advice!

Saturday, August 15, 2009

Long Term Implications of Longevity on Humanity

As I sat down after my initial indignation at the callous - or, to be more accurate, slow and inefficient - approach of the AMEX travel counselors towards solving a real-life problem, my attention drifted towards more abstract thoughts.

The idea of extremely long life has been a fascinating one. Immortality (or eternal life) is the concept of living in a physical or spiritual form for an infinite or inconceivably vast length of time. As immortality is the negation of mortality—not dying or not being subject to death—it has been a subject of fascination to humanity since at least the beginning of history.

To present day science, it is not known whether human physical immortality is an achievable condition. Biological forms have inherent limitations — for example, their fragility and slow adaptability to changing environments, which may or may not be able to be overcome through medical interventions or engineering.

However, as of 2009, we do know that natural selection has developed biological immortality in at least one species, the jellyfish Turritopsis nutricula, one consequence of which is a worldwide population explosion of the organism.

According to Wikipedia - Biological immortality is the absence of a sustained increase in rate of mortality as a function of chronological age. A cell or organism that does not experience, or at some future point will cease, aging, is biologically immortal. However this definition of immortality was challenged in the new "Handbook of the Biology of Aging", because the increase in rate of mortality as a function of chronological age may be negligible at extremely old ages (late-life mortality plateau). But even though the rate of mortality ceases to increase in old age, those rates are very high.
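The shape described above - a mortality rate rising with age and then flattening at a high late-life plateau - can be sketched with a capped Gompertz-style curve. The parameters here are purely illustrative assumptions, not fitted to any real life table:

```python
import math

# Toy Gompertz mortality model (parameters are illustrative, not fitted data):
# the hazard grows exponentially with age, then flattens at a late-life plateau.
A = 0.0001     # assumed baseline annual mortality rate
B = 0.085      # assumed exponential growth rate of mortality with age
PLATEAU = 0.5  # assumed late-life ceiling on annual mortality

def hazard(age):
    """Annual probability-of-death proxy: a Gompertz curve capped at the plateau."""
    return min(A * math.exp(B * age), PLATEAU)

for age in (30, 60, 90, 105, 120):
    print(f"age {age:3d}: annual mortality ~ {hazard(age):.3f}")
```

The sketch captures both halves of the Wikipedia point: beyond the plateau age the rate of mortality stops increasing (so, by the strict definition, aging has "stopped"), yet the flat rate itself remains very high.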

Biologists have chosen the word immortal to designate cells that are not limited by the Hayflick limit (where cells no longer divide because of DNA damage or shortened telomeres). (Prior to the work of Leonard Hayflick there was the erroneous belief fostered by Alexis Carrel that all normal somatic cells are immortal.)

However, there is definite agreement on one fact at least - The immortality of a single cell has never been observed.

That would be the day!

If it were possible to have immortality in a single cell – this would naturally lead to immortality of a being – steady-state – never decaying, never dying – perpetual existence in a locked state.

Not even regeneration would be needed. Now, what would a time lord think of that idea?

The absence of aging would provide humans with biological immortality, but not invulnerability to death by physical trauma: according to 2002 statistical data, an individual free of aging would, on average, suffer fatal trauma only once in every one thousand seven hundred years.
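Taking that 1,700-year figure at face value as a constant annual risk, a quick back-of-the-envelope sketch shows what such an "ageless" lifespan would look like (the constant-hazard model is an assumption for illustration only):

```python
import math

P_TRAUMA = 1 / 1700  # assumed constant annual probability of fatal trauma (the 2002 figure)

def survival(years, p=P_TRAUMA):
    """Probability of surviving a given number of years under a constant annual risk p."""
    return (1 - p) ** years

# Median "ageless" lifespan: the t that solves (1 - p)^t = 0.5
median = math.log(0.5) / math.log(1 - P_TRAUMA)

print(f"survive 100 years:  {survival(100):.3f}")
print(f"survive 1000 years: {survival(1000):.3f}")
print(f"median 'ageless' lifespan: {median:.0f} years")
```

So even with aging abolished, trauma alone would cap the typical lifespan at roughly twelve centuries - immortality in the biological sense, but far from literal immortality.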

Some life extensionists, such as those who practice cryonics, have the hope that humans may someday become biologically immortal. This would not be the same as literal immortality, since people are still susceptible to death through external circumstances (either deliberate or accidental).

But is immortality really desirable?

Let’s take a look at some arguments for the undesirability of immortality.

Physical immortality has also been imagined as a form of eternal torment, as in Mary Shelley's short story "The Mortal Immortal", whose protagonist witnesses everyone he cares about dying around him. Jorge Luis Borges explored the idea that life gets its meaning from death in the short story "The Immortal": an entire society, having achieved immortality, found time becoming infinite, and so found no motivation for any action.

Jonathan Swift (author of Gulliver’s Travels) also wrote against immortality. Of course, everyone assumes that to live forever means to live forever in a state of youth. Swift was rather astute to point out that this may not always be the case: immortality trapped in an old body is worse than a curse!

Humans being social animals, to live alone for a very long time would be quite undesirable indeed.

Ethics of immortality

The possibility of clinical immortality raises a host of medical, philosophical, and religious issues and ethical questions. These include persistent vegetative states, the nature of personality over time, technology to mimic or copy the mind or its processes, social and economic disparities created by longevity, and survival of the heat death of the universe.

The social and emotional consequences of achieving extremely long life can be very hard to imagine. We have, in our times, seen the average human lifespan increase tremendously. Today we do not find it odd to expect that an average human - not subject to disease or trauma - will live to be beyond 75 terrestrial years. Of course, the average lifespan varies by region of the earth and by gene pool as well.

However, our social customs and society are still geared very much towards the old medieval concept of aging. The so-called “middle age” has been steadily shifting towards the 50s, and we may yet see the 80s being called middle age!

But what of the world population? The social consequences of people continuing to live the way we do today would include tremendous pressure on the resources of this planet.

There are, of course, religious viewpoints on this very sensitive subject - but I will deliberately refrain from venturing in that direction.

In my view, this planet cannot afford to have extremely long lived humans with their current pattern of consumption at figures of billions. We either need to find a new habitable planet to expand to or vastly change the way we live and consume from the environment.

In my future posts, I will concentrate more on emotional and social consequences of extremely long life spans…

Saturday, March 21, 2009

Internet and mankind

Sleeping on something sometimes produces profound thoughts… or so I have heard. During my well-earned afternoon siesta today, after a rather hectic week at work, the dreams hit on a theme we usually ignore - the Internet and its potential impact on our society, culture and civilization.

Over the next several hundred years, as mankind grows in numbers, progresses in technology, and generally, learns how to control its environment with more accuracy and efficiency, the structure of society is likely to change in some very significant ways.

According to Wikipedia

Prior to the widespread internetworking that led to the Internet, most communication networks were limited by their nature to only allow communications between the stations on the network, and the prevalent computer networking method was based on the central mainframe computer model. Several research programs began to explore and articulate principles of networking between separate physical networks. This led to the development of the packet switching model of digital networking. These research efforts included those of the laboratories of Donald Davies (NPL), Paul Baran (RAND Corporation), and Leonard Kleinrock at MIT and UCLA.

The research led to the development of several packet-switched networking solutions in the late 1960s and 1970s, including ARPANET and the X.25 protocols. Additionally, public access and hobbyist networking systems grew in popularity, including unix-to-unix copy (UUCP) and FidoNet. They were however still disjointed separate networks, served only by limited gateways between networks. This led to the application of packet switching to develop a protocol for inter-networking, where multiple different networks could be joined together into a super-framework of networks. By defining a simple common network system, the Internet protocol suite, the concept of the network could be separated from its physical implementation. This spread of inter-network began to form into the idea of a global inter-network that would be called 'The Internet', and this began to quickly spread as existing networks were converted to become compatible with this. This spread quickly across the advanced telecommunication networks of the western world, and then began to penetrate into the rest of the world as it became the de-facto international standard and global network. However, the disparity of growth led to a digital divide that is still a concern today.

Following commercialization and introduction of privately run Internet Service Providers in the 1980s, and its expansion into popular use in the 1990s, the Internet has had a drastic impact on culture and commerce. This includes the rise of near instant communication by e-mail, text based discussion forums, and the World Wide Web. Investor speculation in new markets provided by these innovations would also lead to the inflation and collapse of the Dot-com bubble, a major market collapse. But despite this, the Internet continues to grow.

Because of its already noticeable impact on the everyday lives of many people, the Internet has attracted attention as something which may well bring about significant changes. The tendency of the Internet to promote connectedness between individuals, and its ability to disseminate large and diverse quantities of information, has led many theorists to advocate the concepts of the superorganism and the global brain as descriptions of future human society.

Bill Gates, one of the visionaries of our times (and for a long time the richest individual on the Forbes list), outlines the importance of the Internet as follows –

“The main advantage of any new technology is that it amplifies human potential. In the 20th century, electricity, the telephone, the automobile and the airplane all made the world more accessible to more people, transforming our economy and society in the process. The Internet has the same revolutionary impact--individuals and businesses can overcome geographical, cultural and logistical barriers and improve the way they live and work. Because it amplifies our potential in so many ways, it's possible that the long-term impact of the Internet could equal that of electricity, the automobile and the telephone all rolled together. The Internet brings people closer together. Before the Internet, it was possible to keep in touch with relatives and friends across the country or around the world--but it was also expensive. Today, communicating with a friend in Japan is as easy and cheap as communicating with a friend across town, and families regularly use the Internet to keep in touch with far-flung relatives. Millions of people with shared interests--no matter how obscure--exchange information and build communities through Web sites, email and instant-messaging software. Using innovative accessibility aids, people with disabilities can use the Internet to help overcome barriers that prevent them from leading more productive and fulfilling lives”.

However, the Internet is thought of as much more than just a means of connectedness. The interaction between individual “cells” - individual people - and the global mass population as a superorganism has profoundly greater potential than is generally recognized.

Ideas of the human super organism and global brain first appeared in modern form in Herbert Spencer's The Principles of Sociology (1876-96). The super organism idea gained scientific support from the work of the notable Russian biogeochemist Vladimir Vernadsky. He performed groundbreaking studies of the large scale biochemical processes of the earth, and was the first to think of the Earth and all living things as a single biosphere.

While the biosphere concept deals with the Earth as a whole, Vernadsky also coined the term, noosphere, which more specifically denotes "the network of thoughts, information and communication that englobes the planet." This network could only be a phenomenon attributed to humans, and in 1955, Pierre Teilhard de Chardin published his work, The Phenomenon of Man, in which he popularized the term, noosphere, and the concept of the human superorganism and the global brain. Since then, a wide range of thinkers has taken up these concepts and developed them using today's knowledge of the world and humanity.

The most straightforward way of approaching the concept of the human superorganism is through trend. By examining a significant trend throughout the development of living creatures and humanity, in particular, it is possible to extrapolate the development of the human superorganism. This trend has been for systems of single entities acting separately to combine into systems of many entities acting in cooperation. The many-entity system can then be viewed as a single, larger entity, often with significant qualities which cannot be attributed to the individual entities making up the system.

In evolutionary terms, this tendency of individual entities to combine to form larger, more complex entities manifests itself further in the structure of multicelled organisms. The single cell organisms, which existed before the development of multicelled organisms, realized a distinct survival advantage by cooperating with other single cell organisms to satisfy their basic needs. The level of cooperation between single celled organisms evolved until the cooperative units could be viewed as single entities or multicell organisms.

Speculating about the future development of humanity, proponents of the superorganism concept notice this same trend in human societies with modern free market economies. In these societies, individuals acting for their own benefit often find that cooperation with another individual serves to further the individual interests of both parties. Indeed, this concept has been developing for millennia and has reached the point where people spend the majority of their lives specializing in one area of expertise. People provide work in narrow areas that somehow benefit others and contract with others to provide for their needs in all other areas.

The incredible rate at which scientific discoveries and technology advances are made today ensures that the need to specialize will only increase. As humans become even more specialized in narrow fields of expertise, the need for cooperation becomes even greater. In general, the more a person has specialized in one field, the less they know about other fields, and the more they must rely on others to provide knowledge or services in other fields. In this manner, the complexity of societal interactions and cooperation is constantly increasing with developments in science and technology.

Thus, we are reminded of the single cell organisms whose level of cooperation increased to the point where all cooperating cells could be seen as a single, more complex multicell organism. In the same way, if one assumes that the progress of humanity will continue in the manner we have seen throughout history, we can extrapolate the development of a human superorganism. Such a superorganism would result from the highly complex interactions and cooperation of the human individuals as they each act naturally in their own best interests.

The guiding intellect of this human superorganism can be defined as the global brain. This global brain is likely to have qualities which cannot be attributed to individual humans. Just as a cell cannot comprehend the concept of a human or of sentience, we may not be able to fully understand the nature of the superorganism and the global brain. However, one can hope that our sentience and investigative nature might give us more success than the cell.

To understand the influence of the Internet on the global brain, it is necessary to understand the current state of the global brain. As witnessed by all the present strife in the world, it is evident that the global brain, if at all existent, is still in a very primitive form. While there is a reasonable level of cooperation among members of some countries, cooperation among members of different societies is still quite limited by language and cultural differences. We see fighting, terrorism and general strife in many parts of the world in the name of language, religion and culture. Also, at least half of the people in the world are not in political or economic situations in which they can participate in a superorganism and global brain. Clearly, humanity has far to progress and many obstacles to overcome before the global brain can emerge as a coherent entity.

The advent of the Internet in the past decade has caused quite a stir among thinkers sympathetic to the global brain concept. It is thought that the Internet might be the ingredient needed to bring about the emergence of the global brain as a coherent entity. The reason the Internet inspires such optimism is its likeness to the human brain and its global reach.

Firstly, the organization of information on the World Wide Web in hypertext format closely resembles the associative connections formed by neurons in the brain.

Secondly, the Internet network encompasses the entire globe, holding the possibility for linking all humans with a means of virtually instant interaction.

Thirdly, it could conceivably store all human knowledge and provide instant access to that knowledge to its users.

This is an unparalleled trend in human evolution. Never have we, as a species, had instant access to such an amount of information. Besides simply noticing the trend of today's civilization toward developing a global brain, some thinkers have suggested specific ways in which this might come about through the development of the Internet. This matters for the healthy development of the global brain: it must develop as a natural process that benefits individuals, and never as an end in itself. This helps to guarantee that human individuals will not find themselves being oppressed for the "good" of the global brain.

Various information gathering mechanisms can be viewed as ways of improving the efficiency of interaction between people who are cooperating to perform various actions. The knowledge provided by a person with a certain specialty is more readily available for use by others who might need it. As such, the Internet, by dramatically increasing the level of cooperation between individuals, could possibly lead to a solidification of the global brain as a coherent entity.

In fact, some thinkers argue that the advent of the Internet may have an even more profound impact on humanity than the splitting of the atom!

“The fact that everything is possible on the Internet reveals mankind's true essence, the aspiration towards freedom.” Pierre Lévy, “Collective Intelligence: A Civilization”

The ubiquity of computers and access to the Internet has put the greatest libraries, image databases, and interactive tools at the fingertips of most artists working today. As a result, traditional artistic practice is exploding as artists explore the potential of these new technologies and incorporate them into their working methodologies.

I-Mod talks about the concept of the meta-brain - mankind and its meta-brain, the Internet.

There is, however, as usual, a contrary view as well, as this article on cracked.com highlights - 7 Reasons the 21st Century is Making You Miserable… or this one - Internet: Amazing tool of mankind in the 21st century but

As Bill Gates puts it - The Internet has already revolutionized the way we live and work, but it is still in its infancy. In the coming years, a combination of cheap and powerful computing devices, fast and convenient Internet access, and software innovations could make the Internet as common and powerful a resource as electricity is today.

Mankind will never be the same again… though that may not always be a good thing. But as the history of the dinosaurs has shown us: what does not evolve, perishes!

Monday, October 27, 2008

Virtual worlds and their evolution…

As humankind progresses, different avenues for social interaction become possible. The basic intrinsic need of human beings is to interact. Above the basic needs of food and nourishment arises the need to interact with fellow beings, or fellow intelligences…

The need to interact with intelligent or sentient beings is progressively seen in our search for intelligences beyond our own world. The search for extra-terrestrial life is an extension of humankind's desire to seek out intelligence. In a manner, it is a direct result of our deep-seated, almost primal fear of being alone.

Earlier, the speed of communication was as fast as one could run, or as far as one could shout and carry one's own voice – of course, I am not counting telepathy here, since the majority of human beings are denied this facility. This changed with the technological revolution (not OUR technological revolution) in the form of the mastery of fire, which afforded, among other things, the ability to send out signals through fire and smoke that could be seen over large distances. The domestication of horses also helped to a large extent: the speed of communication became the speed of the fastest horse!

Things have advanced considerably from that point. Today, we have reached a stage where almost instantaneous communication with anyone on the planet is possible. First the telegraph, then the radio and telephone brought about an unprecedented revolution in the way people communicated.

These, coupled with email (considerably later), were in fact the "killer applications" of the 19th and 20th centuries!

The development and literal explosion of the computing age, coupled with networking capabilities, has taken this to the next level. Now we have computers talking to other computers – almost instantaneously – and the social consequences could not be far behind.

With the arrival of the gaming industry on the computing platform, we have witnessed another social revolution of a sort, in the form of virtual worlds.

A virtual world is a computer-based simulated environment intended for its users to inhabit and interact via avatars. These avatars are usually depicted as textual, two-dimensional, or three-dimensional graphical representations, although other forms are possible (auditory and touch sensations for example). Some, but not all, virtual worlds allow for multiple users.

According to an article on Wikipedia – “The concept of virtual worlds predates computers and could be traced in some sense to Pliny. The mechanical-based 1962 Sensorama machine used the senses of vision, sound, balance, smells and touch (via wind) to simulate its world. Among the earliest virtual worlds to be implemented by computers were not games but generic virtual reality simulators, such as Ivan Sutherland's 1968 virtual reality device. This form of virtual reality is characterized by bulky headsets and other types of sensory input simulation. Contemporary virtual worlds, multi-user online virtual environments, emerged mostly independently of this virtual reality technology research, fueled instead by the gaming industry but drawing on similar inspiration. While classic sensory-imitating virtual reality relies on tricking the perceptual system into experiencing an immersive environment, virtual worlds typically rely on mentally and emotionally engaging content which gives rise to an immersive experience.”

I won’t go much into the detailed history of virtual worlds – there is sufficient literature available in the article quoted above. But the core philosophy behind virtual worlds is that the user accesses a computer-simulated world which presents perceptual stimuli to the user, who in turn can manipulate elements of the modeled world and thus experiences a degree of telepresence.

As virtual world is a fairly vague and inclusive term, the above can generally be divided along a spectrum ranging from:

- massively multiplayer online role-playing games or MMORPGs where the user playing a specific character is a main feature of the game (World Of Warcraft for example).

- massively multiplayer online real-life/rogue-like games or MMORLGs, where the user can edit and alter their avatar at will, allowing them to play a more dynamic role, or multiple roles.

Some would argue that the MMO versions of RTS and FPS games are also virtual worlds, if the world editors allow open editing of the terrain or if the "source file" for the terrain is shared. Emerging concepts include basing the terrain of such games on real satellite photos, such as those available through the Google Maps API, or on a simple virtual geocaching of "easter eggs" on WikiMapia or similar mashups, where permitted.

Such modeled worlds may appear similar to the real world or instead depict fantasy worlds. The model world may simulate rules based on the real world or on some hybrid fantasy world. Example rules are gravity, topography, locomotion, real-time actions, and communication. Communication between users has included text, graphical icons, visual gestures, sound and, rarely, touch and balance senses.

There are many different types of virtual worlds; however, there are six features all of them have in common:

1. Shared Space: the world allows many users to participate at once.

2. Graphical User Interface: the world depicts space visually, ranging in style from 2D "cartoon" imagery to more immersive 3D environments.

3. Immediacy: interaction takes place in real time.

4. Interactivity: the world allows users to alter, develop, build, or submit customized content.

5. Persistence: the world's existence continues regardless of whether individual users are logged in.

6. Socialization/Community: the world allows and encourages the formation of in-world social groups like teams, guilds, clubs, cliques, housemates, neighborhoods, etc.
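To make the six features above concrete, here is a minimal sketch in Python – all names are invented for illustration and not taken from any real platform – of a world data model covering shared space, interactivity, persistence and socialization; the graphical interface and real-time immediacy are, of course, not captured by a toy script:

```python
from dataclasses import dataclass, field

@dataclass
class Avatar:
    name: str

@dataclass
class VirtualWorld:
    users: dict = field(default_factory=dict)    # 1. shared space: many users at once
    objects: list = field(default_factory=list)  # 4. interactivity: user-built content
    guilds: dict = field(default_factory=dict)   # 6. socialization: in-world groups

    def log_in(self, name):
        self.users[name] = Avatar(name)

    def log_out(self, name):
        # 5. persistence: only the avatar leaves; the world state survives
        self.users.pop(name, None)

    def build(self, name, thing):
        self.objects.append((name, thing))

    def join_guild(self, name, guild):
        self.guilds.setdefault(guild, set()).add(name)

world = VirtualWorld()
world.log_in("ann"); world.log_in("bob")
world.build("ann", "house")
world.join_guild("bob", "explorers")
world.log_out("ann")
# The house ann built persists after she logs out:
print(world.objects)  # [('ann', 'house')]
```

A real platform adds rendering, networking and real-time event loops on top, but the persistent shared state is the core of the model.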

There is a virtual plethora of information available on virtual worlds at the Virtual Worlds Review.

The use of virtual worlds in the training arena is picking up fast, including in military training. Simulators for various types of aircraft are, at the end of the day, a kind of virtual world.

If we speculate further, the kind of social revolution that has come about due to virtual worlds is tremendous. People can meet over media that are more immersive than ever, and I think I have seen the future of virtual worlds as well – for followers of Star Trek, the holodeck is the ultimate expression of a virtual world!

Personally, I eagerly await the development of virtual world technology to holodeck level. It would allow those who are infirm or otherwise unable to travel due to some kind of disability to visit people, cultures and places they would otherwise never have dreamt of…

Sunday, October 26, 2008

Back to the moon…

India, with its launch of an unmanned moon probe, has entered the race to return to the Moon – Earth's only, and somewhat abnormally large, natural satellite.

Abnormal? What is so abnormal about our Moon – which has inspired different emotions in humankind throughout the ages, from awe and superstition to much more tender emotions such as love? Well, as it happens, the Moon is quite large compared to the parent body around which it revolves; in fact, it is the fifth largest natural satellite in the entire Solar System. Take, for example, the other large moons of the solar system, such as Triton, Titan, Io, Europa and Charon (what!!! – you didn't know that Pluto had a moon too?) – they are all huge, but pale in comparison to their "parent" bodies. Jupiter and Saturn are so big (and full of gas) that their moons are very small indeed when compared to their mass or size. The Earth's Moon, by contrast, is very large: its diameter is a little more than a quarter that of the Earth. This means that the Moon's volume is about 2 percent that of the Earth, and the pull of gravity at its surface is about 17 percent that of the Earth. In astronomy, it is considered quite astonishing for such a large body to orbit a relatively small parent planet. Indeed, the Earth–Moon system is sometimes referred to as a double-planet system.
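The diameter, volume and gravity figures quoted above hang together arithmetically: volume scales with the cube of the diameter, and surface gravity with mass over radius squared. A quick sanity check in Python, using approximate published values for the Moon and the Earth:

```python
# Approximate published values; ratios assume spherical bodies.
moon_d, earth_d = 3474.8, 12742.0     # mean diameters, km
moon_m, earth_m = 7.342e22, 5.972e24  # masses, kg

d_ratio = moon_d / earth_d                    # "a little more than a quarter"
v_ratio = d_ratio ** 3                        # volume scales with the cube
g_ratio = (moon_m / earth_m) / d_ratio ** 2   # surface gravity g ∝ M / R²

print(f"diameter: {d_ratio:.2f}, volume: {v_ratio:.3f}, gravity: {g_ratio:.2f}")
# → diameter: 0.27, volume: 0.020, gravity: 0.17
```

So the "2 percent volume" and "17 percent gravity" figures follow directly from the quarter-diameter fact plus the Moon's lower density.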

By the middle of the 17th century, Galileo and other early astronomers made telescopic observations, noting an almost endless overlapping of craters. It has also been known for more than a century that the Moon is less dense than the Earth. Although a certain amount of information was ascertained about the Moon before the space age, this new era has revealed many secrets barely imaginable before that time. Current knowledge of the Moon is greater than for any other solar system object except Earth.

Various facts, especially the NASA photographs of Apollo missions are lucidly presented in this article by Rosanna L. Hamilton.

But really, how much do we know about our own galactic backyard?

The Moon makes a complete orbit around the Earth every 27.3 days (the orbital period), and the periodic variations in the geometry of the Earth–Moon–Sun system are responsible for the lunar phases that repeat every 29.5 days (the synodic period). The Moon is in synchronous rotation, meaning that it keeps nearly the same face turned towards the Earth at all times. Early in the Moon's history, its rotation slowed and became locked in this configuration as a result of frictional effects associated with tidal deformations caused by the Earth. The far side had never been seen by any human until the launch of moon probes in the late 1950s.
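The 27.3-day and 29.5-day figures are related: because the Earth–Moon system is itself moving around the Sun, the Moon needs roughly two extra days beyond its orbital period to return to the same phase. The standard relation 1/T_syn = 1/T_sid − 1/T_year reproduces the synodic period:

```python
T_sid = 27.321   # days, sidereal month (orbit relative to the stars)
T_year = 365.25  # days, Earth's orbital period around the Sun

# Phase angles add: the Sun appears to move too, so phases recur more slowly.
T_syn = 1 / (1 / T_sid - 1 / T_year)
print(f"synodic period ≈ {T_syn:.1f} days")  # → synodic period ≈ 29.5 days
```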

You can see the Virtual Reality Moon Phase Pictures here.

The Moon is the only celestial body to which humans have travelled and upon which humans have landed. The first artificial object to escape Earth's gravity and pass near the Moon was the Soviet Union's Luna 1, the first artificial object to impact the lunar surface was Luna 2, and the first photographs of the normally occluded far side of the Moon were made by Luna 3, all in 1959. The first spacecraft to perform a successful lunar soft landing was Luna 9, and the first unmanned vehicle to orbit the Moon was Luna 10, both in 1966. The United States (U.S.) Apollo program achieved the only manned missions to date, resulting in six landings between 1969 and 1972. Human exploration of the Moon ceased with the conclusion of the Apollo program, although several countries have announced plans to send people or robotic spacecraft to the Moon – well – India and China are amongst the nations now raring to literally reach for the moon!

Okay, back to the Moon :)

One distinguishing feature of the far side is its almost complete lack of maria. The dark and relatively featureless lunar plains which can clearly be seen with the naked eye are called maria (singular mare), Latin for seas, since they were believed by ancient astronomers to be filled with water. These are now known to be vast solidified pools of ancient basaltic lava. The majority of these lavas erupted or flowed into the depressions associated with impact basins that formed by the collisions of meteors and comets with the lunar surface. Maria are found almost exclusively on the near side of the Moon, with the far side having only a few scattered patches covering only about 2% of its surface compared with about 31% on the near side.

The lighter-colored regions of the Moon are called terrae, or more commonly just highlands, since they are higher than most maria. Several prominent mountain ranges on the near side are found along the periphery of the giant impact basins, many of which have been filled by mare basalt. These are believed to be the surviving remnants of the impact basin's outer rims. In contrast to the Earth, no major lunar mountains are believed to have formed as a result of tectonic events.

The Moon's surface shows obvious evidence of having been affected by impact cratering. Impact craters form when asteroids and comets collide with the lunar surface, and globally about half a million craters with diameters greater than 1 km can be found. Since impact craters accumulate at a nearly constant rate, the number of craters per unit area superposed on a geologic unit can be used to estimate the age of the surface (see crater counting). The lack of an atmosphere, weather and recent geological processes ensures that many of these craters have remained relatively well preserved in comparison to those found on Earth. The largest crater on the Moon, which also has the distinction of being one of the largest known craters in the Solar System, is the South Pole-Aitken basin. This impact basin is located on the far side, between the South Pole and equator, and is some 2,240 km in diameter and 13 km in depth.
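The crater-counting idea can be sketched numerically: if craters accumulate at a roughly constant rate, the age of a surface is approximately its crater density divided by that rate. The numbers below are made up purely for illustration – real lunar chronologies are calibrated against the radiometric ages of Apollo samples:

```python
# Hypothetical inputs, chosen only to illustrate the arithmetic.
craters_counted = 1200        # craters > 1 km counted on the mapped unit
area_km2 = 400_000            # area of the geologic unit, km²
rate_per_km2_per_gyr = 8e-4   # assumed constant accumulation rate

density = craters_counted / area_km2        # craters per km²
age_gyr = density / rate_per_km2_per_gyr    # older surface ⇒ more craters
print(f"density = {density:.4f} per km², estimated age ≈ {age_gyr:.2f} Gyr")
```

The same logic explains why the heavily cratered highlands must be older than the sparsely cratered maria.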

Blanketed atop the Moon's crust is a highly comminuted (broken into ever smaller particles) and "impact gardened" surface layer called regolith. Since the regolith forms by impact processes, the regolith of older surfaces is generally thicker than for younger surfaces. In particular, it has been estimated that the regolith varies in thickness from about 3–5 m in the maria, and by about 10–20 m in the highlands. In other words, the soil is thick and slick…

But why the rush back to the Moon, and why now?

India launched Chandrayaan-1, in a historic feat, on October 22, 2008 from the Satish Dhawan Space Centre in Sriharikota. The successful launch of India's maiden unmanned moon mission has catapulted the country into a select group of nations. One of the prime reasons is national pride; the other is the possibilities it affords. With the western economies in decline and the ascendancy of India and China in this century, it was only a matter of time before these two nations realized the importance of breaking the bounds of Earth's puny gravity well and soaring beyond.

But beyond the political hype and all the aspirations of becoming a superpower, there is a much more practical aspect to the race towards the Moon.

The Moon holds several minerals and elements not found on, or easily manufactured on, Earth – helium-3, for example. Then there are the spin-off benefits from the technology that must be developed to reach the Moon. The sophistication and cost-effectiveness of the journey outwards ultimately holds the key to cheap and profitable exploration of outer space.

India's love fest with deep space has only just begun. It could well become a force to reckon with, giving the established space agencies a run for their money: the Indian moon mission is the cheapest of all moon missions in this century to date, and one which also sets a world record for carrying the largest suite of scientific instruments ever taken to the Moon.

And frankly, I would want to see a difference from NASA – whose every mission seems to cost a billion dollars and then explode either while leaving or entering Earth's gravity well. I've got nothing against NASA – a bunch of great guys (a lot of Indians there, in fact, if I heard it right) – but everything they do: why DOES IT HAVE TO COST SO MUCH?

If the technology wasn't ready to allow human exploration of space in a safe way, why send humans? Robotic vehicles can operate with a fair amount of efficiency, and the money could have been better utilized in developing technologies which would ultimately have resulted in cheaper access to outer space.

And now NASA's mantra has indeed become – faster, better, cheaper (FBC) – as is evident in the Mars probes.

In fact, if the price tag weren't so high, space exploration would have proceeded at a much quicker pace than it did in the aftermath of the Apollo missions.

We keep saying that man has landed on the Moon, and yet we are not exploring our own cosmic backyard. Without intending to belittle our (humankind's) achievements so far: what we have done till now is slingshot a few missions to the Moon (with humans in them), put a space station in orbit (with the ever-present danger of it falling down on our heads – e.g. Mir) and sent some probes to nearby planets. Voyager 1 and 2 and the Pioneer missions were an exception: they were real value for money because of the wealth of information they afforded humanity about the outer reaches of our solar system.
I will be closely watching India's – and humanity's collective – progress back to the Moon, Mars and, ultimately, the rest of the solar system.

Monday, October 20, 2008

Extra-terrestrial origins of life?

The concept of life being a cosmic phenomenon is rapidly gaining support, with new evidence from space science, geology and biology. In this picture life on Earth resulted from the introduction of bacteria from comets, and the subsequent evolution of life required the continuing input of genes from comets.

Fred Hoyle was an important scientist who worked at the frontiers of astronomy and theoretical physics. In 1983 he published a well-illustrated popular book for nonscientists in which he attacked the whole idea that life originated and evolved on Earth and replaced it by 'intelligent cosmic control'.

Although Hoyle has been accused of siding with creationism, I personally disagree with this assessment. He favored a cosmic connection to, and control of, the origin of life on Earth and elsewhere in the universe, and was not hinting at any divine control.

In an interview in Frontline, N Chandra Wickramasinghe – one of the foremost authorities on the idea of life from outer space, and a student and collaborator of Sir Fred Hoyle – mentions that two recent experiments in the United States have once again drawn the attention of scientists to the theory of panspermia.

Fred Hoyle and Chandra Wickramasinghe demonstrated correctly that interstellar clouds contain some organic molecules, but their subsequent proposals for the extraterrestrial origin of life on Earth and for the ability of microbes to survive in space were not substantiated by hard evidence. However, with evidence now coming in from other quarters, the theory is gaining ground and acceptance in mainstream scientific thinking.

Panspermia – which literally means "seeds everywhere" – is the hypothesis that the (biological) stuff of life did not have its origins in terrestrial resources but in inter-stellar space.

The theory maintains that life on the earth was seeded from space and that life's evolution to higher forms depends on complex genes (including those of viruses and diseases) that the earth receives from space from time to time.

The two experiments discussed included:

In one experiment, reported in October, environmental biologists Russell H. Vreeland and William D. Rosenzweig claimed to have discovered the longest-surviving (250 million years) bacterial spores, locked inside a salt crystal formation in New Mexico, that could be revived. This was considered evidence that life – even one-celled micro-organisms – could survive in suspended animation for eons and float on comets to faraway planets.

In another experiment, a team of scientists from the California Institute of Technology and Vanderbilt and McGill Universities discovered that small pieces of space rock could be transferred from Mars to Earth without their interiors getting excessively heated up, thus enabling living organisms to ride in them.

The renewed interest in panspermia also comes in the wake of space-based discoveries that include: recent findings of some simple amino acids and sugars in inter-stellar space; the announcement by the National Aeronautics and Space Administration (NASA) in August 1996 of evidence of fossilized ancient life in a meteorite from Mars; evidence in the same year by geneticists that many genes are much older than the fossil record would indicate; the discovery by a Russian microbiologist in 1998 that a micro-fossil in a meteorite was a previously unknown bacterium; and the announcement by NASA in April this year of the detection of very large organic molecules in space by its Stardust mission, launched in February – the non-biological origins of such large molecules being unknown.


Early history of panspermia

Until the late 19th century, panspermia meant the passage of organisms through Earth's own atmosphere, not an arrival from outside Earth. In this form it seems to have been used first by the Abbé Lazzaro Spallanzani (1729–99). But almost a century before that, Francesco Redi had carried out what can be seen as a classic experiment in the subject. He had shown that maggots appear in decaying meat only when the meat is exposed to air, inferring that whatever gave rise to the maggots must have travelled to the meat through the air.

A very long wait until the 1860s then ensued, until Louis Pasteur's experiments on the souring of milk and the fermentation of wine showed that similar results occurred when the air-borne agents were bacteria, replicating as bacteria but not producing a visible organism like maggots. The world then permitted Pasteur to get away with a huge generalization, and honored him greatly, both at the time and in history, for it – because by then the world was anxious to be done with the old Aristotelian concept of life emerging from the mixing of warm earth and morning dew. The same old concept was to arise again in the mid-twenties of the past century, however, with a different name: instead of Aristotle's warm earth and morning dew, it became "a warm organic soup."

Pasteur's far-ranging generalization implied that each generation of every plant or animal is preceded by a generation of the same plant or animal. This view was taken up enthusiastically by others, particularly by physicists, among them John Tyndall, who lectured frequently on the London scene. The editorial columns of the newly established Nature (e.g., the issue of January 27, 1870) objected with some passion to Tyndall's Friday evening discourse at the Royal Institution on January 21, 1870. Behind the objection was the realization that, were Pasteur's paradigm taken to be strictly true, the origin of life would need to be external to Earth. For if life had no spontaneous origin, it would be possible to follow any animal generation by generation back to a time before Earth existed, the origin therefore being required outside Earth.

This was put in remarkably clear terms in 1874 by the German physicist Hermann von Helmholtz:

It appears to me to be a fully correct scientific procedure, if all our attempts fail to cause the production of organisms from non-living matter, to raise the question whether life has ever arisen, whether it is not just as old as matter itself, and whether seeds have not been carried from one planet to another and have developed everywhere where they have fallen on fertile soil….

In his presidential address to the 1881 meeting of the British Association, Lord Kelvin drew a remarkable picture:

“When two great masses come into collision in space, it is certain that a large part of each is melted, but it seems also quite certain that in many cases a large quantity of debris must be shot forth in all directions, much of which may have experienced no greater violence than individual pieces of rock experience in a landslip or in blasting by gunpowder. Should the time when this earth comes into collision with another body, comparable in dimensions to itself, be when it is still clothed as at present with vegetation, many great and small fragments carrying seeds of living plants and animals would undoubtedly be scattered through space. Hence, and because we all confidently believe that there are at present, and have been from time immemorial, many worlds of life besides our own, we must regard it as probable in the highest degree that there are countless seed-bearing meteoric stones moving about through space. If at the present instant no life existed upon Earth, one such stone falling upon it might, by what we blindly call natural causes, lead to its becoming covered with vegetation.”

Essentially, what Kelvin was suggesting was that it is possible for the seeds of life to be carried between planetary or cosmic bodies. Thus, almost 120 years ago, the ideas that have recently come to the forefront of scientific discussion were already well known. Unfortunately, there was no way at that date, 1881, whereby observation or experiment could be brought seriously to bear on Kelvin's formulation of panspermia, and the world had to wait 120 years before some sort of concrete experimental and observational proof started trickling in.

It has been known for quite some time that bacteria and other microorganisms are extremely hardy. Some have been found living in the most unlikely places, where conventional thinking would rule out life's survival – such as in the heavy water of nuclear reactors or in highly acidic and hot volcanic areas. These are extremophiles: microbes that live in conditions that would kill other creatures. It was not until the 1970s that such creatures were recognized, but the more researchers look, the more they discover that most archaea, some bacteria and a few protists can survive in the harshest and strangest of environments.

There is scarcely any set of conditions prevailing on Earth, no matter how extreme, that is incapable of harboring some type of microbial life. Under space conditions, microorganisms are very easily protected against ultraviolet damage.

One may find it very difficult to believe, but there is evidence that some bacteria actually survived more than two years in the Moon's harsh environment. What happened was that during the sealing of a camera that was to be sent to the Moon, somebody may have sneezed and left microbes inside the camera body. The camera was delivered to the Moon's surface to scout the best locations for landing. Over two years later, when the Apollo astronauts retrieved this camera and brought it back to Earth, this fact was discovered. The Moon has almost no atmosphere, and its range of temperatures is extreme. If microbes can survive years of near-vacuum and harsh swings in temperature, is it really far-fetched to think that they could travel interplanetary or interstellar distances and seed life on Earth as well?

Couple this with the fact that an impressive array of interstellar molecules has today been detected; the list includes a host of hydrocarbons, polyaromatic hydrocarbons, the amino acid glycine, vinegar and the sugar glycolaldehyde. Such organic molecules, which pervade interstellar clouds, make up a considerable fraction of the available galactic carbon.

Actually, theories of how interstellar organic molecules might form via non-biological processes are still in their infancy and, in terms of explaining the available facts, they leave much to be desired.

N Chandra Wickramasinghe further speculates – “The overwhelming bulk of organic matter on Earth is indisputably derived from biology, much of it being degradation products of biology. Might not the same processes operate in the case of interstellar organic molecules? The polyaromatic hydrocarbons that are so abundant in the cosmos could have a similar origin to the organic pollutants that choke us in our major cities - products of degradation of biology, biologically generated fossil fuels in the urban case, cosmic microbiology in the interstellar clouds. The theory of cosmic panspermia that we have proposed leads us to argue that interstellar space could be a graveyard of cosmic life as well as its cradle. Only the minutest fraction (less than one part in a trillion) of the interstellar bacteria needs to retain viability, in dense shielded cloudlets of space, for panspermia to hold sway. Common sense dictates that this survival rate is unavoidable.”

So where does this lead us? Actually, to me this is a partial answer at best. Even if it were indisputably established that life did not independently evolve here on Earth and was in fact seeded from the stars, it would still leave us clueless about the origins of life itself, wherever it may have evolved or originated.

In this sense, we are all aliens on this planet. Something to ponder on…. I will be writing more on this topic in coming days.

Saturday, October 18, 2008

The Web, then and now and beyond… Web 2.0!

Following its commercialization and the introduction of privately run Internet Service Providers in the 1980s, and its expansion into popular use in the 1990s, the Internet has had a drastic impact on culture and commerce. The Internet has touched lives profoundly all over the world. It has led to a veritable revolution in the way the world operates. And it has brought humanity closer.

In the 1950s and early 1960s, prior to the widespread inter-networking that led to the Internet, most communication networks were limited by their nature to only allow communications between the stations on the network. Some networks had gateways or bridges between them, but these bridges were often limited or built specifically for a single use. One prevalent computer networking method was based on the central mainframe method, simply allowing its terminals to be connected via long leased lines. This method was used in the 1950s by Project RAND to support researchers such as Herbert Simon, in Pittsburgh, Pennsylvania, when collaborating across the continent with researchers in Sullivan, Illinois, on automated theorem proving and artificial intelligence.

I am not going to rant about the history of internet here. This is covered in a very comprehensive and lucid way in this excellent article at Wikipedia. Another great article, starting a little earlier than the 1950s… Roads and Crossroads of Internet History.

Walt Howe’s discourse about the history of internet is also very interesting to read.

Another interesting aspect is Al Gore’s contribution to the growth of internet.

The commercialization of the Internet brought extraordinary – almost to the point of being irrational – wealth to a few individuals. The euphoria created by sudden connectivity, unleashing the power of individual creativity and bringing it onto a world stage at very little cost, was paralleled by an irrational exuberance in the stock markets, commonly known as the Dot Com Bubble.

It all started during the mid-1990s. The stock market soared on technology and Internet stocks, IPOs were all the rage, and the sky was the limit for stock prices. The masses believed there was a new world upon us and that the internet was to become the future of business. Then reality set in when the hype didn't live up to its promises, and the stock market crashed. If you take all of this at face value, all you see is what happens when a stock market gets overvalued and crashes; but if you look deeper, you can find plenty of timeless lessons that every investor should learn.

Ian Peter’s History of the Internet - the Dotcom bubble nicely summarizes the Internet and Dot Com Bubble.

Web 2.0 is a buzzword that exploded in popularity sometime in 2005. In 2004, software guru Tim O'Reilly founded the Web 2.0 Conference, held annually in San Francisco. It has since expanded from a conference into a way of thinking, a new approach to doing business on the Internet. There is no standard definition for web 2.0, as it is a cluster of ideas rather than anything clear-cut. However, O'Reilly's comments on the topic are seen as having special authority, and rank among the top Google search results for the term.

According to Tim O’Reilly – “Web 2.0 is the term used, in the technology field, to refer to some exciting developments that promise to bring us to a new and improved internet experience. For developers this means learning new technologies and frameworks to provide better, faster and more sophisticated features”.

The concept of "Web 2.0" began with a conference brainstorming session between O'Reilly and MediaLive International. Dale Dougherty, web pioneer and O'Reilly VP, noted that far from having "crashed", the web was more important than ever, with exciting new applications and sites popping up with surprising regularity. What's more, the companies that had survived the collapse seemed to have some things in common. Could it be that the dot-com collapse marked some kind of turning point for the web, such that a call to action like "Web 2.0" might make sense? They agreed that it did, and so the Web 2.0 Conference was born.

Paul Graham describes Web 2.0 as an idea that was meaningless when started, but has acquired a meaning during the intervening period. He then goes on to describe the various components that contribute to the new revolution.

According to Wikipedia - Web 2.0 is a term describing changing trends in the use of World Wide Web technology and web design that aim to enhance creativity, secure information sharing, collaboration and the functionality of the web. Web 2.0 concepts have led to the development and evolution of web-based communities and hosted services, such as social-networking sites, video sharing sites, wikis, blogs, and folksonomies. Although the term suggests a new version of the World Wide Web, it does not refer to an update to any technical specifications, but to changes in the ways software developers and end-users utilize the Web.

The first premise of web 2.0 is leveraging the power of the user. For example, fluid user tagging of content would be used instead of a centralized taxonomy. Web 2.0 entrepreneurs often consider the Long Tail, which is basically an observation that the vast majority of the attention market is based on niche content. Web 2.0 is radically decentralized, as in the case of BitTorrent, a collaborative downloading co-op that consumes a serious portion of all Internet traffic.

Blogs are considered web 2.0. Instead of centralized "personal home pages", blogs let people easily post as much or as little as they want as rarely or as frequently as they want. Feed aggregators ensure that people only need to visit a single site to see all the feeds they subscribe to. Comments are enabled everywhere, allowing people to participate rather than passively consume content.

Web 2.0 technology encourages lightweight business models, enabled by the syndication of content and services and by the ease with which early adopters can pick them up.

O'Reilly provided examples of companies or products that embody these principles in his description of his four levels in the hierarchy of Web 2.0 sites:

- Level-3 applications, the most "Web 2.0"-oriented, exist only on the Internet, deriving their effectiveness from the inter-human connections and from the network effects that Web 2.0 makes possible and growing in effectiveness in proportion as people make more use of them. O'Reilly gave eBay, Craigslist, Wikipedia, del.icio.us, Skype, dodgeball, and AdSense as examples.

- Level-2 applications can operate offline but gain advantages from going online. O'Reilly cited Flickr, which benefits from its shared photo-database and from its community-generated tag database.

- Level-1 applications operate offline but gain features online. O'Reilly pointed to Writely (now Google Docs & Spreadsheets) and iTunes (because of its music-store portion).

- Level-0 applications work as well offline as online. O'Reilly gave the examples of MapQuest, Yahoo! Local, and Google Maps (mapping-applications using contributions from users to advantage could rank as "level 2").

Non-web applications like email, instant-messaging clients, and the telephone fall outside the above hierarchy.

Web 2.0 websites allow users to do more than just retrieve information. They can build on the interactive facilities of "Web 1.0" to provide "Network as platform" computing, allowing users to run software applications entirely through a browser. Users can own the data on a Web 2.0 site and exercise control over that data. These sites may have an "Architecture of participation" that encourages users to add value to the application as they use it. This stands in contrast to older, traditional websites of the sort that limited visitors to passive viewing and whose content only the site's owner could modify. Web 2.0 sites often feature a rich, user-friendly interface based on Ajax, OpenLaszlo, Flex or similar rich-media technology.

Second Life and similar platforms provide an immersive environment where users can interact with each other in a far richer way than was possible through Web 1.0 tools such as chat, email, etc.

In my next post, I’ll be talking more on how this new revolution is touching lives and what may be in store for humanity with this medium… almost bordering on the science fiction!

Tuesday, October 07, 2008

Climate Change and civilizations…

In my previous post on the topic of global warming and its impact on the planet’s climate, I delved into the causes behind global warming and the varying opinions of different authorities. This time, I would like to trace the effects of climate change on the evolution of humanity.

I came across an interesting article by Dr Nachiketa Das – where he discusses the topic of climate change and its effect on the river Ganges.

He is of the opinion – “Global warming, now in 2008, is real, and upon us. How will global warming affect the rivers in India; will they all dry up? Can the holy Ganges, the river that has shaped and sustained Indian civilization through the ages, who we Indians revere as the life-giving mother, run dry! Many climate experts and environmentalists, in the last ten years, have been making dire predictions of the Ganges becoming seasonal. Some doomsayers have even gone to the extent of boldly predicting the river to be ephemeral by the year 2035, which is barely a generation away! Is it really possible that the Ganges will run dry by 2035! Is this calamity an inevitability that should be accepted as fait accompli, or is there anything we, the people of India, collectively can do to save the holy Mother Ganges from extinction”.

The concerns are valid. The Ganges originates from the Gangotri glacier, one of the largest valley glaciers in the western Himalayas. Gangotri is 30.2 km long and 0.5 to 2.5 km wide, lying at altitudes between 4,120 and 7,000 m above sea level. The glacier complex that feeds the Ganges occupied a total area of 260 square km (in 2001) and contained 40 cubic km of ice (in 1999). Over the 60-year period between 1936 and 1996, Gangotri receded by as much as 1,147 m, 850 m of which occurred during the 25-year period between 1971 and 1996. In the three-year period between 1996 and 1999 it retreated by a further 76 m. Contrast this with the 2,000 m of retreat over the last 200 years and the significantly accelerated rate of retreat becomes obvious.
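The acceleration is easy to verify from these figures alone; the short Python sketch below, using only the retreat distances and intervals quoted above, computes the average retreat rate for each period.

```python
# Average annual retreat of the Gangotri glacier, computed from the
# figures quoted above: (total retreat in metres, interval in years).
intervals = {
    "last 200 years": (2000, 200),
    "1936-1996": (1147, 60),
    "1971-1996": (850, 25),
    "1996-1999": (76, 3),
}

for label, (metres, years) in intervals.items():
    print(f"{label}: {metres / years:.1f} m/year")
```

The long-term average works out to about 10 m/year, against roughly 19 m/year for 1936-1996 and 34 m/year for 1971-1996, which is exactly what makes the acceleration obvious.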

Also, global warming does not mean a uniform amount of warming at each and every place on the globe. Although the vast majority of places on earth will become hotter, strange as it may seem, certain parts will in fact become cooler.

Why? Well, it has to do with how the weather and heat-transfer systems of our planet work. Unlike the frozen wastes of Mars or the lead-melting surface of Venus, our planet is blessed with a very complex, yet delicate, ecosystem.

The Gulf Stream is a vast oceanic current that carries warm waters from the tropics to the temperate regions of northern Europe and North America. This ocean current originates in the Gulf of Mexico, flows past the east coast of the USA and Newfoundland in Canada, and then crosses the Atlantic Ocean. It then branches into two, with the northern stream moving to northern Europe. The Gulf Stream is about 80 to 150 km wide and about 1,000 m deep, a river of sea that transports 1.4 petawatts (1 petawatt is 1,000 million megawatts) of heat, equivalent to almost 100 times the current energy demand of the entire world. Around Cape Hatteras on the coast of North Carolina in the US, the Gulf Stream transports water at a rate of 80 million cubic metres per second, far more than any river system in the world; in fact, the combined flow of all the rivers emptying into the Atlantic is only 0.6 million cubic metres per second.
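These magnitudes can be sanity-checked with a few lines of Python. The world energy demand figure below is my assumption (roughly 15 terawatts, a commonly cited mid-2000s estimate), not a number from the article; the rest comes from the figures quoted above.

```python
# Sanity-check the Gulf Stream figures quoted above.
# Assumptions (not from the article): 1 petawatt = 1e15 W, and world
# primary energy demand taken as ~15 TW, a commonly cited 2000s figure.
gulf_stream_heat_w = 1.4e15      # heat transported: 1.4 petawatts
world_demand_w = 15e12           # assumed world energy demand: ~15 TW

ratio = gulf_stream_heat_w / world_demand_w
print(f"Heat transport vs world demand: {ratio:.0f}x")  # ~93x, "almost 100 times"

# Volume transport at Cape Hatteras vs all rivers entering the Atlantic.
gulf_stream_flow = 80e6          # cubic metres per second
atlantic_rivers_flow = 0.6e6     # cubic metres per second
print(f"Flow vs Atlantic rivers: {gulf_stream_flow / atlantic_rivers_flow:.0f}x")
```

Under that assumed demand figure, the heat ratio comes out near 93, consistent with the "almost 100 times" claim, and the flow ratio is over 130 times all Atlantic rivers combined.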

The Gulf Stream has significant localized effects on the climate of the east coast of Florida and Massachusetts in the US, and on the west coast of Britain, which is a good few degrees warmer than the east coast. The warming effect of the Gulf Stream is most dramatic in the western islands of Scotland, so much so that the small township of Plockton (latitude 57.33°N), located east of the Isle of Skye, has a mild climate that allows sub-tropical cabbage palm trees to grow. Without the Gulf Stream the local climate in Plockton would be freezing cold, as it lies latitudinally almost two degrees further north than Moscow (latitude 55.45°N).

Due to global warming, there is every possibility that the Gulf Stream may change course or lose its strength. In fact, in November 2004 it stopped completely for a full ten days, and there are reports that over the last 50 years (since 1957) its deep return flow has weakened by as much as 30%. Any change in the characteristics of the Gulf Stream would cause significant localized cooling in Scandinavia and Britain. At a time of global warming, the western islands of Scotland could experience substantial cooling.

The effects on human population and civilization can be imagined.

This, however, is not the first time that climate change has affected humanity on such a scale.

The river Sarasvati, for example, is widely considered to have supported the Harappan culture. The movement and ultimate decline of the Harappan culture are often attributed to climate change and its effect on the Sarasvati. Some Rigvedic verses (6.61.2-13) indicate that the Sarasvati originated in the hills or mountains (giri), where she "burst with her strong waves the ridges of the hills (giri)"; it is a matter of interpretation whether this refers merely to the Himalayan foothills, like those of the present-day Sarasvati (Sarsuti) river. The Sarasvati is described as a river swollen (pinvamānā) by other rivers (sindhubhih). Another reference appears in the geographical enumeration of rivers in the late Rigvedic Nadistuti sukta (10.75.5, a verse that lists all the important rivers from the Ganges in the east to the Indus in the west in strict geographical order): as "Ganga, Yamuna, Sarasvati, Shutudri", the Sarasvati is placed between the Yamuna and the Sutlej, consistent with its identification with the Ghaggar. Even though she had unmistakably lost much of her former prominence, the Sarasvati remains characterized as a river goddess throughout the Rigveda, being the home river of the Puru and, later on, the Kuru tribe.

Even if the Sarasvati survives mostly in memory, its influence on Indian history cannot be discounted. Nor can its decline be overlooked.

However, climate change need not always be so bad for human culture. After all, we humans ascended to our current position, in evolutionary terms, thanks to the ending of an ice age.

The Sahara, the largest desert on our planet, used to be a lush and green place before a change in climate led to its present state and forced many human tribes into the valley of the Nile, leading to its fabulous civilization and myriad dynasties.

Sustenance played a crucial role in the founding of Egyptian civilization. The Nile is an unending source of sustenance. Its annual flooding made the surrounding land extremely fertile, and the Egyptians were able to cultivate wheat and other crops along it, providing food for the general population. The Nile’s water also attracted game such as water buffalo and, after the Persians introduced them in the 7th century BC, camels. These animals could be killed for meat, or captured, tamed and used for ploughing (or, in the camels’ case, travelling). Water was vital to both people and livestock, and the Nile was also a convenient and efficient means of transporting people and goods. The river played a major role in politics and social life.

But at the end of the day, it was climate change which led to the rise and fall of these civilizations…

So what are we worried about?

It’s just that we humans are far more numerous now, and spread over more of the planet, than at any other time in our history. Climate change this time turns out to be very inconvenient indeed: it might lead to the decline of our current global civilization and ultimately bring misery to untold billions, and that is why we are so concerned.

We humans seem to be standing too much in the way of nature to be left unscathed by the fury that will be unleashed by the current spell of climate change, arguably induced by our own actions…

Sunday, September 28, 2008

Our Fragile Global Economy…

“Greed is good,” insisted Gordon Gekko in the 1987 film Wall Street. Most of us disagree. Recent events in the mortgage lending industry prove us right.

The “subprime loan crisis” has been making headlines since it began in August. It refers to the fact that a relatively high percentage of mortgages offered to people with significant probability of default have gone sour.

The moniker is a bit misleading, though. The crisis we are witnessing starts from risky loan deals but will extend to all varieties of credit and risk: consumer loans, credit cards, businesses, and so on. It is just about to heat up but the roots of this crisis were laid years ago.

It’s a credit crisis, but credit per se is not the problem. The problem lies in how credit was traded from one hand to another on an unprecedented scale. This was done through financial innovations called derivatives.

In recent memory – translated as in the last few weeks of media glare – we have witnessed a US, and to some extent global, financial meltdown of historic proportions. Most of the media, and the people the media quote as experts, are calling it the greatest threat to the “world” economy since the Great Depression of the 1930s.

Addison Wiggin and Bill Bonner wrote the classic, must-read “The Empire of Debt”. The US Empire, the book surmised, had grown economically weak and unsustainable; it had burdened itself with too much debt, and its collapse was imminent. From the roots of this book has emerged one of the most talked-about documentaries: “I.O.U.S.A.” (the “O” as in “owe”), which highlights the extraordinary debt levels of the US government – a debt estimated at USD 410,000 per full-time worker in America, about 9x the average annual per capita income in the USA.

So here we are yet again in the midst of another "global economic crisis." From the hilltops of Davos, Switzerland, Morgan Stanley's permabear Stephen Roach has shouted warnings of potential economic "Armageddon." Superinvestor George Soros designated the current state of the global economy "the worst market crisis in 60 years." Bill Clinton labeled it "the biggest financial crisis since the Great Depression" -- even as global stocks responded by slumping 7.7% in January -- the worst start to an investing year since Morgan Stanley began publishing data in the 1970s.

It seems that certain quarters had been warning of this for a long time. In its public pronouncement – “Global systemic crisis / September 2008 - Phase of collapse of US real economy” - LEAP/E2020 – had predicted – “the end of the third quarter of 2008 will be marked by a new tipping point in the unfolding of the global systemic crisis. At that time indeed, the cumulated impact of the various sequences of the crisis (see table below) will reach its maximum strength and affect decisively the very heart of the systems concerned, on the frontline of which the United States, epicentre of the current crisis. In the United States, this new tipping point will translate into a collapse of the real economy, final socio-economic stage of the serial bursting of the housing and financial bubbles and of the pursuance of the US dollar fall. The collapse of US real economy means the virtual freeze of the American economic machinery: private and public bankruptcies in large numbers, companies and public services closing down massively,...”

It has a dire warning – “On the occasion of the second anniversary of the publication of our famous “Global systemic crisis Alert” which toured the world in February 2006 (4), LEAP/E2020 wishes to remind that we are now resolutely stepping into an era with no historical precedent. Our researchers insisted on that many times in the last two years: any comparison with the previous crises of our modern economy would be fallacious. It is neither a “remake” of the 1929 crisis nor a repetition of the 1970s oil crises or 1987 stock market crisis. It is truly a global systemic crisis, that is to say a crisis affecting the entire planet and questioning the very foundations of the international system upon which the world was organised in the last decades”

So wasn’t anybody paying attention?

For the most part, it looks like it. Although some institutions and people did see it coming…

As far back as April 2008, the IMF warned: “Credit Crisis Is Broadening”. The widening and deepening fallout from the U.S. subprime mortgage crisis could have profound financial-system and macroeconomic implications, according to the IMF's latest Global Financial Stability Report (GFSR).

"Financial markets remain under considerable stress because of a combination of three factors," said Jaime Caruana, head of the IMF's Monetary and Capital Markets Department. "First, the balance sheets of financial institutions are weakening; second, the deleveraging process continues and also that asset prices continue to fall; and, finally, the macroeconomic environment is more challenging because of the weakening global growth," he added.

The most profound aspect of this meltdown is that the crisis has weakened the capital and funding of large systemically important financial institutions, raising systemic risks.

So how did it begin?

According to an article on Wikipedia - The subprime mortgage crisis is an ongoing economic problem which became more apparent during 2007 and 2008, and is characterized by contracted liquidity in the global credit markets and banking system. The downturn in the U.S. housing market, risky lending and borrowing practices, and excessive individual and corporate debt levels have caused multiple adverse effects on the world economy. The crisis has passed through various stages, exposing pervasive weaknesses in the global financial system and regulatory framework.

The crisis began with the bursting of the United States housing bubble and high default rates on "subprime" and adjustable rate mortgages (ARM), beginning in approximately 2005-2006. For several years prior to that, an increase in loan incentives such as easy initial terms and a long-term trend of rising housing prices had encouraged borrowers to assume difficult mortgages in the belief they would be able to quickly refinance at more favorable terms. However, once housing prices started to drop moderately in 2006–2007 in many parts of the U.S., refinancing became more difficult. Defaults and foreclosure activity increased dramatically, as easy initial terms expired, home prices failed to go up as anticipated, and ARM interest rates reset higher. Foreclosures accelerated in the United States in late 2006 and triggered a global financial crisis through 2007 and 2008. During 2007, nearly 1.3 million U.S. housing properties were subject to foreclosure activity, up 79% from 2006.

Major banks and other financial institutions around the world have reported losses of approximately US$435 billion as of 17 July 2008. In addition, the ability of corporations to obtain funds through the issuance of commercial paper was affected. This aspect of the crisis is consistent with a credit crunch. The liquidity concerns drove central banks around the world to take action to provide funds to member banks to encourage lending to worthy borrowers and to restore faith in the commercial paper markets.

Who is to blame?

According to LEAP/E2020 - It is always a repeated astonishment for our team to see the degree of incapacity of these same experts and managers in understanding the specific nature of the phenomenon currently unfolding. According to them, this crisis would only be a usual crisis but bigger. As a matter of fact that's how the financial media reflect the dominant interpretations of the ongoing crisis. According to our team, this approach is not only intellectually lazy; it is also morally guilty, because it has for a main consequence to prevent their readers (whether they are simple citizens, private investors or public or private organization managers) from preparing for the upcoming shocks.

They continue their forecast of doom - For this reason, in opposition to all what can be read in the mainstream media always eager to conceal the truth and serve the interests of those who rule them, LEAP/E2020 wishes to remind that it is first and foremost in the United States that the systemic crisis is taking an unprecedented shape (the « Very Great US Depression » as our team decided to call it in January 2007) because it is around this country, and this country alone, that the world got progressively organized after the second World War. The various issues of the GEAB extensively described this situation. In short, it appears to be useful to make clear that neither Europe nor Asia have a negative saving rate, a full-scale housing crisis throwing millions of citizens out of their homes, a free-falling currency, abysmal public and trade deficits, an economic recession and, on top of all this, a number of costly wars to finance.

So is there a silver lining for Asia and Europe?

Time will tell… for now, we, as individuals, can do little except watch this unprecedented systemic crisis unfold and reshape the financial landscape of this planet. The tectonic shifts in the balance of financial power due to the collapse of pillars of the US economy (such as Lehman) are definitely reshaping the world order – but not with war this time, but with economics… Asia’s rise was foretold anyway… :-)

Saturday, September 27, 2008

Global Warming and its effects on Planetary Climate

There is a lot of buzz on the planet about global warming and its adverse effects on the planet’s ecology, our society and civilization at large. But what is all this debate about?

Thanks to the media attention, many people are concerned about global warming, but they do not know what to do about it. The first thing is to understand the problem and its apparent root cause: an increase in the amount of carbon dioxide in the planet’s atmosphere. Or is it?

According to Victor Miguel Ponce – “The concentration of carbon dioxide in the atmosphere determines to a large extent the present world climate, with temperature being an important component. Through geologic time, carbon dioxide entering the atmosphere from natural sources such as volcanic eruptions and weathering of rocks, has been gradually used by vegetation, through the process of photosynthesis. In the past two-and-a-half billion years, carbon has been temporarily stored above the Earth's surface as standing biomass and litter. During this time, excess quantities of carbon were permanently stored below the surface as fossil deposits of coal, petroleum, and natural gas”.

However, this is an opinion which, although backed with some facts, is not unanimous.

There is a somewhat informational site from the Environmental Defense Fund which provides arguments against the common misunderstandings that people in general have about global warming. It can be accessed here: Global Warming Myths and Facts. There is even a climate atlas available on this site.

Even the National Geographic has covered this topic of literally global concern. So is it happening? The answer is an emphatic yes. More and more energy is being pumped into the global weather/climate system, causing ever more violent weather: hurricanes, stronger monsoons, etc. An upsurge in extreme weather events, such as wildfires, heat waves, and strong tropical storms, is also attributed in part to climate change by some experts. The toll on human life is increasing because humanity is more widespread and numerous on the planet now than at any time in the history of our species.

T J Nelson provides an insight into the cold facts (contrary to the mass hysteria) about global warming in this illuminating article, which I strongly recommend reading. In it, Nelson attempts to provide a balanced and scientific view of the facts related to global warming. His assertion is essentially this: the current hysteria about global warming is largely hype created by the media and certain so-called scientific papers.

According to him – “Although carbon dioxide is capable of raising the Earth's overall temperature, the IPCC's predictions of catastrophic temperature increases produced by carbon dioxide have been challenged by many scientists. In particular, the importance of water vapor is frequently overlooked by environmental activists and by the media. The above discussion shows that the large temperature increases predicted by many computer models are unphysical and inconsistent with results obtained by basic measurements. Skepticism is warranted when considering computer-generated projections of global warming that cannot even predict existing observations”.

http://www.junkscience.com/Greenhouse/ tries to debunk the entire mythology around global warming. This article (or set of articles) actually tries to delink the “greenhouse effect” from global warming and the entire concept of climate change. It is very reassuring to the skeptic in me that there are people out there who are questioning the blind following and mass hysteria being created in certain circles.

So is there no cause for worry? Have humans caused this global warming? The opinion, though not unanimous, is overwhelmingly in favor of human causality. To understand this in a more coherent manner, one has to appreciate a few facts first:

  • There is general agreement amongst scientists that our planet, the Earth, was formed about 4,540,000,000 years ago – that’s 4.54 billion years.
  • At the beginning, and for quite some time after that, the Earth's atmosphere contained very little oxygen (less than 1% oxygen pressure).
  • Popular belief among the scientific community has it that early plants started to develop more than 2 billion years ago, probably about 2,700,000,000 years ago.
  • It is an established fact that through photosynthesis, plants take up carbon dioxide into the biosphere as organic matter and release oxygen as a byproduct.
  • Through the geological ages, oxygen accumulated gradually in the atmosphere, reaching its present value of about 21% of atmospheric gases. So our planet and its current atmosphere were terra-formed slowly, over millions of years, by the concerted action of biological agents. It is not a “natural” occurrence; rather, it would be safe to put it in these terms – life shaped the earth’s climate and atmosphere to suit itself and brought about the present distribution of gases which we breathe.
  • It is also believed that through the geological ages, surplus organic matter has been sequestered in the lithosphere as fossil organic materials (coal, petroleum, and natural gas).
  • Early animals (the first organisms with external shells) started to develop around 600,000,000 years ago.
  • Animals operate in the opposite way to plants: they take up oxygen, burn organic matter (food), and release carbon dioxide as a byproduct.
  • Early humans (Australopithecus anamensis) began to develop about 4,100,000 years ago.
  • Cool climatic conditions have prevailed during the past 1,000,000 years. The species Homo sapiens evolved under these climatic conditions.
  • Homo sapiens, that’s us, dates back no more than 400,000 years.
  • Estimates for the variety Homo sapiens sapiens, to which all humans belong, range from 130,000 to 195,000 years old.
  • The concentration of carbon dioxide in the atmosphere was as low as 190 ppm during the last Ice Age, about 21,000 years ago.
  • The last Ice Age began to recede about 20,000 years ago.
  • The agricultural revolution, in which humans converted forests and rangelands into farms, began about 10,000 years ago.
  • The agricultural revolution reduced the standing biomass in the biosphere and the uptake of carbon dioxide in midlatitudinal regions, indirectly contributing, however slightly, to global warming.
  • The concentration of carbon dioxide in the atmosphere increased gradually from a low of 190 ppm 21,000 years ago to about 290 ppm in the year 1900, i.e., at an average rate of 0.00478 ppm per year.
  • The industrial revolution, in which humans developed machines (artificial animals, since they consume fuels, which are mostly organic matter), began in England about 240 years ago (1767).
  • In October 1999, the world's population reached 6,000,000,000, double that of the year 1959 (the doubling occurred in 40 years).
  • The global fleet of motor vehicles is estimated at 830,000,000 (2006).
  • The global fleet of motor vehicles has recently been growing at the rate of 16,000,000 per year.
  • Motor vehicles (cars, trucks, buses, and scooters) account for 80% of all transport-related energy use.
  • The concentration of carbon dioxide in the atmosphere, which was at 290 ppm in the year 1900, rose to 316 ppm in 1959, an average of 0.44 ppm per year.
  • Measurements since 1959 (316 ppm) have revealed an increase to 378 ppm in 2004, an average of 1.38 ppm per year.
  • The concentration of carbon dioxide has increased by an average of about 1.8 ppm per year over the past two decades.
  • The year 1998 was the warmest on record. The year 2002 was the second warmest (to that date). The year 2003 was the third warmest (to that date). The year 2004 was the fourth warmest (to that date). The year 2005 equaled 1998 as the warmest on record. The year 2007 equaled 1998 as the second warmest on record.
  • About 75% of the annual increase in atmospheric carbon dioxide is due to the burning of fossil fuels.
  • The remaining 25% is attributed to anthropogenic changes in land use, which reduce the net uptake of carbon dioxide. Anthropogenic changes in land use occur when forests are converted to rangelands, rangelands to agriculture, and agriculture to urban areas.
  • Other patterns of land degradation – deforestation, overgrazing, over-cultivation, desertification, and salinization – reduce the net uptake of carbon dioxide, indirectly contributing, however slightly, to global warming.
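The average CO2 growth rates in the list above follow directly from the concentration milestones; here is a minimal Python check. Note one assumption: the Ice Age low is placed 20,900 years before 1900, the spacing that reproduces the quoted 0.00478 ppm/year figure.

```python
# Reproduce the average CO2 growth rates quoted above from the
# (year, concentration in ppm) milestones. The first entry places the
# Ice Age low ~21,000 years before 1900 (20,900 here, the spacing that
# matches the quoted 0.00478 ppm/year).
milestones = [
    (1900 - 20900, 190),  # last Ice Age low
    (1900, 290),
    (1959, 316),
    (2004, 378),
]

for (y0, c0), (y1, c1) in zip(milestones, milestones[1:]):
    rate = (c1 - c0) / (y1 - y0)
    print(f"{y0} to {y1}: {rate:.5f} ppm/year")
```

This prints roughly 0.00478, 0.44 and 1.38 ppm/year, matching the three rates quoted above and making the accelerating trend plain.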

What Could Happen?

A follow-up report by the IPCC released in April 2007 warned that global warming could lead to large-scale food and water shortages and have catastrophic effects on wildlife.

  • Sea level could rise between 7 and 23 inches (18 to 59 centimeters) by century's end, the IPCC's February 2007 report projects. Rises of just 4 inches (10 centimeters) could flood many South Seas islands and swamp large parts of Southeast Asia.
  • Some hundred million people live within 3 feet (1 meter) of mean sea level, and much of the world's population is concentrated in vulnerable coastal cities. In the U.S., Louisiana and Florida are especially at risk.
  • Glaciers around the world could melt, causing sea levels to rise while creating water shortages in regions dependent on runoff for fresh water.
  • Strong hurricanes, droughts, heat waves, wildfires, and other natural disasters may become commonplace in many parts of the world. The growth of deserts may also cause food shortages in many places.
  • More than a million species face extinction from disappearing habitat, changing ecosystems, and acidifying oceans.
  • The ocean's circulation system, known as the ocean conveyor belt, could be permanently altered, causing a mini-ice age in Western Europe and other rapid changes.
  • At some point in the future, warming could become uncontrollable by creating a so-called positive feedback effect. Rising temperatures could release additional greenhouse gases by unlocking methane in permafrost and undersea deposits, freeing carbon trapped in sea ice, and causing increased evaporation of water.

We need to remember that the dinosaurs died out 65 million years ago – primarily due to climate change. It wasn’t only the asteroid that struck the earth and wiped out so many species on land and in the oceans. It wasn’t just the impact, but the climate change induced by that impact, which killed those species.

Can we do anything to stem this rising tide?

Sure! Just follow the tips on this wonderful blog. Grinning Planet provides some clues – ones I would have recommended anyway, even if our planet weren’t threatened with global warming and so-called catastrophic climate change!

Other resources: