Wednesday, October 26, 2011

Have you ever considered how you would go about re-starting civilisation if the Earth were struck by a comet, or suffered some other wide-scale, civilisation-ending catastrophe? The people at Open Source Ecology have, and they are in the process of building the Global Village Construction Set.

This construction set is a platform which will allow you to build 50 industrial machines that you can use to re-build civilisation from the ground up.

The complete Global Village Construction Set will include 50 different industrial machines, from tractors and sawmills to wind turbines, steam generators, CNC mills, 3D printers and laser cutters. Apparently, with all 50 of these machines, you can create "a small civilization with modern comforts." Yep, that's all it takes. So far, they've prototyped eight of these machines, and with help from Kickstarter, they're hoping to get to work on the other 42 to make them available to hopeful developing civilizations all over the world. _DVice

What Makes the Global Village Construction Set so special?

Open Source - We freely publish our 3D designs, schematics, instructional videos, budgets, and product manuals on our open source wiki, and we harness open collaboration with contributors.
Low-Cost - The cost of making or buying our machines is, on average, 8x cheaper than buying from an industrial manufacturer, including an average labor cost of $15/hour for a GVCS fabricator and using mail-order parts.
Modular - Motors, parts, assemblies, and power units can interchange.
User-Serviceable - Design-for-disassembly allows the user to take apart, maintain, and fix tools readily, without the need to rely on expensive repairmen.
DIY - The user gains control of designing, producing, and modifying the GVCS tool set.
Closed Loop Manufacturing - Metal is an essential component of advanced civilization, and our platform allows for recycling metal into virgin feedstock for producing further GVCS technologies, thereby allowing for cradle-to-cradle manufacturing cycles.
High Performance - Performance standards must match or exceed those of industrial counterparts for the GVCS to be viable.
Flexible Fabrication - It has been demonstrated that the flexible use of generalized machinery in appropriate-scale production is a viable alternative to centralized production.
Open Business Models - We encourage the replication of enterprises that derive from the GVCS platform as a route to truly free enterprise, along the ideals of Jeffersonian democracy.
Industrial Efficiency - In order to provide a viable choice for a resilient lifestyle, the GVCS platform matches or exceeds productivity standards of industrial counterparts. _Global Village Construction Set
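The "8x cheaper" claim above can be put into a small cost model. In the sketch below, the material cost and build hours are assumed placeholders; only the $15/hour fabricator rate comes from the list.

```python
# Hypothetical cost model for the "8x cheaper" claim. Material cost and
# build hours are assumed placeholders; the $15/hour fabricator rate is
# the figure OSE quotes.

def diy_cost(materials, build_hours, labor_rate=15.0):
    """Cost of fabricating a GVCS machine yourself from mail-order parts."""
    return materials + build_hours * labor_rate

def implied_commercial_price(materials, build_hours, factor=8.0):
    """Retail price implied by the claimed 8x cost advantage."""
    return factor * diy_cost(materials, build_hours)

# e.g. a tractor built from $8,000 in parts over 160 shop hours (assumed)
tractor = diy_cost(8_000, 160)                 # $10,400 all-in
retail = implied_commercial_price(8_000, 160)  # ~$83,200 implied retail
```

Plug in your own parts bill and shop hours to see what the claimed multiple would imply for any given machine.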

Saturday, October 22, 2011

This development from Japan is said to reduce the water requirements for vegetable gardening by 80%! The thin hydrogel films are said to be reusable for up to 3 years per film. Consider this only the beginning for this new approach -- halfway between hydroponics and aeroponics.

This type of farming should be useful for future space and lunar colonies, as well as for farms in deserts, polar regions, and underground.

Tokyo-based Mebiol is working on a membrane-based plant cultivation technology called Imec that makes it possible to grow plants on thin film instead of soil. The film is made of a water-absorbent material called hydrogel and is just “tens of microns” thick.

Mebiol says that tomatoes, radishes, cucumbers, melons, etc. need up to 80% less water to grow when compared with conventional culture, and that 1g of SkyGel (that’s the brand name of the hydrogel) absorbs and holds 100ml of water. And unlike soil, the film gives bacteria and viruses no chance to harm the plants. Another advantage is that SkyGel can be used on various surfaces, including sand, concrete or ice (see this PDF for examples from recent years).
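The numbers in the paragraph above are easy to work with directly. A minimal sketch, where the 60-liter-per-plant conventional figure is an assumed example rather than a Mebiol number:

```python
# Working the article's numbers: up to 80% less water than conventional
# culture, and 1 g of SkyGel holding 100 ml of water. The 60 L/plant
# conventional figure is an assumed example, not a Mebiol number.

def film_water_needed(conventional_liters, savings_fraction=0.8):
    """Water needed on the hydrogel film, given conventional usage."""
    return conventional_liters * (1.0 - savings_fraction)

def gel_grams_to_hold(liters, absorption_ml_per_g=100.0):
    """Grams of hydrogel needed to hold a volume of water at once."""
    return liters * 1000.0 / absorption_ml_per_g

needed = film_water_needed(60.0)  # 12 L instead of 60 L per plant
gel = gel_grams_to_hold(needed)   # ~120 g of gel could hold all of it
```

The savings fraction is a ceiling ("up to 80%"), so real-world figures would land somewhere below it.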

The film can be used to grow plants for 2-3 years before it needs to be replaced, according to the company. _TechCrunch_via_ImpactLab

Expect more indoor farming of all types, once this new growth medium becomes more widely available. If you are thinking that marijuana farmers will be looking at this, you are probably right. And as new genetically engineered flowers and vegetables come along that borrow genes from the coca plant and the opium poppy, expect many more domestic sources for such drugs -- or virtually any drug -- in the not so distant future.

The gravity of Europe’s demographic situation became clear at a conference I attended in Singapore last year. Dieter Salomon, the green mayor of the environmentally correct Freiburg, Germany, was speaking about the future of cities. When asked what Germany’s future would be like in 30 years, he answered, with a little smile, “There won’t be a future.”

Herr Mayor was not exaggerating. For decades, Europe has experienced some of the world’s slowest population growth rates. Fertility rates have dropped well below replacement rates, and are roughly 50% lower than those in the U.S. Over time these demographic trends will have catastrophic economic consequences. By 2050, Europe, now home to 730 million people, will shrink by 75 million to 100 million and its workforce will be 25% smaller than in 2000._JoelKotkin_Forbes

When a welfare state is in the throes of a shrinking demographic, the implications are extremely dire. Guarantees of benefits in these nations are based upon a pyramid scheme, with the older pensioners near the apex and the younger taxpayers near the base. If the positions are reversed due to a collapsing demographic, national debt tends to grow to massive proportions rather quickly.

The fiscal costs of this process are already evident. Countries like Spain, Italy and Greece, which rank among the most rapidly aging populations in the world, are teetering on the verge of bankruptcy. One reason has to do with the lack of enough productive workers to pay for generous pensions and other welfare-state provisions.

Germany, the über-economy of the continent, has little hope of avoiding the demographic winter either. By 2030 Germany will have about 53 retirees for every 100 people in its workforce; by comparison the U.S. ratio will be closer to 30. As a result, Germany will face a giant debt crisis, as social costs for the aging eat away its currently frugal and productive economy. According to the American Enterprise Institute’s Nick Eberstadt, by 2020 Germany’s debt service relative to GDP will rise to twice that currently suffered by Greece. _Forbes_Kotkin
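The pension arithmetic behind these ratios is straightforward pay-as-you-go division. The 53 and 30 retirees-per-100-workers figures are from the article; the annual benefit level is an assumed placeholder:

```python
# Pay-as-you-go pension arithmetic. The 53 and 30 retirees-per-100-workers
# ratios are from the article; the 20,000/year benefit is an assumed
# placeholder to make the comparison concrete.

def per_worker_burden(retirees_per_100_workers, annual_pension):
    """Annual pension cost each worker must cover in a pure
    pay-as-you-go system."""
    return retirees_per_100_workers / 100.0 * annual_pension

germany_2030 = per_worker_burden(53, 20_000)  # ~10,600 per worker per year
usa_2030 = per_worker_burden(30, 20_000)      # ~6,000 per worker per year
# at identical benefit levels, each German worker carries ~77% more
```

The point is structural: at any benefit level, the burden scales linearly with the retiree-to-worker ratio, which is why the ratio itself is the number to watch.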

The United States under Obama has been trying to imitate the European style of government -- a designed expansion of the public sector at the expense of the private sector. Fortunately for Obama -- in one sense -- the US is not suffering the same demographic collapse as Europe. The US is instead still growing demographically, due to immigration and due to higher birthrates among immigrants.
But in another sense, the US population growth is not as fortunate as it appears. The new replacement populations coming into the US are, on average, of lower aptitude in a cognitive sense. Average IQ of the US population is almost certain to drop as a result, and US global competitiveness will consequently be strained.

Trends in SAT test scores are an early warning of this very phenomenon:

"The scores are disappointing, and it seems to be a trend over the last five to six years, with drops across the board," said Jim Hull, senior policy analyst for the National School Board Association's Center for Public Education.

Test score drops will be blamed on a number of factors, but the studiously ignored central cause of this trend is the lowered cognitive potential of modern students. And it is only likely to get worse. And the repercussions of this cognitive decline will spread throughout society -- and will be magnified tenfold by affirmative action policies.

A dumber population will place greater demands on governmental infrastructure. Law enforcement, welfare, education, housing, food subsidies, etc. will all have to expand to compensate for a population of lower cognitive aptitude.

National debt will increase even faster than at present, as the underlying society grows less capable of repaying its own debt on top of the debt of previous generations. Economic hardship will increase.

Multicultural societies are low-trust societies, which require much larger police forces to maintain order. As police forces are downsized due to the exponential growth in public sector union pensions and benefits, civil disorder will expand to fill the void.

Youth gangs made up almost entirely of immigrants, children of immigrants, and "minority" populations, will range the landscape virtually unimpeded by a law enforcement infrastructure that will grow more corrupt as it takes on the multi-cultural form of the new dominant populations of society. The initiation ritual for many of these gangs is likely to be an act of violence against the shrinking population of the formerly dominant, European descended people.

Can you and yours find a place of safety in this coming world? That depends upon what you do between now and then.

For the architects of the Next Level, this dumbed down demographic -- a de facto Idiocracy -- presents special challenges. It will require working through alternative infrastructures, outside of traditional governmental institutions.

The creation of such "shadow governments" will not be easy, but it will not necessarily be as expensive as one might think. More on that later.

Friday, October 21, 2011

When will humanity reach the Singularity, that now-famous point in time when artificial intelligence becomes greater than human intelligence? It is aptly named, say proponents like Ray Kurzweil: like the singularity at the center of a black hole, we have no idea what happens once we reach it. However, the debate today is not what happens after the Singularity, but when it will happen. _BigThink

In the video below, Kurzweil discusses some of his ideas about the coming singularity, including timelines and cautionary notes.

Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point. Once these machines exist, Kurzweil and Vinge claim, they'll possess a superhuman intelligence that is so incomprehensible to us that we cannot even rationally guess how our life experiences would be altered. Vinge asks us to ponder the role of humans in a world where machines are as much smarter than us as we are smarter than our pet dogs and cats. Kurzweil, who is a bit more optimistic, envisions a future in which developments in medical nanotechnology will allow us to download a copy of our individual brains into these superhuman machines, leave our bodies behind, and, in a sense, live forever. It's heady stuff.

While we suppose this kind of singularity might one day occur, we don't think it is near.

...Kurzweil's reasoning rests on the Law of Accelerating Returns and its siblings, but these are not physical laws. They are assertions about how past rates of scientific and technical progress can predict the future rate. Therefore, like other attempts to forecast the future from the past, these "laws" will work until they don't. More problematically for the singularity, these kinds of extrapolations derive much of their overall exponential shape from supposing that there will be a constant supply of increasingly more powerful computing capabilities. For the Law to apply and the singularity to occur circa 2045, the advances in capability have to occur not only in a computer's hardware technologies (memory, processing power, bus speed, etc.) but also in the software we create to run on these more capable computers. To achieve the singularity, it isn't enough to just run today's software faster. We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this. _Technology Review_Paul Allen

Allen goes on to discuss the "complexity brake" that the limitations of the human brain -- and of human understanding of the human brain -- will apply to any endeavour whose complexity begins to accelerate too quickly.

Allen's argument is remarkably similar to arguments previously put forward by Al Fin neuroscientists and cognitivists. The actual way the human brain works is very poorly understood -- even by the best neuroscientists and cognitivists. If that is true, the understanding of the brain by artificial intelligence researchers tends to be orders of magnitude poorer. If these are the people who are supposed to come up with super-human intelligence and the "uploading of human brains" technology that posthuman wannabes are counting on, good luck!

Allen writes that "the Law of Accelerating Returns (LOAR). . . is not a physical law." I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a finer level. A classical example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, they model each particle as following a random walk. So by definition, we cannot predict where any particular particle will be at any future time. Yet the overall properties of the gas are highly predictable to a high degree of precision according to the laws of thermodynamics. So it is with the law of accelerating returns. Each technology project and contributor is unpredictable, yet the overall trajectory as quantified by basic measures of price-performance and capacity nonetheless follow remarkably predictable paths.
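Kurzweil's thermodynamics analogy can be made concrete (this sketch is ours, not his): each particle's random walk is individually unpredictable, yet the ensemble statistic converges to a predictable value.

```python
# Each +/-1 random walk is individually unpredictable, yet the ensemble
# root-mean-square displacement converges to sqrt(steps) -- the same
# emergent-predictability pattern Kurzweil invokes for the LOAR.
import random

def walk_endpoint(steps, rng):
    """Endpoint of one +/-1 random walk; individually unpredictable."""
    return sum(rng.choice((-1, 1)) for _ in range(steps))

def ensemble_rms(n_walkers, steps, seed=0):
    """Root-mean-square displacement over many walkers; theory predicts
    sqrt(steps) no matter what any single walker does."""
    rng = random.Random(seed)
    mean_sq = sum(walk_endpoint(steps, rng) ** 2
                  for _ in range(n_walkers)) / n_walkers
    return mean_sq ** 0.5

# 2000 walkers of 100 steps land close to sqrt(100) = 10
```

Whether technology projects really aggregate the way gas particles do is, of course, exactly the point in dispute between Allen and Kurzweil.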

...Allen writes that "these 'laws' work until they don't." Here, Allen is confusing paradigms with the ongoing trajectory of a basic area of information technology. If we were examining the trend of creating ever-smaller vacuum tubes, the paradigm for improving computation in the 1950s, it's true that this specific trend continued until it didn't. But as the end of this particular paradigm became clear, research pressure grew for the next paradigm.

...Allen's statement that every structure and neural circuit is unique is simply impossible. That would mean that the design of the brain would require hundreds of trillions of bytes of information. Yet the design of the brain (like the rest of the body) is contained in the genome. And while the translation of the genome into a brain is not straightforward, the brain cannot have more design information than the genome. Note that epigenetic information (such as the peptides controlling gene expression) do not appreciably add to the amount of information in the genome. Experience and learning do add significantly to the amount of information, but the same can be said of AI systems.

...How do we get on the order of 100 trillion connections in the brain from only tens of millions of bytes of design information? Obviously, the answer is through redundancy. There are on the order of a billion pattern-recognition mechanisms in the cortex. They are interconnected in intricate ways, but even in the connections there is massive redundancy. The cerebellum also has billions of repeated patterns of neurons.

...Allen mischaracterizes my proposal to learn about the brain from scanning the brain to understand its fine structure. It is not my proposal to simulate an entire brain "bottom up" without understanding the information processing functions. We do need to understand in detail how individual types of neurons work, and then gather information about how functional modules are connected. The functional methods that are derived from this type of analysis can then guide the development of intelligent systems. Basically, we are looking for biologically inspired methods that can accelerate work in AI, much of which has progressed without significant insight as to how the brain performs similar functions. _TechnologyReview_Ray Kurzweil
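Kurzweil's redundancy argument, from the passage above, reduces to a back-of-envelope division. The genome and connection counts are the round figures he uses; the 50-megabyte compressed-design estimate is our assumed stand-in for his "tens of millions of bytes":

```python
# Kurzweil's redundancy argument as back-of-envelope division. Genome
# and connection counts are the round figures from the passage; the
# 50 MB compressed-design estimate is an assumed stand-in for
# "tens of millions of bytes."

GENOME_BYTES = 800e6       # ~3.2e9 base pairs at 2 bits per base pair
DESIGN_BYTES = 50e6        # assumed compressed design information
CONNECTIONS = 100e12       # ~100 trillion connections in the brain

redundancy = CONNECTIONS / DESIGN_BYTES
# ~2,000,000 connections per byte of design information: the wiring
# must arise from massively repeated patterns, not be specified
# connection by connection.
```

However one tunes the assumed compression, the ratio stays in the millions, which is the heart of his claim.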

Kurzweil's attitude seems to be: "Because difficult problems have arisen and been solved in the past, we can expect that all difficult problems that arise in the future will also be solved." Perhaps I am being unfair to Kurzweil here, but his reasoning appears to be fallacious in a rather facile way.

Al Fin neuroscientists and cognitivists warn Kurzweil and other singularity enthusiasts not to confuse the cerebellum with the cerebrum, in terms of complexity. They further warn Kurzweil not to assume that a machine intelligence researcher can simply program a machine to emulate neurons and neuronal networks to a certain level of fidelity, and then vastly expand that model to the point that it achieves human-level intelligence. That is a dead end trap, which will end up wasting many billions of dollars of research funds in North America, Europe, and elsewhere.

This debate has barely entered its opening phase. Paul Allen is ahead in terms of a realistic appraisal of the difficulties ahead. Ray Kurzweil scores points based upon his endless optimism and his proven record of skillful reductionistic analyses and solutions of previous problems.

Simply put, the singularity is not nearly as near as Mr. Kurzweil predicts. But the problem should not be considered impossible. Clearly, we will need a much smarter breed of human before we can see our way clear to the singularity. As smart as Mr. Kurzweil is, and as rich as Mr. Allen is, we are going to need something more from the humans who eventually birth the singularity.

Wednesday, October 19, 2011

Mass extinctions have played an important role in the evolution of Terrestrial life. With each mass extinction, the way is cleared for the spread and adaptation of surviving species, and for the emergence of new species. But that is not what we will talk about today.

Recent findings in geochemistry have called into doubt some of the pet theories of climate scientists (scientologists?) concerning acid oceans and mass ocean extinctions. Here is the abstract from the paper in PNAS:

Periods of oceanic anoxia have had a major influence on the evolutionary history of Earth and are often contemporaneous with mass extinction events. Changes in global (as opposed to local) redox conditions can be potentially evaluated using U system proxies. The intensity and timing of oceanic redox changes associated with the end-Permian extinction horizon (EH) were assessed from variations in 238U/235U (δ238U) and Th/U ratios in a carbonate section at Dawen in southern China. The EH is characterized by shifts toward lower δ238U values (from -0.37‰ to -0.65‰), indicative of an expansion of oceanic anoxia, and higher Th/U ratios (from 0.06 to 0.42), indicative of drawdown of U concentrations in seawater. Using a mass balance model, we estimate that this isotopic shift represents a sixfold increase in the flux of U to anoxic facies, implying a corresponding increase in the extent of oceanic anoxia. The intensification of oceanic anoxia coincided with, or slightly preceded, the EH and persisted for an interval of at least 40,000 to 50,000 y following the EH. These findings challenge previous hypotheses of an extended period of whole-ocean anoxia prior to the end-Permian extinction. _PNAS

The suggestion is that the ocean anoxia was secondary to the main extinction event, rather than being its cause. More study will be necessary to validate the isotopic techniques utilised. But this finding cannot help but be a disappointment to the politically correct denizens of deep climate science (scientology?).
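The isotope argument in the abstract can be sketched as a toy steady-state mass balance. The seawater δ238U values are from the abstract; the river-input value and fractionation factor are assumed round numbers, so the result is illustrative only:

```python
# Toy steady-state uranium isotope mass balance, in the spirit of the
# abstract. Seawater delta-238U values are from the abstract; the river
# input (-0.34 permil) and the anoxic-sink fractionation (0.6 permil)
# are assumed round numbers, so the result is illustrative only.

def anoxic_sink_fraction(d_seawater, d_river=-0.34, frac=0.6):
    """Fraction of the U removal flux going to anoxic sediments,
    assuming anoxic burial enriches 238U by `frac` permil relative to
    seawater while other sinks remove U unfractionated."""
    return (d_river - d_seawater) / frac

before = anoxic_sink_fraction(-0.37)  # pre-extinction seawater value
after = anoxic_sink_fraction(-0.65)   # value at the extinction horizon
# the anoxic flux fraction rises severalfold; the paper's own sixfold
# estimate depends on its particular input parameters
```

The take-away is the direction and rough magnitude of the shift, not the exact multiplier, which the authors derive from a fuller model.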

But what interests Al Fin know-it-all-o-tologists about this information is how it may relate to the production and sequestration of ancient oil. Deep ocean anoxia is not only related to mass extinction events; it is also a component of oil formation in the deep seabed.

Sea bottom anoxia occurs routinely at the mouths of large rivers, where massive sediment routinely buries dead sealife that is constantly deposited on the seafloor. That is why rich oil fields are often found offshore of large river deltas -- either where the deltas are now, or where they were hundreds of millions of years ago.

An ancient oil sleuth must be able to backward-trace the movements of continents and great river valleys, in order to know where to look for such sediment-buried deposits.

Another cause of mass sediment burial of seafloor organic material is massive volcanic activity. This would be particularly important to an ancient oil sleuth when a group of volcanoes stays active for millions of years in the same general vicinity, upwind of river deltas or rich upwelling currents.

But mass extinctions, when large scale deep ocean anoxia coincides with massive deposition of organic material onto the seafloor, might be particularly fertile periods for the initiation of large scale oil production.

When this process occurs over continental crust, the oil can be preserved for a very long time. If it occurs over oceanic crust, the oil may be subducted with the crust into the mantle, where it will likely be converted into short chain hydrocarbons, CO2, CO, and other forms of carbon. The short chain hydrocarbons may return to the crust, and may eventually be recovered economically. Diamond and graphite may also return to depths which allow humans to recover them economically.

Regardless, it is the ancient oil we are interested in. The challenge is to connect the extinction events, the ocean anoxia, and the ancient geographic patterns together, to provide the best guess for the locations of giant oil deposits which might conceivably still exist in an undiscovered, but ultimately recoverable state.

Humans have become accustomed to utilising the easy oil, and are just now getting good at recovering oil from the harsh, deep ocean environments. That is a good thing, because the Earth is 70% ocean-covered.

Still, some of the planet which was once covered by oceans is now dry land, and such places -- if they fit the criteria above -- might be some of the first locations to check out.

Saturday, October 15, 2011

When the ancient people wanted to be safe from zombies, wild animals, and other enemies, they would locate their homes in difficult-to-access places. In cliffs, caves, or underground, they would take advantage of natural havens, and sometimes dig into sheer rock, to find safety for themselves and their tribe.

Modern people also dig deeply through solid rock, to seek safety from more modern weapons and enemies. But zombies would find it difficult to penetrate the defenses of Cheyenne Mountain, as would fire, flood, tornado, and most nuclear weapons short of a direct megaton scale hit.

Some tribes sought safety in the far north, and learned to survive where other tribes -- including zombies -- could not. Although their modern descendants are forgetting the ways of extreme arctic survival, they may soon be forced to re-learn those skills -- if they can.

We have recently looked at this example of a zombie-proofed home, complete with drawbridge and concrete shutters. While we have no record of how such homes survive in the face of tornadoes, hurricanes, earthquakes, or wildfire, there is reason to believe that for most natural disasters, such a home would do better than a traditional stick home.

Monolithic dome homes can be "bermed," or covered with earth. In fact, these domes are strong enough to be covered with approximately 30 feet of soil without suffering damage. Covering these steel-reinforced concrete domes with earth adds to their innate protection against fire and storm.
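The 30-foot soil-cover claim is easy to sanity check as static overburden pressure. The soil density below is an assumed typical value, and the sketch ignores soil arching and dynamic loads:

```python
# Sanity check on the 30-foot soil cover: static overburden pressure.
# Soil density is an assumed typical value; this ignores soil arching
# and any dynamic (seismic, impact) loads.

def soil_pressure_kpa(depth_ft, density_kg_m3=1600.0, g=9.81):
    """Vertical pressure in kPa beneath a soil column of given depth."""
    depth_m = depth_ft * 0.3048
    return density_kg_m3 * g * depth_m / 1000.0

p = soil_pressure_kpa(30)  # ~143 kPa, roughly 3,000 lb per square foot
```

That is a serious but plausible design load for a steel-reinforced concrete shell, which is consistent with the berming claim.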

A regular monolithic dome placed on a hilltop location allows for maximum visibility and advance warning against zombie attack. Monolithic domes have survived hurricanes, tornadoes, earthquakes, and wildfires, in situations where most other residences around them were destroyed. The steel rebar inside the dome, if grounded, should provide some EMP protection; supplementary grounded wire mesh integrated into the structure would provide extra shielding.

Some imaginative builders have built houses on hydraulic lifts, in case of flooding. Lifting the house up and out of reach of zombies would provide the house with additional security. The ultimate in moving a house out of harm's way would be the flying house. But floating houses have been built for low-lying areas, and might be considered if one were forced to live in such a place.

This home in Hollywood Hills has sometimes been referred to as the "safest house in the world." More likely, it is the safest house in Hollywood Hills.

This Al Corbi house provides excellent protection against attack by criminals or zombies -- with its bullet-proof walls, windows, and doors -- and allows for quick evacuation via rooftop helipad in case of regional disaster or global collapse. But it lacks its own power supply, and other necessities needed for an extended siege situation, when basic services have collapsed.

The rolling steel shutters pictured on this house's doors and windows provide excellent protection against thieves, invaders, and zombies. Such types of protection are likely to be in greater demand as the Obama crisis deepens around the world.

Here are some other things to consider:

1. How would you keep your house from freezing in the winter if electricity were unavailable for a long period of time? Do you have some type of wood burning heater? What about hot water?

2. Do you have back-up cooking facilities if an earthquake made natural gas unavailable for a month or two? Could you heat water?

3. What if you lose both electricity and gas?

4. Would you be willing to rely on batteries and candles for illumination if a major power outage lasted more than a week?

5. Do you have extra tanks of potable water should public water supplies be cut off or contaminated? Would you know how to collect and filter your own water if none was available for a long time?

6. If a winter storm damaged windows in your home, would you have sufficient plastic sheeting and repair materials to quickly enclose the open areas to retain heat? _The Secure Home

Interesting questions to keep in mind, but what if you had to worry about all of that at the same time that you were under zombie attack? Obviously the author of "The Secure Home" was not thinking in broad enough terms for the modern age.

If you had to choose one type of home to build for maximum security in a relatively short time span, the monolithic dome is probably your best bet. You would need to provide sufficient protected storage for food, water, medicines, trade goods, and other supplies. You should consider rolling steel shutters for windows and doors, or other methods of protecting those weak points. Avoid building on a flood plain, or in a location without adequate visibility of your surroundings. Stay away from high crime areas, and areas of known zombie habitation. Keep your garden spaces within a defensible area, for the most part.

Most importantly, make sure that you are close to a community of skilled and competent people, who possess a broad range of expertise, and who share your basic values toward private property, free market exchange, and respect for human life.

Friday, October 14, 2011

Imagine how much it would cost to fly from San Francisco to London if the airlines had to destroy every airliner after each use. But that is the basic logic of space launch, where spacecraft typically do not survive the journey, requiring a new craft to be built for each trip. But what if you could re-use all parts of your craft, with rapid turnaround between launches? Shouldn't that bring down the cost of space exploration and development?
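The airliner analogy above is just amortization: spread the vehicle's build cost over its flights. A minimal sketch, with assumed round-number costs rather than any company's real figures:

```python
# Amortized launch cost: build cost divided over the vehicle's lifetime
# flights, plus per-flight operations. All dollar figures are assumed
# round numbers for illustration, not actual company costs.

def cost_per_flight(build_cost, flights, ops_per_flight):
    """Amortized cost of one launch: build cost spread over lifetime
    flights, plus per-flight operations (fuel, refurbishment)."""
    return build_cost / flights + ops_per_flight

expendable = cost_per_flight(50e6, 1, 1e6)   # thrown away each time: $51M
reusable = cost_per_flight(50e6, 20, 2e6)    # 20 flights, pricier refurb: $4.5M
```

The catch, as the shuttle showed, is that the advantage evaporates if refurbishment costs balloon or the flight rate never materializes.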

NASA's space shuttle is the only orbital reusable launch vehicle that's flown to date, and it was retired this summer after falling far short of its original goals to launch frequently and inexpensively -- the agency projected it would fly up to 50 missions per year at an operating cost of $10.5 million per flight. It turned out that the shuttles flew less than five times per year at an operating cost 20 times that.

SpaceX's approach is to convert the two stages of the Falcon 9 rocket into independent vehicles capable of making return landings at their launch site. The first stage, after separating from the rest of the rocket, would fire its engines to guide itself back to the launch site, extending a set of legs from its base to land vertically. The upper stage, outfitted with the heat shield that SpaceX developed for its Dragon spacecraft, which was designed to transport cargo and eventually crews to and from the space station, would reenter after deploying its payload in space. It would also use its engine for a powered vertical landing.

Musk is backing up his speech with development work. SpaceX has been quietly building an experimental vehicle called Grasshopper to test the vertical landing technology. Grasshopper is a Falcon 9 first stage outfitted with a single engine and landing legs to allow it to take off and land vertically.

...SpaceX is not the only company actively working on an orbital reusable launch vehicle. Blue Origin, the secretive aerospace company founded by Amazon.com CEO Jeff Bezos, has NASA funding to mature the design of a space vehicle that could be launched on existing expendable rockets, such as the Atlas V. Eventually, though, Blue Origin plans to replace the Atlas with its own reusable orbital launch vehicle, and is using part of the $22 million Commercial Crew Development award it received from NASA earlier this year to work on an engine for that rocket.

"We intend to fly our own Blue Origin reusable launch vehicles that will take [our] space vehicle up and make that system much more affordable," said Rob Meyerson, program manager at Blue Origin, at AIAA Space 2011. The company has not disclosed development schedules or other technical details about its planned vehicle. However, the support the company has from NASA, coupled with the financial backing provided by Bezos, makes the company's effort worth watching.

This is not the first time companies have shown an interest in building reusable launch vehicles. In the late 1990s, several companies, including Kistler Aerospace and Rotary Rocket Company, had ambitious plans for orbital reusable launch vehicles, but their projects never materialized.

What's the difference this time around? Charles Lurio, a space industry consultant and publisher of The Lurio Report newsletter, says current companies have made more progress than earlier firms, including building and flying hardware. "They have a fair shot at making it work," he says, "but nothing's guaranteed." _TechnologyReview

Wednesday, October 12, 2011

The Skolkovo innovation center is a high technology business area being built near Moscow. It will host five scientific communities that carry top priority for Russia -- energy, information technology, telecommunications, biomedicine and nuclear technologies -- as the country diversifies from being largely powered by natural resources. The 600 hectare complex designed by French architects AREP will be situated next to the campus of Skolkovo Moscow School of Management, a top-level business school founded by leading Russian and international companies.... Siemens, Boeing, IBM, Dell and Nokia are among other leading companies that have committed to participation at Skolkovo.

Russian government-led initiatives such as tax incentives to stimulate development and loosening of restrictions on importing foreign workers and technologies have been enacted to facilitate the high-tech hub. Over 200 laws have been amended to facilitate the participation of international companies at Skolkovo and to encourage sustainable innovation among Russian startups. _Marketwatch

Russia is a resource-rich nation with a vast land area but a shrinking and aging population. Under Vladimir Putin, Russians have seen their freedoms steadily disappear, and their national health and morale wither. The best and the brightest are fleeing to better opportunities outside the country, as government insiders strip the infrastructure of capital and deposit it in out-of-country bank accounts.

Skolkovo, a brainchild of Dmitry Medvedev, offers Russian innovators a way to hook up with outside investors and business interests -- perhaps Russia's last great hope to avoid inbred disaster under Putin.

Skolkovo IT Cluster was founded last year as part of a larger initiative to turn Moscow’s Skolkovo suburb into a kind of Russian Silicon Valley. The plan was initiated by Russia’s President Dmitry Medvedev, and the Skolkovo foundation has since won financial and logistical backing from pretty much every U.S. tech heavyweight. Cisco alone has committed to invest $1 billion over 10 years in the region. Part of that money is now used to jumpstart Russian startups. “In order to change things, you have to start doing things,” Gaika told me. _Gigaom

Medvedev's idea to change Russian laws to allow easier involvement with outsiders was a brilliant one. But no one knows what Putin will do if any of the Skolkovo startups grows large enough to be seen as a threat to his autocratic form of state control.

“Companies started in Russia are cheaper (to run) and the quality and talent level is high,” said Alex Gurevich, a partner with Javelin Venture Partners. “Innovation is happening everywhere–not just in the Valley. I want to see what Russia has to offer.”

...“One thing that’s clear is that there’s some amazing technology in Russia. It’s better than what I’ve seen stateside,” said Bill Reichert, managing director of Garage Ventures, who said he also plans to attend the demo day.

Companies presenting run the gamut from Bazelevs Innovations, which makes interactive 3D visualization of scripts for TV and film, to SpeakToIt, which allows smartphone users to better retrieve information with natural language. _WSJ

The three reports excerpted above present some intriguing startup ideas. But will they have the freedom to develop in Russia without interference and extortion from the thuggish regime or the Russian mafia?

Russia needs an infusion of fresh, new blood, and new ideas. Without new approaches, Russia will shrivel and die in stagnant statist autocracy. If not under Putin, then under some other bombastic autocrat. For Russia to live and thrive, something has to give.

Friday, October 07, 2011

Just because we are conscious does not mean we have the smarts to make consciousness ourselves. Whether (or when) AI is possible will ultimately depend on whether we are smart enough to make something smarter than ourselves. We assume that ants have not achieved this level. We also assume that as smart as chimpanzees are, chimps are not smart enough to make a mind smarter than a chimp, and so have not reached this threshold either. While some people assume humans can create a mind smarter than a human mind, humans may be at a level of intelligence that is below that threshold also. We simply don't know where the threshold of bootstrapping intelligence is, nor where we are on this metric. _KevinKelly

Kevin Kelly has created a "Taxonomy of Minds" as a way of classifying different types of minds and what they might be able to do.

Precisely how a mind can be superior to our minds is very difficult to imagine. One way that would help us to imagine what greater intelligences would be like is to begin to create a taxonomy of the variety of minds. This matrix of minds would include animal minds, and machine minds, and possible minds, particularly transhuman minds, like the ones that science fiction writers have come up with.

Imagine we land on an alien planet. How would we describe or measure the level of the intelligences we encounter there -- assuming they are greater than ours? What are the thresholds of superior intelligence? What are the categories of intelligence in animals on earth? _Read the rest...TaxonomyofMinds

The actual development of superior minds is more likely to occur via evolutionary mechanisms than from straightforward design from principle. The adaptive landscape graphic above depicts a small portion of an evolutionary adaptive landscape. Creatures that reach the higher peaks may be capable of greater feats, but may also be more subject to extinction when the environment shifts -- or when the adaptive landscape is enlarged by merging with a previously separate adaptive landscape (building a bridge between islands, tunneling through a mountain chain, digging a canal through an isthmus, or the emergence of an intergalactic wormhole).

Rather than waiting until our minds become capable of creating other minds, it is more likely that humans will create an evolutionary landscape from which a more intelligent mind than human minds might emerge.

Recently, in conversations with George Dyson, I realized there is a fifth type of elementary mind:

5) A mind incapable of designing a greater mind, but capable of creating a platform upon which greater mind emerges.

This type of mind cannot figure out how to birth an intelligence equal to itself, but it does figure out how to set up conditions of evolution so that a new mind emerges from the forces pushing it. _Technium
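The "platform for emergence" idea can be made concrete with a toy evolutionary search. The sketch below is a hypothetical illustration (not anything from Kelly's essay): the "designer" never constructs the solution directly, but only sets up a fitness landscape plus variation, selection, and inheritance -- and a bit-string matching the landscape's peak emerges on its own.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

TARGET = [1] * 32  # the "peak" of a simple fitness landscape

def fitness(genome):
    # Count matching bits: the designer defines the landscape,
    # not the genome that eventually climbs it.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    # Flip each bit independently with small probability (variation).
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover (inheritance from two parents).
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=60, generations=200):
    pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break  # the peak has been reached
        parents = pop[: pop_size // 2]  # truncation selection
        # Elitism: keep the current best; breed the rest from parents.
        pop = [pop[0]] + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - 1)
        ]
    best = max(pop, key=fitness)
    return best

best = evolve()
print(fitness(best))
```

Nothing in `evolve` knows what the answer looks like; it only arranges the conditions under which answers compete. That is the sense in which a "type 5" mind can midwife something it could not design outright.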

This is the approach to AI which Al Fin cognitive scientists have been promoting and utilising. It would be fooling oneself to imagine that it will be easy to evolve a smarter mind. But at least it is not impossible, as most conventional approaches to AI are proving themselves to be. (Conventional AI researchers are attempting quantitative solutions where qualitative solutions apply.)

There is something quite amusing here: the human mind itself can flit among the taxonomy of minds at any given time. Because of how the human brain evolved, and the paths we have taken in development, each one of us contains multitudes. Without a doubt, we all need better training in using our minds.