Andres Agostini is a Research Analyst, Consultant, and Management Practitioner. His subjects of study, practice, and survey are Science, Technology, Corporate Strategy, Business, Management, “Transformative Risk Management”, and Professional Futurology. He has 28 years of professional experience dealing with complex business settings.

Via this Web site, Andres Agostini shares his thoughts, ideas, reflections, and suggestions with total independence of thinking and without mental reservations.

(AATIB) # 1

ANDRES AGOSTINI ON THIS I BELIEVE! (AATIB) # 1 :

Many people often ask about the leadership traits for these times. The great lack of understanding is amusing and sometimes worrisome. There are many features leaders must have. But, now, let’s recall that CHANGE HAS CHANGED ABOVE AND BEYOND DRAMATICALLY….So, if the would-be leader’s ethos is committed to “those good old days” and, from the psychological standpoint, he/she has a psycho-static mind, he/she won’t be able to make it now or ever….If the prospective leader is not psycho-kinetically endowed and self-driven/self-energized to aim strongly at the closest position to capturing the “totality of knowledge” (omniscience), he/she will never make it….WHY? BECAUSE: At interplay there are many driving forces in chaotic, fluid intersections. Complexity is rising beyond and over complexity. Dynamics now run fluid and complex, very complex, throughout many seemingly incompatible contexts….Many of them are extremely subtle and difficult to “sense.” A Century-21 leader cannot successfully inspire anyone into imagined futures if his/her overwhelming objective is the past times….One who thinks that the PAST is a SCRIPT for the FUTURE. One who does not believe that the FUTURE is no longer what it used to be. One who cannot see the FUTURE as anything other than a GIFT from Community, Society, Government, Employers. No way: the FUTURE is now an extremely difficult conquest, one that is progressively practiced and amplified every second, 24/7….History has a plethora of good examples of men and women who were extraordinary in their lifetimes, to whom we must remain grateful. BUT NOW THE GAME HAS CHANGED BEYOND BELIEF AND DOES NOT STOP CHANGING FROM YOCTOSECOND TO YOCTOSECOND (one septillionth [10^-24] of a second). The power of simplicity stays true only when one grasps, to extreme depths/scopes/magnitudes, the POWER OF COMPLEXITY….Mr. Steve Ballmer, Microsoft CEO, once indicated that he did not know of a successful business that did not stem from complex frameworks. Corporations, nations, supra-national organizations and NGOs require much more than “visionaries.” The referenced “game” is, within its instantaneous and erratic cycles, redefining, redefining, and redefining the world. Andres Agostini. Published on November 03, 2007. Arlington, Virginia 22222, USA.

(AATIB) # 2

ANDRES AGOSTINI ON THIS I BELIEVE! (AATIB) # 2 :

The past has stopped being the script of the future. I wonder what actuaries and statisticians are doing about it these days. What is the name of their NEW LINE OF PRACTICE as of today? For centuries at Lloyd’s of London the rule was to project the one-year future period based on the previous three years. Projecting the unprojectable? Just might as well. Unfortunately, in an explosively non-linear world (with the non-linearity in over-discontinuous amplification), this is now beyond impossible. You cannot arrive at a new place TODAY with a medieval map. These times are not what they used to be and never will be. So many driving forces are intersecting, superposing, and conflicting, engendering subtle and not-so-subtle driving forces (for real). Hence, CHANGE-BASED CHANGE will keep quantum-ly mutating and transmutating. It is time to ignore whether or not the World will be redefined: we are already riding many “SWIRLING OCEANS” without paying much attention. Has one seen, everywhere and every time, PERPETUAL NOVELTY? Once and for all, the PAST will never again be THE GUIDING LIGHT TO THE ON-GOING, UPCOMING FUTURE (an extreme process with pervasive sub-processes). The immediate PAST is just a tiny measure of how DRAMATIC change has been for some years now and will always be. CHANGE will enslave you if you do not pay attention and thus get prepared to the LATEST STANDARDS. If CHANGE makes things chaotic, one still has the civil liberty to mindfully discern whether or not to do something about it, literally applying advanced risk management (surviving) or not (not wishing to survive). In the meantime, the FUTURE also holds the promise of better times if you truly want it, that is, if you get ultimately prepared for the omnimode-ly “unthinkable.” FUTURE equates to PROGRESS. The FUTURE has never been anyone’s entitlement, but a PERPETUAL REAL-TIME CONQUEST (yoctosecond to yoctosecond, one septillionth [10^-24] of a second). Now the CONQUEST of the FUTURE to REDEFINE the as-of-now PRESENT is the MAXIMUM CHALLENGE POSED TO HUMANKIND. Some say that this epoch is for those endowed with psychokinetic minds, those who embrace the future, change, chaos, and risks unconditionally and enthusiastically. I just might agree with this statement, if I may exercise my freedom of speech. What will the educated and empowered, but psycho-stagnated, mind do?

For the “crying” one, everything has changed. What has changed: (i) CHANGE, (ii) Time, (iii) Politics/Geopolitics, (iv) Science and technology (applied), (v) Economy, (vi) Environment (in the amplest meaning), (vii) Zeitgeist (spirit of the times), (viii) Weltanschauung (conception of the world), (ix) the Zeitgeist-Weltanschauung’s prolific interaction, etc. So there is no need to worry, since NOW, and every day forever (kind of...), there will be a different world, clearly so if one looks into the sub-atomic granularity of (zillions of) details. Unless you are a historian, there is no need to speak of PAST, PRESENT, FUTURE; JUST TALK ABOUT THE ENDLESSLY PERENNIAL PROGRESSION. Let’s learn a difficult lesson easily, NOW. Published on November 05, 2007. Arlington, Virginia 22222, USA.

Andres Agostini

AndresAgostini@gmail.com

www.AndresAgostini.blogspot.com

Arlington, VA 22222, USA


NOTE:

This is the “Andres Agostini on This I Believe! (AATIB)” official site.


E-mail Andy...

AgosDres@yahoo.com


Objective!

To disseminate new ideas, hypotheses, theses, original thinking, and new proposals to reinvent theory pertaining to Strategy, Innovation, Performance, and Risk (of all kinds), via Scientific and Highly-Sophisticated Management, in accordance with the perspective of applied omniscience (the perspective of the totality of knowledge). Put simply, to research and analyze new ways to optimize best practices to an optimum degree.


Andy on The Science Statement…

The American Heritage® Dictionary of the English Language, Fourth Edition, defines “science”, in part, as: “…THE OBSERVATION, IDENTIFICATION, DESCRIPTION, EXPERIMENTAL INVESTIGATION, and theoretical explanation of phenomena…Such activities restricted to a class of natural phenomena…SUCH ACTIVITIES APPLIED TO AN OBJECT OF INQUIRY OR STUDY… METHODOLOGICAL ACTIVITY, DISCIPLINE, OR STUDY…AN ACTIVITY THAT APPEARS TO REQUIRE STUDY AND METHOD…KNOWLEDGE, ESPECIALLY THAT GAINED THROUGH EXPERIENCE….”

Although I do not have a diploma with which to claim to be a scientist, I must state that the upper-cased phrases in the above definition do apply to me.

I have been surrounded all my life by some of the most challenging entrepreneurs in the world. I have been lucky. Many of them are from the U.S., U.K., Japan, Canada, Spain, Brazil, the European Union, etc.

Since 1996, I have had mentors, tutors, supervisors, and colleagues from the hardest core of the scientific arena. I have been blessed. I have a thirst for scientific knowledge beyond the boldest dreams. And I will marshal, on the double, all my way to capture more and more of the avant-garde state of the art, at any cost and forever.

Fine arts are one way to scan around for knowledge. Science (and everyone is a scientist, documented or undocumented) is another way to capture knowledge, skills, competencies, insights, etc.

I respect all occupations and professions, especially those of consummated scientists. Who knows? Someday I may tender a little gift to humankind, out of my utmost stubborn, recursive, forever search.

In the meantime, my in-depth research, analyses, consultancy, e-publishing, and blogging will carry on with a Davincian mind and an Einsteinian, à la “gedanken,” brain, that is, if I may. Yes, I will, and without fail.




WHO IS ANDY AGOSTINI?

“Put simply, an inspired, determined soul, with an audacious style of ingrained womb-to-tomb thinking from the monarchy of originality, who starvingly seeks and seeks and seeks, in real time, the yet unimagined futures in diverse ways, contexts, and approaches, originated in the FUTURE. A knowledge-based, pervasively rebellious, ‘type A Prima Donna’, born out of extraterrestrial protoplasm, who is on a rampant mission to (cross-)research science (the state of the art from the avant-garde) progressively, to envision, and to capture a breakthrough foresight of what is/what might be/what should be, still to come, while he marshals his ever-practicing, inquisitive, future-driven scenarios, via his Lines of Practice and from the intertwined, intersected, chaotically frenzied stances that combine both subtlety and brute force with the until-now overwhelmingly unthinkable.”

AGOSTINI HOME WEBSITE ....

It's about ...

www.AndyBelieves.blogspot.com

A Singularitarian into Original Thinking...

Andres Agostini - Arlington, Virginia, USA

Andy's Comment to an E-Survey by BBC World:

We are living in extreme times. As a Global Risk Manager and Scenario Strategist, I know we have the technology and science to solve many existential risks. The problem is that the world is over-populated by, as it seems, a majority of psycho-stable people. For the immeasurable challenges we need to face and act upon, we will require a majority of extremely educated (in the exact sciences) people who are psycho-kinetically minded: people who have an unlimited drive to do things optimally, who are visionaries, and who will go all the way to make peace universal and so ensure the best maintenance of ecology. One life-or-death risk is nuclear war. There are too many alleged statesmen willing to pull the switch to quench their mediocre egos. If we can manage the existential risks (including the ruthless progression of science and technology) systemically, systematically, and holistically, the world (including some extra-Earth stations) will be a promising place. The powers and the superpowers must all “pull” in unison to mitigate/eliminate these extraordinarily grave risks.


Napoleon Bonaparte on Education/Formation....

NAPOLEON ON EDUCATION:

(Transcribed literally. Brackets placed by Andres Agostini; content researched by Andres Agostini.)

“….Education, strictly speaking, has several objectives: one needs to learn how to speak and write correctly, which is generally called grammar and belles lettres [fine literature of that time]. Each lyceum [high school] has provided for this object, and there is no well-educated man who has not learned his rhetoric.

After the need to speak and write correctly [accurately and unambiguously] comes the ability to count and measure [skillful at mathematics, physics, quantum mechanics, etc.]. The lyceums have provided this with classes in mathematics embracing arithmetical and mechanical knowledge [classic physics plus quantum mechanics] in their different branches.

The elements of several other fields come next: chronology [timing, tempo, in-flux epochs], geography [geopolitics plus geology plus atmospheric weather], and the rudiments of history are also a part of the education [sine qua non catalyzer to surf the Intensively-driven Knowledge Economy] of the lyceum. . . .

A young man [a starting, independent entrepreneur] who leaves the lyceum at sixteen years of age therefore knows not only the mechanics of his language and the classical authors [captain of the classic, great wars plus those into philosophy and theology], the divisions of discourse [the structure of documented oral presentations], the different figures of eloquence, the means of employing them either to calm or to arouse passions, in short, everything that one learns in a course on belles lettres.

He also would know the principal epochs of history, the basic geographical divisions, and how to compute and measure [dexterity with information technology, informatics, and telematics]. He has some general idea of the most striking natural phenomena [ambiguity, ambivalence, paradoxes, contradictions, paradigm shifts, predicaments, perpetual innovation, and so forth] and the principles of equilibrium and movement [corporate strategy and risk-managing of kinetic energy transformation pertaining to the physical world] with regard to both solids and fluids.

Whether he desires to follow the career of the barrister, that of the sword [actual, scientific war waging in the frame of reference of work competition], OR ENGLISH [CENTURY-21 LINGUA FRANCA, MORE-THAN-VITAL TOOL TO ACCESS BASIC THROUGH COMPLEX SCIENCE], or letters; if he is destined to enter into the body of scholars [truest womb-to-tomb managers, pundits, experts, specialists, generalists], to be a geographer, engineer, or land surveyor—in all these cases he has received a general education [strongly dexterous of two to three established disciplines plus a background of a multitude of diverse disciplines from the exact sciences, social sciences, etc.] necessary to become equipped [talented] to receive the remainder of instruction [duly, on-going-ly indoctrinated to meet the thinkable and unthinkable challenges/responsibilities beyond his boldest imagination, indeed] that his [forever-changing, increasingly so] circumstances require, and it is at this moment [of extreme criticality for humankind survival], when he must make his choice of a profession, that the special studies [omnimode, applied with the real-time perspective of the totality of knowledge] of science present themselves.

If he wishes to devote himself to the military art, engineering, or artillery, he enters a special school of mathematics [quantum information sciences], the polytechnique. What he learns there is only the corollary of what he has learned in elementary mathematics, but the knowledge acquired in these studies must be developed and applied before he enters the different branches of abstract mathematics. No longer is it a question simply of education [and the mind’s due formation/shaping], as in the lyceum: NOW IT BECOMES A MATTER OF ACQUIRING A SCIENCE....”

END OF TRANSCRIPTION.

On "Artificial Intelligence" - As follows:

"AI" redirects here. For other uses of "AI" and "Artificial intelligence", see Ai (disambiguation).

Garry Kasparov playing against Deep Blue, the first machine to win a chess match against a reigning world champion.

Major AI textbooks define artificial intelligence as "the study and design of intelligent agents,"[1] where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.[2] AI can be seen as a realization of an abstract intelligent agent (AIA) which exhibits the functional essence of intelligence.[3] John McCarthy, who coined the term in 1956,[4] defines it as "the science and engineering of making intelligent machines."[5]
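To make that textbook definition concrete, here is a minimal Python sketch of the perceive-then-act loop it describes. The thermostat-like environment, the percept, and the expected-success model are my own illustrative assumptions, not part of the article.

```python
# Minimal sketch of an "intelligent agent": perceive the environment, then
# choose the action with the highest expected chance of success.
# All names and numbers here are hypothetical.

def perceive(environment):
    """Return the agent's (possibly partial) view of the environment."""
    return environment["temperature"]

def expected_success(percept, action):
    # Toy model: a thermostat-like agent prefers heating when it is cold.
    if percept < 18:
        return 1.0 if action == "heat" else 0.0
    return 1.0 if action == "idle" else 0.0

def choose_action(percept, actions, utility):
    """Pick the action whose expected success, given the percept, is highest."""
    return max(actions, key=lambda a: utility(percept, a))

environment = {"temperature": 15}
actions = ["heat", "idle"]
print(choose_action(perceive(environment), actions, expected_success))  # -> "heat"
```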

In modern fiction, beginning with Mary Shelley's classic Frankenstein, writers have explored the ethical issues presented by thinking machines.[21] If a machine can be created that has intelligence, can it also feel? If it can feel, does it have the same rights as a human being? This is a key issue in Frankenstein as well as in modern science fiction: for example, the film Artificial Intelligence: A.I. considers a machine in the form of a small boy which has been given the ability to feel human emotions, including, tragically, the capacity to suffer. This issue is also being considered by futurists, such as California's Institute for the Future under the name "robot rights",[22] although many critics believe that the discussion is premature.[23][24]

Futurists estimate the capabilities of machines using Moore's Law, which measures the relentless exponential improvement in digital technology with uncanny accuracy. Ray Kurzweil has calculated that desktop computers will have the same processing power as human brains by the year 2029, and that by 2040 artificial intelligence will reach a point where it is able to improve itself at a rate that far exceeds anything conceivable in the past, a scenario that science fiction writer Vernor Vinge named the "technological singularity".[28]

In the middle of the 20th century, a handful of scientists began a new approach to building intelligent machines, based on recent discoveries in neurology, a new mathematical theory of information, an understanding of control and stability called cybernetics, and above all, by the invention of the digital computer, a machine based on the abstract essence of mathematical reasoning.[30]

The field of modern AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956.[31] Those who attended would become the leaders of AI research for many decades, especially John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, who founded AI laboratories at MIT, CMU and Stanford. They and their students wrote programs that were, to most people, simply astonishing:[32] computers were solving word problems in algebra, proving logical theorems and speaking English.[33] By the middle 60s their research was heavily funded by the U.S. Department of Defense[34] and they were optimistic about the future of the new field:

1965, H. A. Simon: "[M]achines will be capable, within twenty years, of doing any work a man can do"[35]

These predictions, and many like them, would not come true. They had failed to recognize the difficulty of some of the problems they faced.[37] In 1974, in response to the criticism of England's Sir James Lighthill and ongoing pressure from Congress to fund more productive projects, DARPA cut off all undirected, exploratory research in AI. This was the first AI Winter.[38]

In the early 80s, AI research was revived by the commercial success of expert systems, which apply the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached more than a billion dollars.[39] Minsky and others warned the community that enthusiasm for AI had spiraled out of control and that disappointment was sure to follow.[40] Beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, more lasting AI Winter began.[41]

In the 90s and early 21st century AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence was adopted throughout the technology industry, providing the heavy lifting for logistics, data mining, medical diagnosis and many other areas.[42] The success was due to several factors: the incredible power of computers today (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.[43]

The Dartmouth proposal: Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it. This assertion was printed in the program for the Dartmouth Conference of 1956, and represents the position of most working AI researchers.[47]

Newell and Simon's physical symbol system hypothesis: A physical symbol system has the necessary and sufficient means of general intelligent action. This statement claims that the essence of intelligence is symbol manipulation.[48] Hubert Dreyfus argued that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation and on having a "feel" for the situation rather than explicit symbolic knowledge.[49]

Searle's "strong AI position": A physical symbol system can have a mind and mental states. Searle counters this assertion with his Chinese room argument, which asks us to look inside the computer and try to find where the "mind" might be.[51]

Early AI researchers developed algorithms that imitated the process of conscious, step-by-step reasoning that human beings use when they solve puzzles, play board games, or make logical deductions.[53] By the late 80s and 90s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[54]

For difficult problems, most of these algorithms can require enormous computational resources — most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem solving algorithms is a high priority for AI research.[55]
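A quick back-of-the-envelope sketch shows why this "combinatorial explosion" bites so quickly; the branching factor and depths below are arbitrary illustrative numbers, not figures from the text.

```python
# Combinatorial explosion in exhaustive search:
# the number of leaves grows as branching_factor ** depth.
branching_factor = 10   # hypothetical legal moves per state
for depth in (5, 10, 15, 20):
    print(f"depth {depth:2d}: {branching_factor ** depth:,} nodes")
# Even at modest depths the node count becomes astronomical, which is why
# a naive search is too slow or never completes.
```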

It is not clear, however, that conscious human reasoning is any more efficient when faced with a difficult abstract problem. Cognitive scientists have demonstrated that human beings solve most of their problems using unconscious reasoning, rather than the conscious, step-by-step deduction that early AI research was able to model.[56] Embodied cognitive science argues that unconscious sensorimotor skills are essential to our problem solving abilities. It is hoped that sub-symbolic methods, like computational intelligence and situated AI, will be able to model these instinctive skills. The problem of unconscious problem solving, which forms part of our commonsense reasoning, is largely unsolved.

Knowledge representation[57] and knowledge engineering[58] are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects;[59] situations, events, states and time;[60] causes and effects;[61] knowledge about knowledge (what we know about what other people know);[62] and many other, less well researched domains. A complete representation of "what exists" is an ontology[63] (borrowing a word from traditional philosophy), of which the most general are called upper ontologies.

Among the most difficult problems in knowledge representation are:

Default reasoning and the qualification problem: Many of the things people know take the form of "working assumptions." For example, if a bird comes up in conversation, people typically picture an animal that is fist sized, sings, and flies. None of these things are true about birds in general. John McCarthy identified this problem in 1969[64] as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.[65]

Unconscious knowledge: Much of what people know isn't represented as "facts" or "statements" that they could actually say out loud. They take the form of intuitions or tendencies and are represented in the brain unconsciously and sub-symbolically. This unconscious knowledge informs, supports and provides a context for our conscious knowledge. As with the related problem of unconscious reasoning, it is hoped that situated AI or computational intelligence will provide ways to represent this kind of knowledge.

The breadth of common sense knowledge: The number of atomic facts that the average person knows is astronomical. Research projects that attempt to build a complete knowledge base of commonsense knowledge, such as Cyc, require enormous amounts of tedious step-by-step ontological engineering — they must be built, by hand, one complicated concept at a time.[66]
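As a toy illustration of the default reasoning and qualification problem described in the first item above, here is a small Python sketch; the "birds fly" rule and the penguin/ostrich exceptions are my own example, not from the article.

```python
# Default reasoning: a commonsense rule ("birds fly") holds by default, but
# every known exception must be represented explicitly. The qualification
# problem is that the list of exceptions never really ends.

defaults = {"bird": {"can_fly": True, "sings": True}}
exceptions = {
    "penguin": {"can_fly": False},
    "ostrich": {"can_fly": False},
    # ... and caged birds, injured birds, newly hatched birds, ...
}

def infer(kind, category="bird"):
    facts = dict(defaults[category])        # start from the working assumption
    facts.update(exceptions.get(kind, {}))  # override with known exceptions
    return facts

print(infer("sparrow"))   # {'can_fly': True, 'sings': True}
print(infer("penguin"))   # {'can_fly': False, 'sings': True}
```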

Intelligent agents must be able to set goals and achieve them.[67] They need a way to visualize the future: they must have a representation of the state of the world and be able to make predictions about how their actions will change it. They must also attempt to determine the utility or "value" of the choices available to them.[68]

In some planning problems, the agent can assume that it is the only thing acting on the world and it can be certain what the consequences of its actions may be.[69] However, if this is not true, it must periodically check whether the world matches its predictions and it must change its plan as this becomes necessary, requiring the agent to reason under uncertainty.[70]

Unsupervised learning: find a model that matches a stream of input "experiences", and be able to predict what new "experiences" to expect.

Supervised learning, such as classification (be able to determine what category something belongs in, after seeing a number of examples of things from each category), or regression (given a set of numerical input/output examples, discover a continuous function that would generate the outputs from the inputs).
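Two minimal Python sketches of the supervised tasks just named, classification and regression; the data points are invented for illustration only.

```python
# Classification: assign a category after seeing labelled examples
# (here, a 1-nearest-neighbour rule on made-up 2-D points).
examples = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"), ((5.0, 4.8), "large")]

def classify(x):
    return min(examples, key=lambda e: sum((a - b) ** 2 for a, b in zip(x, e[0])))[1]

print(classify((1.1, 0.9)))   # -> "small"

# Regression: from numerical input/output pairs, recover a continuous function
# (here a least-squares straight line y = a*x + b).
xs, ys = [0, 1, 2, 3], [1.1, 2.9, 5.2, 6.8]
n = len(xs)
a = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (n * sum(x * x for x in xs) - sum(xs) ** 2)
b = (sum(ys) - a * sum(xs)) / n
print(round(a, 2), round(b, 2))   # approximately 1.94 and 1.09
```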

Natural language processing[74] gives machines the ability to read and understand the languages human beings speak. Many researchers hope that a sufficiently powerful natural language processing system would be able to acquire knowledge on its own, by reading the existing text available over the internet. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.[75]
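As one crude instance of the information-retrieval application mentioned above, here is a toy bag-of-words ranking in Python; the documents and query are invented, and real systems would weight terms (for example with TF-IDF) rather than just counting overlaps.

```python
# Toy information retrieval: rank documents by how many query terms they share.
docs = {
    "d1": "machine translation converts text from one language to another",
    "d2": "expert systems encode the knowledge of human experts",
    "d3": "text mining extracts structure from natural language text",
}

def score(query, text):
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t)   # number of shared terms

query = "natural language text"
ranked = sorted(docs, key=lambda d: score(query, docs[d]), reverse=True)
print(ranked)   # d3 ranks first for this query
```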

Emotion and social skills play two roles for an intelligent agent:[83]

It must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory, decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.)

For good human-computer interaction, an intelligent machine also needs to display emotions — at the very least it must appear polite and sensitive to the humans it interacts with. At best, it should appear to have normal emotions itself.

Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them.[7] A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.

Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow the author's argument (reason), know what it's talking about (knowledge), and faithfully reproduce the author's intention (social intelligence). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.[84]

There are as many approaches to AI as there are AI researchers—any coarse categorization is likely to be unfair to someone. Artificial intelligence communities have grown up around particular problems, institutions and researchers, as well as the theoretical insights that define the approaches described below. Artificial intelligence is a young science and is still a fragmented collection of subfields. At present, there is no established unifying theory that links the subfields into a coherent whole.

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: CMU, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI "good old fashioned AI" or "GOFAI".[86]

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications. This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[92] The knowledge revolution was also driven by the realization that truly enormous amounts of knowledge would be required by many simple AI applications.

During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.[93] By the 1980s, however, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.[94]

Bottom-up, situated, behavior based or nouvelle AI

Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focussed on the basic engineering problems that would allow robots to move and survive.[95] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 50s and reintroduced the use of control theory in AI. These approaches are also conceptually related to the embodied mind thesis.

In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI's recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Russell & Norvig (2003) describe this movement as nothing less than a "revolution" and "the victory of the neats."[98]

An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. The most complicated intelligent agents would be rational, thinking human beings.[100]

The paradigm gives researchers license to study isolated problems and find solutions that are both verifiable and useful, without agreeing on one single approach. An agent that solves a specific problem can use any approach that works — some agents are symbolic and logical, some are sub-symbolic neural networks and some can be based on new approaches (without forcing researchers to reject old approaches that have proven useful). The paradigm gives researchers a common language to describe problems and share their solutions with each other and with other fields—such as decision theory—that also use concepts of abstract agents.

In the course of 50 years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

"Uninformed" search algorithms eventually search through every possible answer until they locate their goal.[109] Naive algorithms quickly run into problems when they expand the size of their search space to astronomical numbers. The result is a search that is too slow or never completes.

Heuristic or "informed" searches use heuristic methods to eliminate choices that are unlikely to lead to their goal, thus drastically reducing the number of possibilities they must explore.[110]

Fuzzy logic, a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems.[120]
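A tiny Python sketch of what "truth between 0 and 1" means in practice; the min/max/complement operators shown are one common choice of fuzzy AND/OR/NOT, and the example degrees are invented.

```python
# Fuzzy truth values: statements hold to a degree in [0, 1].
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

hot = 0.7      # "the room is hot" is 0.7 true
humid = 0.4    # "the room is humid" is 0.4 true
print(f_and(hot, humid), f_or(hot, humid), round(f_not(hot), 1))   # 0.4 0.7 0.3
```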

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. Starting in the late 80s and early 90s, Judea Pearl and others championed the use of methods drawn from probability theory and economics to devise a number of powerful tools to solve these problems.[121]
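A small worked example of this probabilistic style of reasoning, using Bayes' rule; the disease/test numbers are invented purely for illustration.

```python
# Reasoning with uncertain information via Bayes' rule:
# P(disease | positive) = P(positive | disease) * P(disease) / P(positive).
p_disease = 0.01            # prior probability of the disease
p_pos_given_disease = 0.95  # test sensitivity
p_pos_given_healthy = 0.05  # false-positive rate

p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))   # about 0.161 -- still fairly unlikely
```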

The simplest AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers do however also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems.

Classifiers[133] are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set.

When a new observation is received, that observation is classified based on previous experience. A classifier can be trained in various ways; there are many statistical and machine learning approaches.

A wide range of classifiers are available, each with its strengths and weaknesses. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the "no free lunch" theorem. Various empirical tests have been performed to compare classifier performance and to find the characteristics of data that determine classifier performance. Determining a suitable classifier for a given problem is however still more an art than science.
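A compact Python sketch of the workflow described in the last few paragraphs: a labelled data set of observations, a "trained" model, and classification of a new observation by closest match. A nearest-centroid rule stands in for the many possible classifiers, and the data are invented.

```python
# Toy classifier: learn the mean observation per class, then label a new
# observation by whichever class centroid is nearest.
data_set = [
    ((2.0, 1.8), "diamond"),
    ((2.2, 2.1), "diamond"),
    ((7.9, 8.1), "rock"),
    ((8.2, 7.7), "rock"),
]

def train(data):
    groups = {}
    for x, label in data:
        groups.setdefault(label, []).append(x)
    # centroid = component-wise mean of the observations in each class
    return {label: tuple(sum(v) / len(v) for v in zip(*xs)) for label, xs in groups.items()}

def classify(model, x):
    return min(model, key=lambda label: sum((a - b) ** 2 for a, b in zip(x, model[label])))

model = train(data_set)
print(classify(model, (2.5, 2.0)))   # -> "diamond"
```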

How can one determine if an agent is intelligent? In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.

Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, hand-writing recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.

The broad classes of outcome for an AI test are:

optimal: it is not possible to perform better

strong super-human: performs better than all humans

super-human: performs better than most humans

sub-human: performs worse than most humans

For example, performance at checkers is optimal[151], performance at chess is super-human and nearing strong super-human[152], performance at Go is sub-human[153], and performance at many everyday tasks performed by humans is sub-human.

There are a number of competitions and prizes to promote research in artificial intelligence. The main areas promoted are: general machine intelligence, conversational behaviour, data-mining, driverless cars, robot soccer and games.

Artificial intelligence has successfully been used in a wide range of fields including medical diagnosis, stock trading, robot control, law, scientific discovery and toys. Frequently, when a technique reaches mainstream use it is no longer considered artificial intelligence, sometimes described as the AI effect.[154]


^Searle 1980. See also Russell & Norvig (2003, p. 947): "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis," although Searle's arguments, such as the Chinese Room, apply only to physical symbol systems, not to machines in general (he would consider the brain a machine). Also, notice that the positions as Searle states them don't make any commitment to how much intelligence the system has: it is one thing to say a machine can act intelligently, it is another to say it can act as intelligently as a human being.

.."problem solving" is largely, perhaps entirely, a matter of appropriate selection. Take, for instance, any popular book of problems and puzzles. Almost every one can be reduced to the form: out of a certain set, indicate one element. ... It is, in fact, difficult to think of a problem, either playful or serious, that does not ultimately require an appropriate selection as necessary and sufficient for its solution. It is also clear that many of the tests used for measuring "intelligence" are scored essentially according to the candidate's power of appropriate selection. ... Thus it is not impossible that what is commonly referred to as "intellectual power" may be equivalent to "power of appropriate selection". Indeed, if a talking Black Box were to show high power of appropriate selection in such matters — so that, when given difficult problems it persistently gave correct answers — we could hardly deny that it was showing the 'behavioral' equivalent of "high intelligence". If this is so, and as we know that power of selection can be amplified, it seems to follow that intellectual power, like physical power, can be amplified. Let no one say that it cannot be done, for the gene-patterns do it every time they form a brain that grows up to be something better than the gene-pattern could have specified in detail. What is new is that we can now do it synthetically, consciously, deliberately.

"Man-Computer Symbiosis" is a key speculative paper published in 1960 by psychologist/computer scientistJ.C.R. Licklider, which envisions that mutually-interdependent, "living together", tightly-coupled human brains and computing machines would prove to complement each other's strengths to a high degree:

"Man-computer symbiosis is a subclass of man-machine systems. There are many man-machine systems. At present, however, there are no man-computer symbioses. The purposes of this paper are to present the concept and, hopefully, to foster the development of man-computer symbiosis by analyzing some problems of interaction between men and computing machines, calling attention to applicable principles of man-machine engineering, and pointing out a few questions to which research answers are needed. The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today."

In Licklider's vision, many of the pure artificial intelligence systems envisioned at the time by over-optimistic researchers would prove unnecessary. (This paper is also seen by some historians as marking the genesis of ideas about computer networks which later blossomed into the Internet).

Licklider's research was similar in spirit to that of his DARPA contemporary and protégé Douglas Engelbart; both had a view of how computers could be used that was at odds with the then-prevalent views (which saw them as devices principally useful for computation), and both were key proponents of the way in which computers are now used (as generic adjuncts to humans).

Engelbart reasoned that the state of our current technology controls our ability to manipulate information, and that fact in turn will control our ability to develop new, improved technologies. He thus set himself to the revolutionary task of developing computer-based technologies for manipulating information directly, and also to improve individual and group processes for knowledge-work. Engelbart's philosophy and research agenda is most clearly and directly expressed in the 1962 research report which Engelbart refers to as his 'bible': Augmenting Human Intellect: A Conceptual Framework. The concept of network augmented intelligence is attributed to Engelbart based on this pioneering work.

"Increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insolvable. And by complex situations we include the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers--whether the problem situation exists for twenty minutes or twenty years. We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human feel for a situation usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids."

Waldrop, M. Mitchell, The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal, Viking Press, New York, NY, 2001. Licklider's biography, contains discussion of the importance of this paper.

Biotechnology:

""Biotechnology" means any technological application that uses biological systems, living organisms, or derivatives thereof, to make or modify products or processes for specific use."

Biotechnology is often used to refer to the genetic engineering technology of the 21st century; however, the term encompasses a wider range and history of procedures for modifying biological organisms according to the needs of humanity, going back to the initial modifications of native plants into improved food crops through artificial selection and hybridization. Bioengineering is the science upon which all biotechnological applications are based. With the development of new approaches and modern techniques, traditional biotechnology industries are also acquiring new horizons, enabling them to improve the quality of their products and increase the productivity of their systems.

Before 1971, the term "biotechnology" was primarily used in the food processing and agriculture industries. Since the 1970s, it has been used by the Western scientific establishment to refer to laboratory-based techniques being developed in biological research, such as recombinant DNA or tissue culture-based processes, or horizontal gene transfer in living plants, using vectors such as the Agrobacterium bacteria to transfer DNA into a host organism. In fact, the term should be used in a much broader sense to describe the whole range of methods, both ancient and modern, used to manipulate organic materials to meet the demands of food production. So the term could be defined as, "The application of indigenous and/or scientific knowledge to the management of (parts of) microorganisms, or of cells and tissues of higher organisms, so that these supply goods and services of use to the food industry and its consumers."[2]

The most practical use of biotechnology, which is still present today, is the cultivation of plants to produce food suitable for humans. Agriculture has been theorized to have become the dominant way of producing food since the Neolithic Revolution. The processes and methods of agriculture have been refined by other mechanical and biological sciences since its inception. Through early biotechnology, farmers were able to select the best-suited and highest-yield crops to produce enough food to support a growing population. Other uses of biotechnology were required as crops and fields became increasingly large and difficult to maintain. Specific organisms and organism byproducts were used to fertilize, restore nitrogen, and control pests. Throughout the use of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants--one of the first forms of biotechnology. Cultures such as those in Mesopotamia, Egypt, and Iran developed the process of brewing beer. It is still done by the same basic method of using malted grains (containing enzymes) to convert starch from grains into sugar and then adding specific yeasts to produce beer. In this process the carbohydrates in the grains were broken down into alcohols such as ethanol. Later other cultures developed the process of lactic acid fermentation, which allowed the fermentation and preservation of other forms of food. Fermentation was also used in this time period to produce leavened bread. Although the process of fermentation was not fully understood until Louis Pasteur's work in 1857, it is still the first use of biotechnology to convert a food source into another form.

Combinations of plants and other organisms were used as medications in many early civilizations. Since as early as 200 BC, people began to use disabled or minute amounts of infectious agents to immunize themselves against infections. These and similar processes have been refined in modern medicine and have led to many developments such as antibiotics, vaccines, and other methods of fighting sickness.

Biotechnology has applications in four major industrial areas, including health care (medical), crop production and agriculture, non food (industrial) uses of crops and other products (e.g. biodegradable plastics, vegetable oil, biofuels), and environmental uses.

For example, one application of biotechnology is the directed use of organisms for the manufacture of organic products (examples include beer and milk products). Another example is using naturally present bacteria by the mining industry in bioleaching. Biotechnology is also used to recycle, treat waste, clean up sites contaminated by industrial activities (bioremediation), and also to produce biological weapons.

A series of derived terms have been coined to identify several branches of biotechnology, for example:

Green biotechnology is biotechnology applied to agricultural processes. An example would be the selection and domestication of plants via micropropagation. Another example is the designing of transgenic plants to grow under specific environmental conditions or in the presence (or absence) of certain agricultural chemicals. One hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. An example of this is the engineering of a plant to express a pesticide, thereby eliminating the need for external application of pesticides. An example of this would be Bt corn. Whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate.

White biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. An example is the designing of an organism to produce a useful chemical. Another example is the use of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous/polluting chemicals (examples using oxidoreductases are given in Feng Xu (2005) "Applications of oxidoreductases: Recent progress" Ind. Biotechnol. 1, 38-50 [1]). White biotechnology tends to consume fewer resources than traditional processes used to produce industrial goods.

Blue biotechnology is a term that has been used to describe the marine and aquatic applications of biotechnology, but its use is relatively rare.

The investments and economic output of all of these types of applied biotechnologies form what has been described as the bioeconomy.

Bioinformatics is an interdisciplinary field which addresses biological problems using computational techniques, and makes the rapid organization and analysis of biological data possible. The field may also be referred to as computational biology, and can be defined as, "conceptualizing biology in terms of molecules and then applying informatics techniques to understand and organize the information associated with these molecules, on a large scale."[5] Bioinformatics plays a key role in various areas, such as functional genomics, structural genomics, and proteomics, and forms a key component in the biotechnology and pharmaceutical sector.
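As a minimal illustration of the computational side of bioinformatics, here are two of the simplest routine sequence calculations in Python; the DNA sequence is made up for the example.

```python
# Toy bioinformatics: GC content and reverse complement of a DNA sequence.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def reverse_complement(seq):
    return "".join(COMPLEMENT[base] for base in reversed(seq))

seq = "ATGGCGTACGCTTAG"   # hypothetical example sequence
print(round(gc_content(seq), 2))   # 0.53
print(reverse_complement(seq))     # CTAAGCGTACGCCAT
```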

Pharmacogenomics is the study of how the genetic inheritance of an individual affects his/her body’s response to drugs. It is a coined word derived from the words “pharmacology” and “genomics”. It is hence the study of the relationship between pharmaceuticals and genetics. The vision of pharmacogenomics is to be able to design and produce drugs that are adapted to each person’s genetic makeup.[6]

1. Development of tailor-made medicines. Using pharmacogenomics, pharmaceutical companies can create drugs based on the proteins, enzymes and RNA molecules that are associated with specific genes and diseases. These tailor-made drugs promise not only to maximize therapeutic effects but also to decrease damage to nearby healthy cells.

2. More accurate methods of determining appropriate drug dosages. Knowing a patient’s genetics will enable doctors to determine how well his/ her body can process and metabolize a medicine. This will maximize the value of the medicine and decrease the likelihood of overdose.

3. Improvements in the drug discovery and approval process. The discovery of potential therapies will be made easier using genome targets. Genes have been associated with numerous diseases and disorders. With modern biotechnology, these genes can be used as targets for the development of effective new therapies, which could significantly shorten the drug discovery process.

4. Better vaccines. Safer vaccines can be designed and produced by organisms transformed by means of genetic engineering. These vaccines will elicit the immune response without the attendant risks of infection. They will be inexpensive, stable, easy to store, and capable of being engineered to carry several strains of pathogen at once.

Computer-generated image of insulin hexamers highlighting the threefold symmetry, the zinc ions holding it together, and the histidine residues involved in zinc binding.

Most traditional pharmaceutical drugs are relatively simple molecules that have been found primarily through trial and error to treat the symptoms of a disease or illness. Biopharmaceuticals are large biological molecules known as proteins and these usually (but not always, as is the case with using insulin to treat type 1 diabetes mellitus) target the underlying mechanisms and pathways of a malady; it is a relatively young industry. They can deal with targets in humans that may not be accessible with traditional medicines. A patient typically is dosed with a small molecule via a tablet while a large molecule is typically injected.

Small molecules are manufactured by chemistry but large molecules are created by living cells such as those found in the human body: for example, bacteria cells, yeast cells, animal or plant cells.

Biotechnology is also commonly associated with landmark breakthroughs in new medical therapies to treat hepatitis B, hepatitis C, cancers, arthritis, haemophilia, bone fractures, multiple sclerosis, and cardiovascular disorders. The biotechnology industry has also been instrumental in developing molecular diagnostic devices that can be used to define the target patient population for a given biopharmaceutical. Herceptin, for example, was the first drug approved for use with a matching diagnostic test and is used to treat breast cancer in women whose cancer cells express the protein HER2.

Modern biotechnology can be used to manufacture existing medicines relatively easily and cheaply. The first genetically engineered products were medicines designed to treat human diseases. To cite one example, in 1978 Genentech developed synthetic humanized insulin by joining its gene with a plasmid vector inserted into the bacterium Escherichia coli. Insulin, widely used for the treatment of diabetes, was previously extracted from the pancreas of cattle and/or pigs. The resulting genetically engineered bacterium enabled the production of vast quantities of synthetic human insulin at low cost.[7]

Since then modern biotechnology has made it possible to produce more easily and cheaply human growth hormone, clotting factors for hemophiliacs, fertility drugs, erythropoietin and other drugs.[8] Most drugs today are based on about 500 molecular targets. Genomic knowledge of the genes involved in diseases, disease pathways, and drug-response sites are expected to lead to the discovery of thousands more new targets.[8]

There are two major types of gene tests. In the first type, a researcher may design short pieces of DNA ("probes") whose sequences are complementary to the mutated sequences. These probes will seek their complement among the base pairs of an individual's genome. If the mutated sequence is present in the patient's genome, the probe will bind to it and flag the mutation. In the second type, a researcher may conduct the gene test by comparing the sequence of DNA bases in a patient's gene to the corresponding sequence in healthy individuals or their progeny.
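A toy Python sketch of the first kind of test described above: a probe is built as the complement of a known mutated sequence, and we check whether the probe's target occurs in a patient's sequence. All sequences are invented, and real assays of course involve strand orientation and hybridization chemistry that this sketch ignores.

```python
# Toy probe-based gene test: does the probe's complementary target occur
# in the patient's DNA? If so, the probe would bind and "flag" the mutation.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(seq):
    return "".join(COMPLEMENT[b] for b in seq)

mutated_sequence = "GATTTACA"            # hypothetical disease-linked sequence
probe = complement(mutated_sequence)     # probe designed to pair with it

patient_dna = "CCGGATTTACATTGC"          # hypothetical patient sequence
flagged = complement(probe) in patient_dna
print(probe, flagged)                    # CTAAATGT True
```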

Genetic testing is now used for:

Determining sex

Carrier screening, or the identification of unaffected individuals who carry one copy of a gene for a disease that requires two copies for the disease to manifest

Prenatal diagnostic screening

Newborn screening

Presymptomatic testing for predicting adult-onset disorders

Presymptomatic testing for estimating the risk of developing adult-onset cancers

Confirmational diagnosis of symptomatic individuals

Forensic/identity testing

Some genetic tests are already available, although most of them are used in developed countries. The tests currently available can detect mutations associated with rare genetic disorders like cystic fibrosis, sickle cell anemia, and Huntington’s disease. Recently, tests have been developed to detect mutations associated with a handful of more complex conditions such as breast, ovarian, and colon cancers. However, gene tests may not detect every mutation associated with a particular condition because many are as yet undiscovered, and the ones they do detect may present different risks to different people and populations.[8]

Genetic testing also raises a number of ethical and social issues, including the following:

1. Absence of cure. There is still a lack of effective treatment or preventive measures for many diseases and conditions now being diagnosed or predicted using gene tests. Thus, revealing information about the risk of a future disease that has no existing cure presents an ethical dilemma for medical practitioners.

2. Ownership and control of genetic information. Who will own and control genetic information, or information about genes, gene products, or inherited characteristics derived from an individual or a group of people like indigenous communities? At the macro level, there is a possibility of a genetic divide, with developing countries that do not have access to medical applications of biotechnology being deprived of benefits accruing from products derived from genes obtained from their own people. Moreover, genetic information can pose a risk for minority population groups as it can lead to group stigmatization.

At the individual level, the absence of privacy and anti-discrimination legal protections in most countries can lead to discrimination in employment or insurance or other misuse of personal genetic information. This raises questions such as whether genetic privacy is different from medical privacy.[9]

3. Reproductive issues. These include the use of genetic information in reproductive decision-making and the possibility of genetically altering reproductive cells that may be passed on to future generations. For example, germline therapy forever changes the genetic make-up of an individual’s descendants. Thus, any error in technology or judgment may have far-reaching consequences. Ethical issues like designer babies and human cloning have also given rise to controversies among scientists and bioethicists, especially in the light of past abuses with eugenics.

4. Clinical issues. These center on the capabilities and limitations of doctors and other health-service providers, people identified with genetic conditions, and the general public in dealing with genetic information.

5. Effects on social institutions. Genetic tests reveal information about individuals and their families. Thus, test results can affect the dynamics within social institutions, particularly the family.

6. Conceptual and philosophical implications regarding human responsibility, free will vis-à-vis genetic determinism, and the concepts of health and disease.

Gene therapy using an Adenovirus vector. A new gene is inserted into an adenovirus vector, which is used to introduce the modified DNA into a human cell. If the treatment is successful, the new gene will make a functional protein.

Gene therapy may be used for treating, or even curing, genetic and acquired diseases like cancer and AIDS by using normal genes to supplement or replace defective genes or to bolster a normal function such as immunity. It can be used to target somatic (i.e., body) or germ (i.e., egg and sperm) cells. In somatic gene therapy, the genome of the recipient is changed, but this change is not passed along to the next generation. In contrast, in germline gene therapy, the egg and sperm cells of the parents are changed for the purpose of passing on the changes to their offspring.

There are basically two ways of implementing a gene therapy treatment:

1. Ex vivo, which means “outside the body” – Cells from the patient’s blood or bone marrow are removed and grown in the laboratory. They are then exposed to a virus carrying the desired gene. The virus enters the cells, and the desired gene becomes part of the DNA of the cells. The cells are allowed to grow in the laboratory before being returned to the patient by injection into a vein.

2. In vivo, which means “inside the body” – No cells are removed from the patient’s body. Instead, vectors are used to deliver the desired gene to cells in the patient’s body.

Currently, the use of gene therapy is limited. Somatic gene therapy is primarily at the experimental stage. Germline therapy is the subject of much discussion but it is not being actively investigated in larger animals and human beings.

As of June 2001, more than 500 clinical gene-therapy trials involving about 3,500 patients had been identified worldwide. Around 78% of these were in the United States, with Europe accounting for 18%. These trials focus on various types of cancer, although other multigenic diseases are being studied as well. Recently, two children born with severe combined immunodeficiency disorder (“SCID”) were reported to have been cured after being given genetically engineered cells.

Gene therapy faces many obstacles before it can become a practical approach for treating disease.[10] At least four of these obstacles are as follows:

1. Gene delivery tools. Genes are inserted into the body using gene carriers called vectors. The most common vectors now are viruses, which have evolved a way of encapsulating and delivering their genes to human cells in a pathogenic manner. Scientists manipulate the genome of the virus by removing the disease-causing genes and inserting the therapeutic genes. However, while viruses are effective, they can introduce problems like toxicity, immune and inflammatory responses, and gene control and targeting issues.

2. Limited knowledge of the functions of genes. Scientists currently know the functions of only a few genes. Hence, gene therapy can address only some of the genes that cause a particular disease. Worse, it is often not known whether a gene has more than one function, which creates uncertainty as to whether replacing such a gene is indeed desirable.

3. Multigene disorders and effect of environment. Most genetic disorders involve more than one gene. Moreover, most diseases involve the interaction of several genes and the environment. For example, many people with cancer not only inherit the disease gene for the disorder, but may have also failed to inherit specific tumor suppressor genes. Diet, exercise, smoking and other environmental factors may have also contributed to their disease.

4. High costs. Since gene therapy is relatively new and at an experimental stage, it is an expensive treatment to undertake. This explains why current studies are focused on illnesses commonly found in developed countries, where more people can afford to pay for treatment. It may take decades before developing countries can take advantage of this technology.

The Human Genome Project is an initiative of the U.S. Department of Energy (“DOE”) and the National Institutes of Health that aims to generate a high-quality reference sequence for the entire human genome and to identify all human genes.

The DOE and its predecessor agencies were assigned by the U.S. Congress to develop new energy resources and technologies and to pursue a deeper understanding of potential health and environmental risks posed by their production and use. In 1986, the DOE announced its Human Genome Initiative. Shortly thereafter, the DOE and National Institutes of Health developed a plan for a joint Human Genome Project (“HGP”), which officially began in 1990.

The HGP was originally planned to last 15 years. However, rapid technological advances and worldwide participation accelerated the completion date to 2003, making it a 13-year project. It has already enabled gene hunters to pinpoint genes associated with more than 30 disorders.[11]

Cloning involves the removal of the nucleus from one cell and its placement in an unfertilized egg cell whose nucleus has either been deactivated or removed.

There are two types of cloning:

1. Reproductive cloning. After a few divisions, the egg cell is placed into a uterus where it is allowed to develop into a fetus that is genetically identical to the donor of the original nucleus.

2. Therapeutic cloning.[12] The egg is placed into a Petri dish, where it develops into embryonic stem cells, which have shown potential for treating several ailments.[13]

In February 1997, cloning became the focus of media attention when Ian Wilmut and his colleagues at the Roslin Institute announced the successful cloning of a sheep, named Dolly, from a mammary gland cell of an adult female. The cloning of Dolly made it apparent to many that the techniques used to produce her could someday be used to clone human beings.[14] This stirred a great deal of controversy because of its ethical implications.

In January 2008, Christopher S. Chen made a discovery that could potentially alter the future of medicine: he found that cell signaling that is normally regulated biochemically can be simulated with magnetic nanoparticles attached to a cell surface. Chen’s research was inspired by the work of Donald Ingber, Robert Mannix, and Sanjay Kumar, who found that a nanobead can be attached to a monovalent ligand and that these compounds can bind to mast cells without triggering the clustering response. Normally, when a multivalent ligand attaches to a cell’s receptors, the signaling pathway is activated. These nanobeads, however, initiated cell signaling only when a magnetic field was applied to the area, causing the nanobeads to cluster; importantly, it was this clustering that triggered the cellular response, not merely the force applied to the cell by receptor binding. The experiment was carried out several times with time-varying activation cycles, and there is no reason to suggest that the response time could not be reduced to seconds or even milliseconds. Such short response times would have exciting applications in medicine. Currently it takes minutes or hours for a pharmaceutical to affect its environment, and when it does so, the changes are irreversible. With this research in mind, a future of millisecond response times and reversible effects is possible; one can imagine treating various allergic responses, colds, and similar ailments almost instantaneously. That future has not yet arrived, and further research and testing must be done in this area, but this is an important step in the right direction.[15]

Using the techniques of modern biotechnology, one or two genes may be transferred to a highly developed crop variety to impart a new character that would increase its yield (30). However, while increased crop yield is the most obvious application of modern biotechnology in agriculture, it is also the most difficult one to achieve. Current genetic engineering techniques work best for effects that are controlled by a single gene. Many of the genetic characteristics associated with yield (e.g., enhanced growth) are controlled by a large number of genes, each of which has a minimal effect on the overall yield (31). There is, therefore, much scientific work to be done in this area.

Crops containing genes that will enable them to withstand biotic and abiotic stresses may be developed. For example, drought and excessively salty soil are two important limiting factors in crop productivity. Biotechnologists are studying plants that can cope with these extreme conditions in the hope of finding the genes that enable them to do so and eventually transferring these genes to more desirable crops. One of the latest developments is the identification of a plant gene, At-DBF2, from thale cress, a tiny weed that is often used for plant research because it is very easy to grow and its genetic code is well mapped out. When this gene was inserted into tomato and tobacco cells, the cells were able to withstand environmental stresses like salt, drought, cold and heat far better than ordinary cells. If these preliminary results prove successful in larger trials, then At-DBF2 genes can help in engineering crops that can better withstand harsh environments (32). Researchers have also created transgenic rice plants that are resistant to rice yellow mottle virus (RYMV). In Africa, this virus destroys the majority of the rice crop and makes the surviving plants more susceptible to fungal infections (33).

Proteins in foods may be modified to increase their nutritional qualities. Proteins in legumes and cereals may be transformed to provide the amino acids needed by human beings for a balanced diet (34). A good example is the work of Professors Ingo Potrykus and Peter Beyer on the so-called Golden Rice™ (discussed below).

Modern biotechnology can be used to slow down the process of spoilage so that fruit can ripen longer on the plant and then be transported to the consumer with a still reasonable shelf life. This improves the taste, texture and appearance of the fruit. More importantly, it could expand the market for farmers in developing countries due to the reduction in spoilage.

Biotechnology in cheese production[16]: enzymes produced by micro-organisms provide an alternative to animal rennet (a cheese coagulant) and a more reliable supply for cheese makers. This also eliminates possible public concerns about animal-derived material. Enzymes offer an animal-friendly alternative to animal rennet; while providing constant quality, they are also less expensive.

About 85 million tons of wheat flour are used every year to bake bread.[17] By adding an enzyme called maltogenic amylase to the flour, bread stays fresh longer. Assuming that 10-15% of bread is thrown away, if bread could stay fresh another 5-7 days then about 2 million tons of flour per year would be saved. That corresponds to 40% of the bread consumed in a country such as the USA. This means more bread becomes available with no increase in input. In combination with other enzymes, bread can also be made bigger, more appetizing and better in a range of ways.
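As a rough back-of-the-envelope check of those figures, the short sketch below simply works through the arithmetic; the 85-million-ton total and the 2-million-ton savings come from the text, while the 10-15% waste share is the stated assumption.

```python
# Back-of-the-envelope arithmetic for the bread-waste figures quoted above.
flour_used_mt = 85.0            # million tons of wheat flour baked into bread per year
claimed_savings_mt = 2.0        # million tons of flour the text says could be saved

for waste_share in (0.10, 0.15):  # assumed share of bread thrown away
    wasted_flour_mt = flour_used_mt * waste_share
    print(f"At {waste_share:.0%} waste, ~{wasted_flour_mt:.1f} Mt of flour ends up in discarded bread")

print(f"Claimed savings of {claimed_savings_mt} Mt is about "
      f"{claimed_savings_mt / flour_used_mt:.1%} of all bread flour used per year")
```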

Reduced dependence on fertilizers, pesticides and other agrochemicals

Most of the current commercial applications of modern biotechnology in agriculture are on reducing the dependence of farmers on agrochemicals. For example, Bacillus thuringiensis (Bt) is a soil bacterium that produces a protein with insecticidal qualities. Traditionally, a fermentation process has been used to produce an insecticidal spray from these bacteria. In this form, the Bt toxin occurs as an inactive protoxin, which requires digestion by an insect to be effective. There are several Bt toxins and each one is specific to certain target insects. Crop plants have now been engineered to contain and express the genes for Bt toxin, which they produce in its active form. When a susceptible insect ingests the transgenic crop cultivar expressing the Bt protein, it stops feeding and soon thereafter dies as a result of the Bt toxin binding to its gut wall. Bt corn is now commercially available in a number of countries to control corn borer (a lepidopteran insect), which is otherwise controlled by spraying (a more difficult process).

Crops have also been genetically engineered to acquire tolerance to broad-spectrum herbicides. The lack of cost-effective herbicides with broad-spectrum activity and no crop injury was a consistent limitation in crop weed management. Multiple applications of numerous herbicides were routinely used to control a wide range of weed species detrimental to agronomic crops. Weed management tended to rely on preemergence treatment, that is, herbicide applications sprayed in anticipation of expected weed infestations rather than in response to the weeds actually present. Mechanical cultivation and hand weeding were often necessary to control weeds not controlled by herbicide applications. The introduction of herbicide-tolerant crops has the potential to reduce the number of herbicide active ingredients used for weed management, reduce the number of herbicide applications made during a season, and increase yield through improved weed management and less crop injury. Transgenic crops that express tolerance to glyphosate, glufosinate and bromoxynil have been developed. These herbicides can now be sprayed on transgenic crops without inflicting damage on the crops while killing nearby weeds (37).

From 1996 to 2001, herbicide tolerance was the most dominant trait introduced to commercially available transgenic crops, followed by insect resistance. In 2001, herbicide tolerance deployed in soybean, corn and cotton accounted for 77% of the 626,000 square kilometres planted to transgenic crops; Bt crops accounted for 15%; and "stacked genes" for herbicide tolerance and insect resistance used in both cotton and corn accounted for 8% (38).
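For concreteness, the 2001 shares quoted above translate into approximate planted areas as follows (a simple calculation using only the figures given in the text).

```python
# Approximate 2001 areas planted with each transgenic trait, from the shares above.
total_km2 = 626_000
shares = {
    "herbicide tolerance": 0.77,
    "Bt insect resistance": 0.15,
    "stacked herbicide tolerance + insect resistance": 0.08,
}
for trait, share in shares.items():
    print(f"{trait}: ~{share * total_km2:,.0f} km^2")
# The three shares account for the full transgenic area: 0.77 + 0.15 + 0.08 = 1.00
```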

Biotechnology is being applied for novel uses other than food. For example, oilseed can be modified to produce fatty acids for detergents, substitute fuels and petrochemicals.[citation needed] Potato, tomato, rice, and other plants have been genetically engineered to produce insulin[citation needed] and certain vaccines. If future clinical trials prove successful, the advantages of edible vaccines would be enormous, especially for developing countries. The transgenic plants could be grown locally and cheaply. Homegrown vaccines would also avoid the logistical and economic problems posed by having to transport traditional preparations over long distances and keep them cold while in transit. And since they are edible, they would not need syringes, which are not only an additional expense in traditional vaccine preparations but also a source of infection if contaminated.[18] In the case of insulin grown in transgenic plants, it might not be administered as an edible protein, but it could be produced at significantly lower cost than insulin produced in costly bioreactors.[citation needed]

There is, however, another side to the agricultural biotechnology issue. It includes increased herbicide usage and the resulting herbicide resistance, "superweeds," residues on and in food crops, genetic contamination of non-GM crops (which hurts organic and conventional farmers), damage to wildlife from glyphosate, and so on.[2][3]

Biotechnological engineering, or biological engineering, is a branch of engineering that focuses on biotechnologies and biological science. It includes disciplines such as biochemical engineering, biomedical engineering, bio-process engineering and biosystem engineering. Because the field is so new, the definition of a bioengineer is not yet settled; in general, however, bioengineering is an integrated approach that combines the fundamental biological sciences with traditional engineering principles.

Bioengineers are often employed to scale up bio processes from the laboratory scale to the manufacturing scale. Moreover, as with most engineers, they often deal with management, economic and legal issues. Since patents and regulation (e.g. FDA regulation in the U.S.) are very important issues for biotech enterprises, bioengineers are often required to have knowledge related to these issues.

The increasing number of biotech enterprises is likely to create a need for bioengineers in the years to come. Many universities throughout the world are now providing programs in bioengineering and biotechnology (as independent programs or specialty programs within more established engineering fields).

Biotechnology is being used to engineer and adapt organisms especially microorganisms in an effort to find sustainable ways to clean up contaminated environments. The elimination of a wide range of pollutants and wastes from the environment is an absolute requirement to promote a sustainable development of our society with low environmental impact. Biological processes play a major role in the removal of contaminants and biotechnology is taking advantage of the astonishing catabolic versatility of microorganisms to degrade/convert such compounds. New methodological breakthroughs in sequencing, genomics, proteomics, bioinformatics and imaging are producing vast amounts of information. In the field of Environmental Microbiology, genome-based global studies open a new era providing unprecedented in silico views of metabolic and regulatory networks, as well as clues to the evolution of degradation pathways and to the molecular adaptation strategies to changing environmental conditions. Functional genomic and metagenomic approaches are increasing our understanding of the relative importance of different pathways and regulatory networks to carbon flux in particular environments and for particular compounds and they will certainly accelerate the development of bioremediation technologies and biotransformation processes.[19]

Marine environments are especially vulnerable since oil spills of coastal regions and the open sea are poorly containable and mitigation is difficult. In addition to pollution through human activities, millions of tons of petroleum enter the marine environment every year from natural seepages. Despite its toxicity, a considerable fraction of petroleum oil entering marine systems is eliminated by the hydrocarbon-degrading activities of microbial communities, in particular by a remarkable recently discovered group of specialists, the so-called hydrocarbonoclastic bacteria (HCB).[20]

There are various TV series, films, and documentaries with biotechnological themes, among them Surface, The X-Files, The Island, I Am Legend, Torchwood and Horizon. Most of these convey the many ways in which the technology can go wrong, and the consequences that follow.

The majority of newspapers also take a pessimistic view of stem cell research, genetic engineering and the like. Some[attribution needed] would describe the media's overarching reaction to biotechnology as simple misunderstanding and fright.[citation needed] While there are legitimate concerns about the overwhelming power this technology may bring, most condemnations of the technology are a result of religious beliefs.[citation needed]

^ The National Action Plan on Breast Cancer and U.S. National Institutes of Health-Department of Energy Working Group on the Ethical, Legal and Social Implications (ELSI) have issued several recommendations to prevent workplace and insurance discrimination. The highlights of these recommendations, which may be taken into account in developing legislation to prevent genetic discrimination, may be found at http://www.ornl.gov/hgmis/elsi/legislat.html.

^ A number of scientists have called for the use the term “nuclear transplantation,” instead of “therapeutic cloning,” to help reduce public confusion. The term “cloning” has become synonymous with “somatic cell nuclear transfer,” a procedure that can be used for a variety of purposes, only one of which involves an intention to create a clone of an organism. They believe that the term “cloning” is best associated with the ultimate outcome or objective of the research and not the mechanism or technique used to achieve that objective. They argue that the goal of creating a nearly identical genetic copy of a human being is consistent with the term “human reproductive cloning,” but the goal of creating stem cells for regenerative medicine is not consistent with the term “therapeutic cloning.” The objective of the latter is to make tissue that is genetically compatible with that of the recipient, not to create a copy of the potential tissue recipient. Hence, “therapeutic cloning” is conceptually inaccurate. B. Vogelstein, B. Alberts, and K. Shine, “Please Don’t Call It Cloning!”, Science (15 February 2002), 1237

Nanotechnology:

Nanotechnology refers broadly to a field of applied science and technology whose unifying theme is the control of matter on the atomic and molecular scale, normally 1 to 100 nanometers, and the fabrication of devices with critical dimensions that lie within that size range.

Examples of nanotechnology in modern use are the manufacture of polymers based on molecular structure, and the design of computer chip layouts based on surface science. Despite the great promise of numerous nanotechnologies such as quantum dots and nanotubes, real commercial applications have mainly used the advantages of colloidal nanoparticles in bulk form, such as suntan lotion, cosmetics, protective coatings, drug delivery,[1] and stain resistant clothing.

Buckminsterfullerene C60, also known as the buckyball, is the simplest of the carbon structures known as fullerenes. Members of the fullerene family are a major subject of research falling under the nanotechnology umbrella.

The first use of the concepts in 'nano-technology' (but predating use of that name) was in "There's Plenty of Room at the Bottom," a talk given by physicist Richard Feynman at an American Physical Society meeting at Caltech on December 29, 1959. Feynman described a process by which the ability to manipulate individual atoms and molecules might be developed, using one set of precise tools to build and operate another proportionally smaller set, and so on down to the needed scale. In the course of this, he noted, scaling issues would arise from the changing magnitude of various physical phenomena: gravity would become less important, while surface tension and Van der Waals attraction would become more important. This basic idea appears plausible, and exponential assembly enhances it with parallelism to produce a useful quantity of end products.

The term "nanotechnology" was defined by Tokyo Science University Professor Norio Taniguchi in a 1974 paper (N. Taniguchi, "On the Basic Concept of 'Nano-Technology'," Proc. Intl. Conf. Prod. London, Part II, British Society of Precision Engineering, 1974) as follows: "'Nano-technology' mainly consists of the processing of, separation, consolidation, and deformation of materials by one atom or by one molecule." In the 1980s the basic idea of this definition was explored in much more depth by Dr. K. Eric Drexler, who promoted the technological significance of nanoscale phenomena and devices through speeches and the books Engines of Creation: The Coming Era of Nanotechnology (1986) and Nanosystems: Molecular Machinery, Manufacturing, and Computation,[2] and so the term acquired its current sense.

Nanotechnology and nanoscience got started in the early 1980s with two major developments: the birth of cluster science and the invention of the scanning tunneling microscope (STM). These developments led to the discovery of fullerenes in 1986 and carbon nanotubes a few years later. In another development, the synthesis and properties of semiconductor nanocrystals were studied; this led to a rapidly increasing number of metal oxide nanoparticles and quantum dots. The atomic force microscope was invented six years after the STM.

One nanometer (nm) is one billionth, or 10⁻⁹, of a meter. For comparison, typical carbon-carbon bond lengths, or the spacing between these atoms in a molecule, are in the range 0.12-0.15 nm, and a DNA double helix has a diameter of around 2 nm. On the other hand, the smallest cellular life forms, the bacteria of the genus Mycoplasma, are around 200 nm in length. To put that scale into context, the comparative size of a nanometer to a meter is the same as that of a marble to the size of the Earth.[3] Or, put another way, a nanometer is the amount a man's beard grows in the time it takes him to raise the razor to his face.[3]
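A quick calculation, using approximate assumed sizes (a roughly 1 cm marble and the Earth's roughly 12,742 km mean diameter), shows why the marble-to-Earth comparison works.

```python
# Rough check of the marble-to-Earth analogy: the ratio of a metre to a nanometre
# should be close to the ratio of the Earth's diameter to a marble's diameter.
m_per_nm_ratio = 1.0 / 1e-9          # 1 m / 1 nm = 1e9
marble_diameter_m = 0.01             # assumed ~1 cm marble
earth_diameter_m = 12_742_000.0      # Earth's mean diameter, ~12,742 km
earth_to_marble = earth_diameter_m / marble_diameter_m
print(f"m : nm         = {m_per_nm_ratio:.2e}")
print(f"Earth : marble = {earth_to_marble:.2e}")   # ~1.3e9, the same order of magnitude
```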

A number of physical phenomena become noticeably pronounced as the size of the system decreases. These include statistical mechanical effects, as well as quantum mechanical effects, for example the “quantum size effect” where the electronic properties of solids are altered with great reductions in particle size. This effect does not come into play in going from macro to micro dimensions; however, it becomes dominant when the nanometer size range is reached. Additionally, a number of physical (mechanical, electrical, optical, etc.) properties change when compared to macroscopic systems. One example is the increase in surface area to volume ratio, which alters the mechanical, thermal and catalytic properties of materials. Novel mechanical properties of nanosystems are of interest in nanomechanics research. The catalytic activity of nanomaterials also opens potential risks in their interaction with biomaterials.
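The surface-area-to-volume point can be made concrete with a sphere: for a sphere of radius r the ratio is 3/r, so shrinking a particle from millimetre to nanometre size increases the ratio a million-fold. The sketch below is purely illustrative.

```python
# Illustrative sketch: surface-area-to-volume ratio of a sphere grows as 3/r,
# so nanoscale particles expose vastly more surface per unit volume.
import math

def surface_to_volume(radius_m: float) -> float:
    area = 4.0 * math.pi * radius_m ** 2
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return area / volume  # equals 3 / radius_m

for radius in (1e-3, 1e-6, 1e-9):  # 1 mm, 1 µm, 1 nm particles
    print(f"radius {radius:.0e} m -> surface:volume = {surface_to_volume(radius):.2e} per metre")
```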

Materials reduced to the nanoscale can suddenly show very different properties compared to what they exhibit on a macroscale, enabling unique applications. For instance, opaque substances become transparent (copper); inert materials become catalysts (platinum); stable materials turn combustible (aluminum); solids turn into liquids at room temperature (gold); insulators become conductors (silicon). A material such as gold, which is chemically inert at normal scales, can serve as a potent chemical catalyst at nanoscales. Much of the fascination with nanotechnology stems from these unique quantum and surface phenomena that matter exhibits at the nanoscale.

Modern synthetic chemistry has reached the point where it is possible to prepare small molecules to almost any structure. These methods are used today to produce a wide variety of useful chemicals such as pharmaceuticals or commercial polymers. This ability raises the question of extending this kind of control to the next-larger level, seeking methods to assemble these single molecules into supramolecular assemblies consisting of many molecules arranged in a well defined manner.

Such bottom-up approaches should, broadly speaking, be able to produce devices in parallel and much cheaper than top-down methods, but could potentially be overwhelmed as the size and complexity of the desired assembly increases. Most useful structures require complex and thermodynamically unlikely arrangements of atoms. Nevertheless, there are many examples of self-assembly based on molecular recognition in biology, most notably Watson-Crick basepairing and enzyme-substrate interactions. The challenge for nanotechnology is whether these principles can be used to engineer novel constructs in addition to natural ones.

Molecular nanotechnology, sometimes called molecular manufacturing, is a term given to the concept of engineered nanosystems (nanoscale machines) operating on the molecular scale. It is especially associated with the concept of a molecular assembler, a machine that can produce a desired structure or device atom-by-atom using the principles of mechanosynthesis. Manufacturing in the context of productive nanosystems is not related to, and should be clearly distinguished from, the conventional technologies used to manufacture nanomaterials such as carbon nanotubes and nanoparticles.

When the term "nanotechnology" was independently coined and popularized by Eric Drexler (who at the time was unaware of the earlier usage by Norio Taniguchi), it referred to a future manufacturing technology based on molecular machine systems. The premise was that molecular-scale biological analogues of traditional machine components demonstrated that molecular machines were possible: the countless examples found in biology show that sophisticated, stochastically optimised biological machines can be produced.

It is hoped that developments in nanotechnology will make possible their construction by some other means, perhaps using biomimetic principles. However, Drexler and other researchers[4] have proposed that advanced nanotechnology, although perhaps initially implemented by biomimetic means, ultimately could be based on mechanical engineering principles, namely, a manufacturing technology based on the mechanical functionality of these components (such as gears, bearings, motors, and structural members) that would enable programmable, positional assembly to atomic specification (PNAS-1981). The physics and engineering performance of exemplar designs were analyzed in Drexler's book Nanosystems.

But Drexler's analysis is very qualitative and does not address very pressing issues, such as the "fat fingers" and "sticky fingers" problems. In general it is very difficult to assemble devices on the atomic scale, since all one has with which to position atoms are other atoms of comparable size and stickiness. Another view, put forth by Carlo Montemagno,[5] is that future nanosystems will be hybrids of silicon technology and biological molecular machines. Yet another view, put forward by the late Richard Smalley, is that mechanosynthesis is impossible due to the difficulties in mechanically manipulating individual molecules.

This led to an exchange of letters in the ACS publication Chemical & Engineering News in 2003.[6] Though biology clearly demonstrates that molecular machine systems are possible, non-biological molecular machines are today only in their infancy. Leaders in research on non-biological molecular machines are Dr. Alex Zettl and his colleagues at Lawrence Berkeley Laboratories and UC Berkeley. They have constructed at least three distinct molecular devices whose motion is controlled from the desktop with changing voltage: a nanotube nanomotor, a molecular actuator, and a nanoelectromechanical relaxation oscillator.

An experiment indicating that positional molecular assembly is possible was performed by Ho and Lee at Cornell University in 1999. They used a scanning tunneling microscope to move an individual carbon monoxide molecule (CO) to an individual iron atom (Fe) sitting on a flat silver crystal, and chemically bound the CO to the Fe by applying a voltage.