the first 30 years, AI pursued a dream of intelligent machines. When researchers were unable to come even close to realizing that dream, they gave up on rule-based AI systems and turned instead to machine learning, focused on automating simple mental tasks rather than achieving general intelligence. They were able to build impressive automations based on neural networks without trying to imitate human brain processes. Today's AI has become so successful, with neural network models that far outperform humans at some mental tasks, that we now face social disruption from joblessness caused by AI-driven automation.

Conclusion

We welcome the enthusiasm for computer science and its ways of thinking.
As professionals, we need to be careful
that in our enthusiasm we do not entertain and propagate misconceptions
about our field. Let us not let others
oversell our field. Let us foster expectations we can fulfill.

6. Guzdial, M. Learner-Centered Design of Computing Education: Research on Computing for Everyone. Morgan & Claypool, 2015.

7. Tedre, M. Science of Computing: Shaping a Discipline. CRC Press, Taylor & Francis, 2014.

8. Tedre, M. and Denning, P.J. The long quest for computational thinking. In Proceedings of the 16th Koli Calling Conference on Computing Education Research (Koli, Finland, Nov. 24–27, 2016), 120–129.

9. Wing, J. Computational thinking. Commun. ACM 49, 3 (Mar. 2006), 33–35.

Peter J. Denning (pjd@nps.edu) is Distinguished Professor of Computer Science and Director of the Cebrowski Institute for Information Innovation at the Naval Postgraduate School in Monterey, CA, is Editor of ACM Ubiquity, and is a past president of ACM. The author's views expressed here are not necessarily those of his employer or the U.S. federal government.

Matti Tedre (matti.tedre@acm.org) is Associate Professor of Computer and Systems Sciences at Stockholm University, Sweden, adjunct professor at the University of Eastern Finland, and the author of Science of Computing: Shaping a Discipline (CRC Press, Taylor & Francis, 2014).

Pat Yongpradit (pat@code.org) is the Chief Academic Officer for Code.org and served as staff lead on the development of the K–12 Computer Science Framework. A former high school computer science teacher, Pat has been featured in the book American Teacher: Heroes in the Classroom, has been recognized as a Microsoft Worldwide Innovative Educator, and is certified in biology, physics, math, health, and technology education.

Copyright held by authors.

ing that CT is a knowledge set that drives the programming skill. A student who scores well on tests to explain and illustrate abstraction and decomposition can still be an incompetent or insensitive algorithm designer. The only way to learn the skill is to practice for many hours until you master it. The newest CSTA guidelines move to counteract this upside-down story, emphasizing exhibition of programming skill in contests and projects.d

Because computation has invaded so many fields, and because people who do computational design in those fields have made many new discoveries, some have hypothesized that CT is the most fundamental kind of thinking, trumping all the others such as systems thinking, design thinking, logical thinking, scientific thinking, and so on. This is computational chauvinism. There is no basis to claim that CT is more fundamental than other kinds of thinking.

When we engage in everyday step-by-step procedures we are thinking computationally. Everyday step-by-step procedures use the term "step" loosely to refer to an isolated action of a person. That meaning of "step" is quite different from a machine instruction; thus most "human executable recipes" cannot be implemented by a machine. This misconception leads people to misunderstand algorithms and therefore to overestimate what a machine can do.

Step-by-step procedures in life, such
as recipes, do not satisfy the definition
of algorithm because not all their steps
are machine executable. Just because
humans can simulate some computational steps does not change the requirement for a machine to do the steps. This
misconception undermines the definition of algorithm and teaches people the
wrong things about computing.
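To make the contrast concrete, here is a minimal sketch in Python (our illustration, not an example from the original text): every step of Euclid's algorithm is machine executable, whereas a typical recipe step is not.

```python
# Every step of Euclid's algorithm is precisely defined and machine
# executable, so the procedure satisfies the definition of an algorithm.
def gcd(a: int, b: int) -> int:
    while b != 0:
        a, b = b, a % b  # an unambiguous, machine-executable step
    return a

print(gcd(48, 18))  # prints 6

# By contrast, a recipe step such as "stir until smooth" is not machine
# executable: "smooth" names no precise test a machine can evaluate, so a
# procedure containing such a step fails the definition of an algorithm.
```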

Computational thinking improves problem-solving skills in other fields. This old claim is called the "transfer hypothesis." It assumes that a thinking skill automatically transfers into other domains simply by being present in the brain. It would revolutionize education if true. Education researchers have studied automatic transfer of CT for three decades and have never been able to substantiate it.6 There is evidence on the other side: slavish faith in a single way of thinking can make you a worse problem solver than if you are open to multiple ways of thinking.

d One of the K–12 curriculum recommendations actually cites making a peanut butter and jelly sandwich as an example of an algorithm.

Another form of transfer—designed
transfer—holds more promise. Teachers in a non-CS field, such as biology,
can bring computational thinking into
their field by showing how programming is useful and relevant in that field.
In other words, studying computer science alone will not make you a better
biologist. You need to learn biology to
accomplish that.

CS is basically science and math.
The engineering needed to produce the
technology is all based on the science
and math. History tells us otherwise.
Electrical engineers designed and built
the first electronic computers without
knowing any computer science—CS did
not exist at the time. Their goal was to
harness the movement of electrons in
circuits to perform logical operations
and numerical calculations. Programs
controlled the circuits by opening and
closing gates. Later scientists and mathematicians brought rigorous formal and
experimental methods to computing.
To find out what works and what does
not, engineers tinker and scientists test
hypotheses. In much of computing the
engineering has preceded the science.
However, both engineers and scientists
contribute to a vibrant computing profession: they need each other.

Old CS is obsolete. The important developments in CS such as AI and big data analysis are all recent. Computing technology is unique among technologies in that it sustains exponential growth (Moore's Law) at the levels of individual chips, systems, and economies.3 Thus it can seem that computer technology continually fosters upheavals in society, economies, and politics, and that it obsoletes itself every decade or so. Yet many of the familiar principles of CS were identified in the 1950s and 1960s and continue to be relevant today. The early CS shaped the world we find ourselves in today. Our history shows us what worked and what did not. The resurrection of the belief that CS = programming illustrates how those who forget history can repeat it.

Artificial intelligence is an old subfield of CS, started in the early 1950s. For