Artificial Intelligence

CS405 introduces the field of artificial intelligence (AI). Materials on AI programming, logic, search, game playing, machine learning, natural language understanding, and robotics introduce the student to AI methods, tools, and techniques, their application to computational problems, and their contribution to understanding intelligence. Because each of these topics could be a course unto itself, the material is introductory rather than exhaustive. Each unit presents the problem a topic addresses, current progress, and approaches to the problem. The readings include and cite more materials than are covered in this course, and students are encouraged to use these resources to pursue topics of interest after the course ends.

Requirements for Completion: A passing grade on the quizzes and the final exam, achievement of the objectives of each unit and the goals of the course, and the ability to apply the concepts taught in the course.

Time Requirements: This course should take approximately 120 hours to complete.

Tips/Suggestions: AI draws on many disciplines, including mathematics, logic, programming, psychology, neurobiology, linguistics, engineering, and even philosophy, while also contributing its own concepts and techniques for building hardware and software systems that perform intelligent tasks and activities. AI can thus be viewed as a discipline for integrating and applying knowledge from many fields to discover computational solutions to problems, tasks, and behaviors that currently require human capabilities.

While AI applications can be developed in many different languages, certain language features make programming AI applications straightforward. Prolog is structured so that its language features directly support AI program development. Other languages, such as Java, support AI programming through code libraries. At this point in your career as a computer science major, you have already taken introductory programming courses; these should help you learn Prolog and use code libraries in other languages for AI program development.
This unit provides an introduction to AI via programming features that support basic AI applications. By satisfying the goals of this unit, you will be familiar with AI programming and be able to use it in later units to implement various AI applications.

Instructions: Look over the slides in this introductory lecture, located in section 6, week 1, on the webpage. They give a good summary of the types and scope of AI applications and the state of the art in the field. This reading covers some of the early history of AI. (The slides on game playing, in particular slides 10, 11, and 15, apply to section 4.1.1 below, on the history of game playing in AI.)

Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

Instructions: Read chapter 1, pages 1-4. AI concepts and techniques are learned in two steps: theory, then implementation of the theory in programs. AI techniques are difficult to program, and the programming details can obscure them. To make these techniques explicit and to hide the programming details, AI languages such as Lisp, Prolog, and Scheme have been designed with features that directly support the implementation of AI techniques. Class libraries and source-code libraries serve the same purpose for general-purpose languages, such as C++ and Java. The Watson text uses Java classes for program examples of AI concepts and techniques. Java, being widely used, has the added advantage of making these techniques more widely available.

Instructions: Look over the slides in section 6.034 Notes: Section 11.1, on logic as a programming language. These slides depend on, at least, an introductory knowledge of logic. If you need to, read unit 5, slides 5.2.1 and 5.2.2 through 5.2.2.3, for a review of basic logic.

In procedural programming, you write statements that specify how to manipulate the data, i.e., you write specific algorithms. In logic programming, a form of declarative programming, you write statements that specify what is true of the world being represented and rely on general-purpose inference algorithms that are built into the logic programming system.
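
To make the contrast concrete, here is a minimal sketch of the logic-programming idea in Java (our own illustration, not code from the course materials): facts and rules state what is true, and a general-purpose forward-chaining loop, rather than a hand-written algorithm, derives the consequences.

```java
import java.util.*;

// A tiny forward-chaining sketch: we state what is true (facts and rules),
// and a general-purpose loop derives everything that follows.
public class ForwardChaining {
    public record Rule(Set<String> premises, String conclusion) {}

    public static Set<String> derive(Set<String> facts, List<Rule> rules) {
        Set<String> known = new HashSet<>(facts);
        boolean changed = true;
        while (changed) {                // repeat until no rule adds a new fact
            changed = false;
            for (Rule r : rules) {
                if (known.containsAll(r.premises()) && known.add(r.conclusion())) {
                    changed = true;
                }
            }
        }
        return known;
    }
}
```

For example, from the facts "rainy" and "cold" and the rules "rainy implies wet" and "wet and cold imply icy", `derive` concludes both "wet" and "icy". A real logic programming system such as Prolog adds variables, unification, and backward chaining on top of this basic idea.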

Previous coursework has familiarized you with searching algorithms. In this unit, you will learn how to implement standard searching algorithms. We will first discuss the motivation behind exploring search from an AI perspective, learning new terminology as we go that will be used in this unit and beyond. We will then learn about basic search methods, as well as time and memory requirements, concluding with a discussion of the advantages and disadvantages of searching algorithms. By the end of this unit, you will be able to apply AI techniques when developing searching algorithms.

Instructions: Select the above link and read chapter 2, page 5, up to section 2.1. This introduces the search problem. One way to solve a problem is by searching for a solution in a set, called the search space. This approach assumes that a search can be done in an acceptable amount of time at an acceptable cost.

Instructions: Study the first 10 slides, slides 2.1.1 to 2.1.10, in section 6.034 Notes: Section 2.1, on the use of trees and graphs for the representation of search state spaces. These slides present terminology for trees and graphs.

Note: This subunit is covered by the MIT reading assigned in subunit 2.1.2, slides 1 and 2, which introduce search of trees and graphs as a basic problem-solving approach.

2.1.2.2 Graphs

Note: This subunit is covered by the MIT reading assigned in subunit 2.1.2, slides 1 and 2. A graph generalizes a tree: it may contain loops, and a node may have multiple parents. We are interested in graphs whose links have direction.

2.1.2.2.1 Directed

Note: This subunit is covered by the MIT reading assigned in 2.1.2 above, slide 3. A directed graph is a graph where the links have direction—like a one-way street.

2.1.2.2.2 Undirected

Note: This subunit is covered by the MIT reading assigned in 2.1.2 above, slide 4. An undirected graph is defined here as a graph where the links go in both directions—like a two-way street. Examples of graphs are given in slides 5–6.
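
As a small illustration (our own sketch, not code from the slides), an adjacency-list representation in Java makes the one-way/two-way distinction explicit: a directed link is stored once, while an undirected link is stored in both directions.

```java
import java.util.*;

public class Graphs {
    // Adjacency-list representation: each node maps to the list of nodes
    // reachable from it along a single link.
    public static void addDirected(Map<String, List<String>> g, String from, String to) {
        g.computeIfAbsent(from, k -> new ArrayList<>()).add(to);  // one-way street
    }

    public static void addUndirected(Map<String, List<String>> g, String a, String b) {
        addDirected(g, a, b);   // two-way street: store the link
        addDirected(g, b, a);   // in both directions
    }

    public static Map<String, List<String>> example() {
        Map<String, List<String>> g = new HashMap<>();
        addDirected(g, "A", "B");     // A -> B only
        addUndirected(g, "B", "C");   // B <-> C
        return g;
    }
}
```

In the example, B is reachable from A but A is not reachable from B, while B and C are mutually reachable.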

2.1.2.3 States, Actions, and Goals

Note: This subunit is covered by the MIT reading of 2.1.2 above, slides 7–10. These slides describe the use of trees and graphs in solving problems using search. They show how graph searching can be transformed into tree searching and how trees can be used to model problems by using nodes to represent states, links to represent actions, and specific nodes to represent goals.

Instructions: Select the link above and read sections 2.2 and 2.3 for ideas on basic searching and implementing the ideas in Java programs. These ideas are expanded upon in the lecture readings below. This reading also applies to 2.2.1.1 and 2.2.1.2.

Instructions: Study slides 2.2.7 through 2.2.18 for implementation guidance for Depth-First and Breadth-First searches, including the use of heuristics to improve the efficiency of the search. Slides 2.2.19 to 2.2.29 discuss the performance of the Depth-First and Breadth-First algorithms with respect to time and space (memory). Read slides 2.3.1–2.3.13 that step through the Depth-First search algorithm. Also, read slides 2.3.14–2.3.24, which step through the Breadth-First search algorithm. Stepping through the algorithms shows what happens when they are executing; this insight leads to an improved algorithm, called Progressive Deepening.

These readings also apply to sections 2.2.1.1 and 2.2.1.2. Slides 2.2.15–2.2.18 apply to 2.2.2.2 below.

Note: This subunit is covered by the MIT reading of 2.2.1 above. In Depth-First Search, the search traverses along the links to descendants, to a particular depth, before traversing the links to siblings.

2.2.1.2 Breadth-First

Note: This subunit is covered by the MIT reading of 2.2.1 above. In Breadth-First Search, the search traverses along the links to siblings before traversing the links to descendants.
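
The two strategies can share one implementation; the only difference is whether the frontier is treated as a stack (Depth-First) or a queue (Breadth-First). The sketch below is our own illustration, not code from the MIT slides or the Watson text.

```java
import java.util.*;

public class BasicSearch {
    // Depth-first and breadth-first differ only in how the frontier is used:
    // removing from the back (LIFO) gives depth-first,
    // removing from the front (FIFO) gives breadth-first.
    public static List<String> search(Map<String, List<String>> g,
                                      String start, String goal, boolean depthFirst) {
        Deque<List<String>> frontier = new ArrayDeque<>();  // holds paths, not just nodes
        Set<String> visited = new HashSet<>();
        frontier.add(List.of(start));
        while (!frontier.isEmpty()) {
            List<String> path = depthFirst ? frontier.removeLast() : frontier.removeFirst();
            String node = path.get(path.size() - 1);
            if (node.equals(goal)) return path;
            if (!visited.add(node)) continue;   // skip already-expanded nodes
            for (String next : g.getOrDefault(node, List.of())) {
                List<String> extended = new ArrayList<>(path);
                extended.add(next);
                frontier.add(extended);
            }
        }
        return null;  // no path to the goal
    }
}
```

On the graph with links A to B, A to C, B to D, and C to D, the breadth-first variant returns the path [A, B, D].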

Instructions: Select the link above and read section 2.4 for ideas on improving searching and implementing the ideas in Java programs. These ideas are expanded upon in the readings below. This reading also applies to 2.2.2.1 and 2.2.2.2.

Instructions: Study slides 2.2.30 through 2.2.36, which show that combining the Depth-First and Breadth-First strategies gives better performance with respect to time and space. Read slides 2.3.25–2.3.31, which step through the Best-First search algorithm, which utilizes a heuristic value in selecting the next node to search.

Note: This subunit is covered by the MIT reading of 2.2.2 above. Depth-First and Breadth-First are called 'blind' searches or uninformed searches. Best-First uses a heuristic value of the nodes to select which link to traverse next. It selects the link that goes to the node with the best heuristic value. Searches that use information about the problem goal to select nodes to traverse to are called heuristic searches, or informed searches.
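
Best-First search can be sketched by replacing the stack or queue of the blind searches with a priority queue ordered by the heuristic value of a path's last node. This is our own illustration, with the heuristic supplied as a simple map.

```java
import java.util.*;

public class BestFirst {
    // Greedy best-first search: always expand the frontier path whose end
    // node has the smallest heuristic value (estimated distance to the goal).
    public static List<String> search(Map<String, List<String>> g,
                                      Map<String, Integer> h,
                                      String start, String goal) {
        PriorityQueue<List<String>> frontier = new PriorityQueue<>(
            Comparator.comparingInt((List<String> p) -> h.get(p.get(p.size() - 1))));
        Set<String> visited = new HashSet<>();
        frontier.add(List.of(start));
        while (!frontier.isEmpty()) {
            List<String> path = frontier.poll();
            String node = path.get(path.size() - 1);
            if (node.equals(goal)) return path;
            if (!visited.add(node)) continue;   // skip already-expanded nodes
            for (String next : g.getOrDefault(node, List.of())) {
                List<String> extended = new ArrayList<>(path);
                extended.add(next);
                frontier.add(extended);
            }
        }
        return null;  // no path to the goal
    }
}
```

Note that the heuristic map must cover every node the search can reach, including the goal (whose heuristic value is 0).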

2.2.2.2 Heuristics

Note: This subunit is covered by the MIT reading of 2.2.1 above, slides 2.2.15–2.2.18. Also, this subunit is addressed by the MIT reading of 2.2.2, slide 2.3.25.

Note: This subunit is covered by the MIT reading of 2.2.3 above. The Uniform Cost search algorithm is an optimal, uninformed search. Each link has an associated cost. The cost of a path is the sum of the cost of the links that comprise the path. The Uniform Cost algorithm selects and expands the node that has the lowest cost.

2.2.3.2 Shortest Path

Note: This subunit is covered by the MIT reading of 2.2.3 above. If the cost of a link is the distance between the end nodes of the link, then the Uniform Cost Algorithm finds the shortest path from the start node to the goal node. The Uniform Cost Algorithm uses the “past” information, i.e., information about a path from the start node to a node, instead of information about a path from a node to the goal node.
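
The idea can be sketched with a priority queue ordered by path cost so far (our own Java illustration; link costs are given as nested maps).

```java
import java.util.*;

public class UniformCost {
    public record Path(List<String> nodes, int cost) {}

    // Uniform-cost search: expand the frontier path with the lowest total
    // cost so far. The first goal path removed from the frontier is optimal.
    public static Path search(Map<String, Map<String, Integer>> g,
                              String start, String goal) {
        PriorityQueue<Path> frontier =
            new PriorityQueue<Path>(Comparator.comparingInt(Path::cost));
        Set<String> expanded = new HashSet<>();
        frontier.add(new Path(List.of(start), 0));
        while (!frontier.isEmpty()) {
            Path p = frontier.poll();
            String node = p.nodes().get(p.nodes().size() - 1);
            if (node.equals(goal)) return p;
            if (!expanded.add(node)) continue;  // skip already-expanded nodes
            for (var e : g.getOrDefault(node, Map.of()).entrySet()) {
                List<String> ext = new ArrayList<>(p.nodes());
                ext.add(e.getKey());
                frontier.add(new Path(ext, p.cost() + e.getValue()));
            }
        }
        return null;  // goal unreachable
    }
}
```

Because paths leave the frontier in order of increasing cost, the first goal path removed is a lowest-cost path; with link costs equal to distances, this is the shortest path.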

Instructions: Study slides 2.5.1–2.5.21, in 6.034 Notes: Section 2.5, on the A* algorithm, which finds the shortest path to a goal (an optimal path) using a heuristic estimate of the distance to the goal.

Note: This subunit is covered by the MIT reading of 2.2.4. Uniform Cost search that orders the frontier by the estimated total cost of a path from the start node to the goal (the sum of the cost from the start to a node plus the estimated cost from that node to the goal) is called the A* algorithm.

2.2.4.2 Heuristic Shortest Path

Note: This subunit is covered by the MIT reading of 2.2.4. When the A* algorithm uses distance or length as the cost of a link, it finds the shortest path, ordering the search by the estimated total distance from the start node through each node to the goal.
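
A minimal sketch (our own, not the slides' code) differs from Uniform Cost only in the ordering function: f(n) = cost so far + heuristic estimate of the remaining distance.

```java
import java.util.*;

public class AStar {
    // A*: order the frontier by f(n) = cost so far + heuristic estimate of
    // the remaining distance. With an admissible heuristic (one that never
    // overestimates), the first goal entry removed is a shortest path.
    public static int shortestPathCost(Map<String, Map<String, Integer>> g,
                                       Map<String, Integer> h,
                                       String start, String goal) {
        record Entry(String node, int costSoFar) {}
        PriorityQueue<Entry> frontier = new PriorityQueue<>(
            Comparator.comparingInt((Entry e) -> e.costSoFar() + h.get(e.node())));
        Set<String> expanded = new HashSet<>();
        frontier.add(new Entry(start, 0));
        while (!frontier.isEmpty()) {
            Entry e = frontier.poll();
            if (e.node().equals(goal)) return e.costSoFar();
            if (!expanded.add(e.node())) continue;  // skip already-expanded nodes
            for (var edge : g.getOrDefault(e.node(), Map.of()).entrySet()) {
                frontier.add(new Entry(edge.getKey(), e.costSoFar() + edge.getValue()));
            }
        }
        return -1;  // goal unreachable
    }
}
```

With the links A to B (cost 1), A to C (cost 4), and B to C (cost 1), and the admissible heuristic h(A)=2, h(B)=1, h(C)=0, the shortest path cost returned is 2.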

Instructions: Read slides 2.2.19 to 2.2.31, in the PDF file Chapter 2: Search1, section 6.034 Notes: Section 2.2, about the efficiency of the four types of searches. Efficiency considers time, cost, and resources, such as memory.

Instructions: Read slides 2.6.1 to 2.7.26, in the PDF file Chapter 2: Search3, section 6.034 Notes: Sections 2.6 and 2.7, for additional information on heuristics for optimality (finding an optimal path to a goal), and about complexity and efficiency of search. Efficiency considers time, cost, and resources, such as memory.

AI applications are built upon the idea of a problem statement with constraints. In AI, we must work within those constraints in order to develop an optimal solution. In this unit, we will define "problem" in specific AI terms and discuss different approaches to constraint satisfaction. Constraint satisfaction is an important subject area within AI. The famous Map Coloring Problem has simple variables and simple constraints and is thus useful in illustrating the basics of constraint satisfaction. By the end of this unit, you will be able to solve basic constraint satisfaction problems.
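
To preview the flavor of the unit, here is a small backtracking solver for the Map Coloring Problem (our own sketch; the region names and colors are illustrative).

```java
import java.util.*;

public class MapColoring {
    // Backtracking search for the map-coloring CSP: assign each region a
    // color so that no two adjacent regions share a color.
    public static Map<String, String> color(List<String> regions,
                                            Map<String, Set<String>> adjacent,
                                            List<String> colors) {
        return backtrack(regions, adjacent, colors, new HashMap<>(), 0);
    }

    private static Map<String, String> backtrack(List<String> regions,
            Map<String, Set<String>> adjacent, List<String> colors,
            Map<String, String> assignment, int i) {
        if (i == regions.size()) return assignment;   // every region is colored
        String region = regions.get(i);
        for (String c : colors) {
            boolean consistent = adjacent.getOrDefault(region, Set.of()).stream()
                .noneMatch(n -> c.equals(assignment.get(n)));
            if (consistent) {
                assignment.put(region, c);
                Map<String, String> result =
                    backtrack(regions, adjacent, colors, assignment, i + 1);
                if (result != null) return result;
                assignment.remove(region);            // undo and try the next color
            }
        }
        return null;  // no color works here: backtrack
    }
}
```

A triangle of three mutually adjacent regions is colorable with three colors but not with two; the solver returns null in the unsolvable case.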

Note: This subunit is covered by the MIT reading of 3.2.4. See slides 3.3.1–3.3.7. This approach makes the search more efficient by reordering the variables using information available during the search.

3.2.4.2 Incremental Repair

Note: This subunit is covered by the MIT reading of 3.2.4. See slides 3.3.7–3.3.12. Incremental repair is another heuristic approach to making the search more efficient. It can be done without backtracking or used in conjunction with backtracking.

Unit 4: Game Playing

Some of the earliest and most recognizable AI applications are games such as chess and tic-tac-toe, the most famous example being the chess match between Garry Kasparov and IBM's Deep Blue. In this unit, we will discuss the development of game-playing applications, as well as the relationship between game-playing and searching algorithms. The unit will also provide you with some best practices for building game programs.

Unit Note: This unit has been designed to teach you how to design algorithms for game-playing applications. For our purposes, tic-tac-toe, which uses features of search and constraint satisfaction, is the simplest. We suggest that, as an informal exercise, you create a tic-tac-toe application and then play against it, noting the algorithm's success rate and determining which modifications would improve its performance.

Instructions: Select the PowerPoint file slides lecture 1(Intro) and review slides 10, 11, and 15. Game playing provided numerous applications that motivated the development of AI techniques—for example, search and problem-solving techniques. In addition, it served as a popular benchmark for demonstrating progress and improvements of AI research.

Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.

4.1.2 Alan Turing and Checkers

Note: The readings of 4.1 and 4.1.1 apply to this subunit. Arthur Samuel's checkers-playing program is a famous example.

4.1.3 Garry Kasparov vs. IBM’s Deep Blue

Note: The readings of 4.1 and 4.1.1 apply to this subunit. Kasparov vs. IBM's Deep Blue is a famous chess match that demonstrated master-level chess play by a machine.

Instructions: Select the link above and read section 2.5 for ideas on programming search and implementing the ideas in Java programs for game playing. Min-max is a search strategy for two-person games in which a move is selected by choosing the child node with either the maximum value (when the player strives to maximize his or her own advantage) or the minimum value (when the opponent strives to minimize the first player's advantage). Alpha-Beta search improves on min-max by eliminating, or pruning, branches from the search tree.
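
The two ideas can be sketched together (our own illustration over an explicit game tree; a real game program would generate children from board positions instead of storing the tree).

```java
import java.util.List;

public class AlphaBeta {
    // Minimax with alpha-beta pruning over an explicit game tree: internal
    // nodes hold children, leaves hold a score from the maximizer's viewpoint.
    public interface Node {}
    public record Leaf(int score) implements Node {}
    public record Inner(List<Node> children) implements Node {
        public static Inner of(Node... children) { return new Inner(List.of(children)); }
    }

    public static int value(Node n, int alpha, int beta, boolean maximizing) {
        if (n instanceof Leaf leaf) return leaf.score();
        int best = maximizing ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (Node child : ((Inner) n).children()) {
            int v = value(child, alpha, beta, !maximizing);
            if (maximizing) {
                best = Math.max(best, v);
                alpha = Math.max(alpha, v);
            } else {
                best = Math.min(best, v);
                beta = Math.min(beta, v);
            }
            if (beta <= alpha) break;  // prune: the other player avoids this branch
        }
        return best;
    }
}
```

The search is started at the root with alpha = Integer.MIN_VALUE and beta = Integer.MAX_VALUE; the `break` statement is the pruning step that distinguishes Alpha-Beta from plain min-max.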

Note: The readings of 4.2.1 apply to this subunit. The states of a board game are the board positions or configurations of the game pieces—for example, the chess pieces on the chessboard. The states are represented by the nodes of a tree.

4.2.1.2 Operators: Game Moves

Note: The readings of 4.2.1 apply to this subunit. The game moves, or operators, are represented by the arcs of a tree.

4.2.1.3 Goal State: Winning Position

Note: The readings of 4.2.1 apply to this subunit. The goal state is a goal node, i.e., the board configuration that depicts the winning position of the game pieces.

4.2.1.4 Heuristics: Scoring Function

Note: The readings of 4.2.1 apply to this subunit. In many games, the size of the search tree can be very large and the complexity of the search can be high. Heuristics are used to guide the search, increase efficiency, and improve game-playing ability.

Note: This subunit is addressed in the readings of 4.3. The title of this subunit gives one reason why game playing is a desirable application domain for introductory AI investigation; namely, the solution or goal has a precise specification.

4.3.2 Success Follows Long Periods of Gradual Refinement

Note: This subunit is addressed in the readings of 4.3. The title of this subunit gives an indication of the difficulty of designing and building a program that competes in difficult games at a high level. Either a revolutionary new algorithm is discovered, or improvements come from small incremental steps learned from experimentation over a long period of time.

Unit 5: Logic

We have already briefly discussed logic, but this unit will provide you with a more formal definition. We will learn about two main types of logic—propositional and first-order. Prolog was designed for expressing logic. This unit gives you a strong foundation in logic so that you will be able to use or learn Prolog more easily to program logic applications. Similarly, you will be able to use or learn class libraries that support AI techniques in other languages, like C++ and Java.

Note: The reading of 5.1 applies to this subunit. See slide 9.1.1. Logic is a formal language and the definitions, rules, and techniques that are used for formal languages apply to logic statements.

5.1.2 Building Blocks, Syntax, and Semantics

Note: The reading of 5.1 applies to this subunit. As with formal languages, logic statements have syntax—rules for writing well-formed statements in logic. Statements also have semantics, which describes the meaning associated with the statement. The meaning derives from associations of the elements of the logic statement with elements of a domain of discourse. From this association, the truth or falsehood of a statement can be determined. (A domain of discourse is a set of members, with rules for making assertions about the members, wherein the truth or falsehood of assertions is known or can be inferred.)

Instructions: Select the PowerPoint file slides lecture 8 (Logic 2) and read the slides 1–4. These will be elaborated in later sections covering the propositional calculus and the predicate calculus.

Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.

5.2 Types of Logic

Note: We will be using two types of logic: propositional logic and first-order logic. The difference between the two is in the use of variables. Propositional logic has no variables that take on values in a domain. For example, P is a sentence in the propositional calculus that is true or false. An example from the predicate calculus is P(X,Y), where P is a relation and X and Y are variables that take on values in a domain of discourse. The predicate calculus, also called the first-order calculus, extends the propositional calculus.

Instructions: Study slides 9.2.17 to 9.2.33, in 6.034 Notes: Section 9.2, which define the concepts of validity and satisfiability. These pertain to the "extent" to which a sentence is true. This reading applies to subsections 5.2.1.4.1–5.2.1.4.3.

Note: The reading of 5.2.1.4 applies to this subunit. See slide 9.2.18. Validity is the “strongest” truth, in that a sentence is valid if it is true for all interpretations.

5.2.1.4.2 Satisfiability

Note: The reading of 5.2.1.4 applies to this subunit also. See slide 9.2.19. A sentence is satisfiable if it is true in some interpretation, i.e., there is some interpretation for which it is true.

5.2.1.4.3 Unsatisfiability

Note: The reading of 5.2.1.4 applies to this subunit also. See slide 9.2.20. A sentence is unsatisfiable if there is no interpretation for which it is true, i.e., its value is false for all interpretations.
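
Validity, satisfiability, and unsatisfiability can all be checked mechanically for the propositional calculus by enumerating interpretations, as in this sketch (our own illustration, not code from the slides).

```java
import java.util.function.Predicate;

public class TruthTable {
    // Check satisfiability of a propositional sentence by enumerating all
    // 2^n interpretations of its n variables.
    public static boolean satisfiable(int n, Predicate<boolean[]> sentence) {
        for (int mask = 0; mask < (1 << n); mask++) {
            boolean[] v = new boolean[n];
            for (int i = 0; i < n; i++) v[i] = ((mask >> i) & 1) == 1;
            if (sentence.test(v)) return true;   // true in some interpretation
        }
        return false;   // unsatisfiable: true in no interpretation
    }

    // A sentence is valid exactly when its negation is unsatisfiable.
    public static boolean valid(int n, Predicate<boolean[]> sentence) {
        return !satisfiable(n, sentence.negate());
    }
}
```

For example, P or not-P is valid, P and not-P is unsatisfiable, and P and Q is satisfiable but not valid. Enumeration takes time exponential in the number of variables, which is one motivation for the rules of inference and resolution covered later.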

Instructions: Study slides 9.3.1 to 9.3.7, in 6.034 Notes: Section 9.3, which define the concepts of entailment and proof. Making a conclusion using entailment involves enumerating interpretations, which may not be feasible. Making a conclusion using a proof involves applying rules of inference.

Instructions: Study slides 9.3.8 to 9.3.11 in 6.034 Notes: Section 9.3. Entailment is a key concept for understanding the semantics of logic and for understanding inference. It considers sets of interpretations.

Instructions: Study slides 9.3.19 to 9.3.34, in 6.034 Notes: Section 9.3, which present rules of inference for use in proofs. Slide 9.3.34 presents resolution, a rule of inference amenable to programming a proof. This reading also applies to subsection 5.2.1.5.3.2.

Note: This subunit is covered by the reading of 5.2.1.5.3.1. Resolution is a deduction technique used in the propositional calculus and also in the predicate calculus, and was designed for use in computational applications.

Instructions: Study slides 9.4.7 to 9.4.13, in 6.034 Notes: Section 9.4, which present the syntax of the first-order predicate calculus or, as it is also called, first-order logic. This reading also applies to subsections 5.2.2.2.1.1–5.2.2.2.1.3.

Instructions: Study slides 9.4.14 to 9.4.16, in 6.034 Notes: Section 9.4, which complete the syntax of the first-order logic. Now we can write terms using constants, variables, and functions. We can write sentences using predicates, and combine predicates and sentences to form new sentences using the quantifiers and the operators. This reading also applies to subsections 5.2.2.2.2.1 and 5.2.2.2.2.2 below.

Instructions: Study slides 9.5.1 to 9.5.8, in 6.034 Notes: Section 9.5, which define interpretation as three mappings from the constant symbols, predicate symbols, and function symbols of an FOL to those of a Domain of Discourse.

Instructions: Study slides 9.5.9 to 9.5.18, in 6.034 Notes: Section 9.5. The slides define the truth of a sentence in an FOL relative to an interpretation. This reading also applies to subsection 5.2.2.3.3 on the semantics of quantifiers.

Note: The reading of 5.2.2.3.2 applies to this subunit. For quantifiers, the variable used in the quantifier is bound to values (or takes on values) from the interpretation, i.e., values in the domain of discourse.

Instructions: Study slides 9.7.1 to 9.7.24, in 6.034 Notes: Section 9.7, which show that if a knowledge base (KB) entails a sentence, then the sentence logically follows from the KB. However, they also show the impracticality of using entailment to prove a conclusion.

Instructions: Study slides 9.7.25 to 9.7.43, in 6.034 Notes: Section 9.7, which show that a practical way to use FOL to draw conclusions from FOL statements is by using proofs. But in order to prove a conclusion, the KB needs a set of axioms. Examples show that the axioms have to capture the essential information about a domain. If there are too few axioms, no false conclusions will be proved, but it may not be possible to prove some desired conclusions. These slides also apply to the next subsection, 5.2.2.5.3.

Instructions: Select the above link and read chapter 4, pages 57–72, which discusses the Semantic Web. Reasoning assumes a body of data from which inferences can be made. This reading discusses the Semantic Web as a source of data for use in programs, in particular for inference algorithms.

Machine Learning refers to computer programs that learn from data, for example by categorizing it in order to extract useful patterns. Machine Learning is closely related to statistics and modeling and has a wide range of applications, from natural language processing, search, robotics, and indexing to other pattern-recognition applications. This unit begins by defining Machine Learning, its applications, and a number of other important terms that will be used in this unit and beyond. We then cover the three main classes of Machine Learning: Supervised Learning, Semi-Supervised Learning, and Unsupervised Learning. You will come away with an introductory foundation in Machine Learning that will be useful for further academic study in the field.

Instructions: Study slides 4.1.1–4.1.7, in 6.034 Notes: Section 4.1, which introduce the topic of learning. Machine learning is learning using methods that can be implemented in software. Study slides 4.1.8–4.1.31, which are applicable to machine learning.

Instructions: Study slides 4.1.8–4.1.42, in 6.034 Notes: Section 4.1, which present learning in terms of learning a function (this is called supervised learning when some of the input/output pairs of the function are provided—supervised learning will be discussed in a later section). These slides present three learning methods: nearest neighbor, decision trees, and neural nets. Slide 4.1.42 lists some problems that machine learning has had some success in solving. Slides 4.1.14–4.1.24 give an example of predicting future behavior and are applicable to section 6.1.2.2 below.
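
Of the three methods, nearest neighbor is the simplest to sketch (our own illustration, not the slides' code): to classify a query point, find the training example closest to it and predict that example's label.

```java
import java.util.*;

public class NearestNeighbor {
    public record Example(double[] features, String label) {}

    // 1-nearest-neighbor: predict the label of the training example closest
    // to the query point, using squared Euclidean distance.
    // Assumes a non-empty training set and equal-length feature vectors.
    public static String classify(List<Example> training, double[] query) {
        Example best = null;
        double bestDist = Double.POSITIVE_INFINITY;
        for (Example e : training) {
            double d = 0;
            for (int i = 0; i < query.length; i++) {
                double diff = e.features()[i] - query[i];
                d += diff * diff;
            }
            if (d < bestDist) {
                bestDist = d;
                best = e;
            }
        }
        return best.label();
    }
}
```

In practice, k-nearest-neighbor votes among the k closest examples rather than using only the single closest one, which reduces sensitivity to noisy data.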

Note: Several models are used in the study of learning, including decision-tree models, probabilistic models such as Bayes', neuron models in neural nets, and statistical models such as Gaussian distribution models used in classifying data and in building training sets. These are discussed further in their respective sections of 6.2, Types of Machine Learning.

Instructions: Select the link and read the beginning of “Supervised learning.” There are many learning methods, each having strengths and weaknesses in particular applications, for particular data sets and situations. Issues that have to be contended with include: bias (a predicted value of a learning algorithm is systematically incorrect when trained on several different data sets) and variance (variation of a predicted value for a given input when trained on different data sets), complexity of functions to be predicted, complexity of data, noisy data, missing data, etc.

The “No Free Lunch Theorem” states, informally, that no one method works best for all applications and situations.

Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.

6.2.2.1 Difference From Supervised Learning

Note: In unsupervised learning the data is not labeled and function value pairs are not provided, as in supervised learning. Approaches to unsupervised learning try to discover patterns in the data. Some of the material on supervised learning applies to unsupervised learning.

6.2.2.2 Unsupervised Methods

Note: Approaches used for unsupervised learning include clustering, feature extraction, and neural nets.
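
Clustering can be sketched with k-means (our own one-dimensional illustration): alternately assign each point to its nearest centroid, then move each centroid to the mean of its assigned points.

```java
public class KMeans {
    // One-dimensional k-means sketch: repeat {assign points to the nearest
    // centroid; move each centroid to the mean of its points}.
    public static double[] cluster(double[] points, double[] initialCentroids,
                                   int iterations) {
        double[] centroids = initialCentroids.clone();
        for (int it = 0; it < iterations; it++) {
            double[] sum = new double[centroids.length];
            int[] count = new int[centroids.length];
            for (double p : points) {
                int nearest = 0;                     // assignment step
                for (int c = 1; c < centroids.length; c++) {
                    if (Math.abs(p - centroids[c]) < Math.abs(p - centroids[nearest])) {
                        nearest = c;
                    }
                }
                sum[nearest] += p;
                count[nearest]++;
            }
            for (int c = 0; c < centroids.length; c++) {  // update step
                if (count[c] > 0) centroids[c] = sum[c] / count[c];
            }
        }
        return centroids;
    }
}
```

With the points {1, 2, 9, 10} and starting centroids {0, 5}, the centroids converge to 1.5 and 9.5. The result depends on the initial centroids, which is one reason k-means is usually run several times from different starting points.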

Instructions: Select the link and read the beginning of “Unsupervised learning.” There are many learning methods, each having strengths and weaknesses in particular applications, for particular data sets and situations. Some issues that have to be contended with include: complexity of functions to be predicted, complexity of data, noisy data, missing data, etc.

As with supervised learning (see 6.2.1.2 above), the "No Free Lunch Theorem" applies to unsupervised learning.

Instructions: Select the link and scroll down to Lecture 15. Watch the video lecture by Professor Ng, which continues the presentation on Principal Component Analysis (PCA) and then goes on to introduce Independent Component Analysis (ICA).

Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.

Instructions: Select the link and read the summary of semi-supervised learning.

Note: Semi-supervised learning involves labeled and unlabeled data. It is a practical approach, in that it is comparatively inexpensive to obtain a large amount of unlabeled data and a small amount of labeled data, which together result in improved learning accuracy (over unsupervised learning).

Instructions: Select the link and read section 1.3, which describes joint distributions. Joint distributions from Probability Theory are useful for studying semi-supervised learning. Two statistical techniques that are also helpful are maximum likelihood and expectation maximization, both of which are used to estimate the parameters of statistical models.

This unit will provide you with a basic introduction to Natural Language Understanding (NLU) in AI. Syntax, semantics, and ambiguity of natural language are discussed. Simple examples are presented. Some of what we have seen, in search and in learning, is applied in NLU. Natural language processing and understanding is a large field of research and has entire courses devoted to it. So, in this introduction, our objective is simply to introduce the problems and approaches.

Instructions: Read slides 12.1.1–12.1.7, in 6.034 Notes: Section 12.1, on an architecture for understanding natural language. Understanding is connecting a natural language sentence to knowledge about the world.

Instructions: Read slides 12.1.1–12.1.7, in 6.034 Notes: Section 12.1, on the different types of grammars. Natural language is not context-free, but a practical approach to NLU can still be made using context-free languages, because they can express much of the structure of natural language.

Instructions: Read slides 12.2.19–12.2.50, in 6.034 Notes: Section 12.2, on dependencies that are far apart, called gaps. Remember that NLU aims to capture the meaning of a sentence, including relationships. If a relative clause occurs, it refers to some entity mentioned earlier or later; this reference is called a gap.

Instructions: Read slides 12.3.1–12.3.6, in 6.034 Notes: Section 12.3, which address obtaining meaning just from the syntax of a sentence; use of context information to add to the meaning will come later. (Recall the steps of the NLU architecture.)

Instructions: Select the above link and read chapter 9, pages 137–176, which gives many practical examples of NLP from a programming perspective. In addition, read chapter 10, pages 177–206, at the same link, which provides additional discussion of extracting semantic information from text and databases.

Robotics draws upon and integrates previous topics, as well as information and techniques from other disciplines, including many engineering fields, physics, controls, probability and statistics, differential equations, linguistics, and many applications, e.g., manufacturing, sensors, medical applications, etc. Some of the contributions of AI to robotics are search algorithms, representation and models for the robot world, inference, learning, and AI programming features and their integration.

Instructions: Select the PDF link for Chapter 1: Introduction, and read the Asada material, which gives a mechanical engineering perspective on robotics.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.

Instructions: Select the link and scroll down to Lecture 9. Watch the guest video lecture by Professor Hager, which introduces vision. This video gives us an appreciation of the need for integrating a variety of skills and applications for robots to perform various tasks.

Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.

Instructions: Select the link and scroll down to Lecture 13. Watch the first and second parts of the video lecture (“Intro” and “Control–Overview”), which introduce the topic of control. This video gives us an appreciation of the difficulties of coordination and motion control.

Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.

Instructions: You must be logged into your Saylor Foundation School account in order to access this exam. If you do not yet have an account, you will be able to create one, free of charge, after clicking the link.