A programming language is an artificial language designed to express computations that can be performed by a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine, to express algorithms precisely, or as a mode of human communication.

Many programming languages have some form of written specification of their syntax (form) and semantics (meaning). Some languages are defined by a specification document. For example, the C programming language is specified by an ISO Standard. Other languages, such as Perl, have a dominant implementation that is used as a reference.

The earliest programming languages predate the invention of the computer, and were used to direct the behavior of machines such as Jacquard looms and player pianos. Thousands of different programming languages have been created, mainly in the computer field, with many more being created every year. Most programming languages describe computation in an imperative style, i.e., as a sequence of commands, although some languages, such as those that support functional programming or logic programming, use alternative forms of description.

Definitions

A programming language is a notation for writing programs, which are specifications of a computation or algorithm.[1] Some, but not all, authors restrict the term “programming language” to those languages that can express all possible algorithms.[1][2] Traits often considered important for what constitutes a programming language include:

Function and target: A computer programming language is a language[3] used to write computer programs, which involve a computer performing some kind of computation[4] or algorithm and possibly control external devices such as printers, disk drives, robots,[5] and so on. For example, PostScript programs are frequently created by another program to control a computer printer or display. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language.[6] In most practical contexts, a programming language involves a computer; consequently programming languages are usually defined and studied this way.[7] Programming languages differ from natural languages in that natural languages are only used for interaction between people, while programming languages also allow humans to communicate instructions to machines.

Abstractions: Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. The practical necessity that a programming language support adequate abstractions is expressed by the abstraction principle;[8] this principle is sometimes formulated as a recommendation to the programmer to make proper use of such abstractions.[9]

The term computer language is sometimes used interchangeably with programming language.[20] However, the usage of both terms varies among authors, including the exact scope of each. One usage describes programming languages as a subset of computer languages.[21] In this vein, languages used in computing that have a different goal than expressing computer programs are generically designated computer languages. For instance, markup languages are sometimes referred to as computer languages to emphasize that they are not meant to be used for programming.[22] Another usage regards programming languages as theoretical constructs for programming abstract machines, and computer languages as the subset thereof that runs on physical computers, which have finite hardware resources.[23] John C. Reynolds emphasizes that formal specification languages are just as much programming languages as are the languages intended for execution. He also argues that textual and even graphical input formats that affect the behavior of a computer are programming languages, despite the fact that they are commonly not Turing-complete, and remarks that ignorance of programming language concepts is the reason for many flaws in input formats.[24]

Elements

All programming languages have some primitive building blocks for the description of data and the processes or transformations applied to them (like the addition of two numbers or the selection of an item from a collection). These primitives are defined by syntactic and semantic rules which describe their structure and meaning respectively.

A programming language’s surface form is known as its syntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages. On the other hand, there are some programming languages which are more graphical in nature, using visual relationships between symbols to specify a program.

The syntax of a language describes the possible combinations of symbols that form a syntactically correct program. The meaning given to a combination of symbols is handled by semantics (either formal or hard-coded in a reference implementation). Since most languages are textual, this article discusses textual syntax.

As an example, consider a simple grammar, based on Lisp, which specifies that:

a number is an unbroken sequence of one or more decimal digits, optionally preceded by a plus or minus sign;

a symbol is a letter followed by zero or more of any characters (excluding whitespace); and

a list is a matched pair of parentheses, with zero or more expressions inside it.

The following are examples of well-formed token sequences in this grammar: '12345', '()', '(a b c232 (1))'
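
To make the lexical rules concrete, the following C++ sketch (the regular expressions and names are ours, chosen to mirror the rules above) classifies individual tokens as numbers or symbols; recognizing lists would additionally require a recursive parser for the matched-parentheses rule, which this sketch omits.

#include <iostream>
#include <regex>
#include <string>

// Lexical rules mirroring the grammar above (regexes chosen by us):
static const std::regex number_re(R"([+-]?\d+)");     // optional sign, then one or more digits
static const std::regex symbol_re(R"([A-Za-z]\S*)");  // a letter, then any non-whitespace characters

const char* classify(const std::string& token) {
    if (std::regex_match(token, number_re)) return "number";
    if (std::regex_match(token, symbol_re)) return "symbol";
    return "not a valid token";
}

int main() {
    for (std::string t : {"12345", "+42", "c232", "2bad"}) {
        std::cout << t << " -> " << classify(t) << "\n";  // "2bad" matches neither rule
    }
}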

Not all syntactically correct programs are semantically correct. Many syntactically correct programs are nonetheless ill-formed per the language’s rules, and may (depending on the language specification and the soundness of the implementation) result in an error on translation or execution. In some cases, such programs may exhibit undefined behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by the person who wrote it.

Using natural language as an example, it may not be possible to assign a meaning to a grammatically correct sentence or the sentence may be false:

“John is a married bachelor.” is grammatically well-formed but expresses a meaning that cannot be true.

The following C language fragment is syntactically correct, but performs an operation that is not semantically defined (because p is a null pointer, the operations p->real and p->im have no meaning):

/* 'complex' is assumed to be a structure type with members 'real' and 'im' */
complex *p = NULL;
complex abs_p = sqrt(p->real * p->real + p->im * p->im); /* dereferences a null pointer */

If the type declaration on the first line were omitted, the program would trigger an error on compilation, as the variable “p” would not be defined. But the program would still be syntactically correct, since type declarations provide only semantic information.
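
For contrast, the fragment becomes semantically well-defined once the pointer is tested before being dereferenced; a minimal sketch continuing the C fragment above (the complex structure type is still assumed):

complex *p = NULL;
/* ... p may be assigned a valid address elsewhere ... */
if (p != NULL) {
    /* safe: on this branch p is known to point at a valid object */
    complex abs_p = sqrt(p->real * p->real + p->im * p->im);
}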

The grammar needed to specify a programming language can be classified by its position in the Chomsky hierarchy. The syntax of most programming languages can be specified using a Type-2 grammar, i.e., they are context-free grammars.[25] Some languages, including Perl and Lisp, contain constructs that allow execution during the parsing phase. Languages that have constructs that allow the programmer to alter the behavior of the parser make syntax analysis an undecidable problem, and generally blur the distinction between parsing and execution.[26] In contrast to Lisp’s macro system and Perl’s BEGIN blocks, which may contain general computations, C macros are merely string replacements, and do not require code execution.[27]

Static semantics

The static semantics defines restrictions on the structure of valid texts that are hard or impossible to express in standard syntactic formalisms.[1] For compiled languages, static semantics essentially include those semantic rules that can be checked at compile time. Examples include checking that every identifier is declared before it is used (in languages that require such declarations) or that the labels on the arms of a case statement are distinct.[28] Many important restrictions of this type, like checking that identifiers are used in the appropriate context (e.g. not adding an integer to a function name), or that subroutine calls have the appropriate number and type of arguments, can be enforced by defining them as rules in a logic called a type system. Other forms of static analysis like data flow analysis may also be part of static semantics. Newer programming languages like Java and C# have definite assignment analysis, a form of data flow analysis, as part of their static semantics.
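
As a concrete illustration, a compiler enforces declaration-before-use at compile time; the following C++ sketch (names are ours) is rejected before the program ever runs:

int main() {
    n = 5;      // compile-time error: 'n' has not been declared yet
    int n = 0;  // this declaration comes too late for the line above
    return n;
}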

Type system

A type system defines how a programming language classifies values and expressions into types, how it can manipulate those types and how they interact. The goal of a type system is to verify and usually enforce a certain level of correctness in programs written in that language by detecting certain incorrect operations. Any decidable type system involves a trade-off: while it rejects many incorrect programs, it can also prohibit some correct, albeit unusual, programs. To mitigate this downside, a number of languages have type loopholes, usually unchecked casts that may be used by the programmer to explicitly allow a normally disallowed operation between different types. In most typed languages, the type system is used only to type check programs, but a number of languages, usually functional ones, perform type inference, which relieves the programmer from writing type annotations. The formal design and study of type systems is known as type theory.
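
As a hedged illustration in C++ (variable names are ours), reinterpret_cast is such an unchecked cast: it explicitly permits a normally disallowed mixing of types, here viewing a pointer's bits as a plain integer.

#include <cstdint>
#include <cstdio>

int main() {
    int x = 42;
    // Unchecked cast: treat the pointer's bits as an integer, an operation
    // the type system would otherwise disallow.
    std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(&x);
    std::printf("%ju\n", static_cast<std::uintmax_t>(addr));
    return 0;
}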

Typed versus untyped languages

A language is typed if the specification of every operation defines types of data to which the operation is applicable, with the implication that it is not applicable to other types.[29] For example, “this text between the quotes” is a string. In most programming languages, dividing a number by a string has no meaning. Most modern programming languages will therefore reject any program attempting to perform such an operation. In some languages, the meaningless operation will be detected when the program is compiled (“static” type checking), and rejected by the compiler, while in others, it will be detected when the program is run (“dynamic” type checking), resulting in a runtime exception.
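
A sketch of the statically checked case, in C++ with invented names; the offending line is shown commented out because the compiler refuses to translate it:

#include <string>

int main() {
    std::string s = "this text between the quotes";
    int n = 10;
    // The next line, if uncommented, is rejected when the program is compiled:
    // no division operator accepts an int and a std::string.
    // int q = n / s;   // compile-time ("static") type error
    return n;
}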

Single-type languages are a special case of typed languages. These are often scripting or markup languages, such as REXX or SGML, and have only one data type—most commonly character strings, which are used for both symbolic and numeric data.

In contrast, an untyped language, such as most assembly languages, allows any operation to be performed on any data, which are generally considered to be sequences of bits of various lengths.[29] High-level languages which are untyped include BCPL and some varieties of Forth.

In practice, while few languages are considered typed from the point of view of type theory (verifying or rejecting all operations), most modern languages offer a degree of typing.[29] Many production languages provide means to bypass or subvert the type system.

Static versus dynamic typing

In static typing all expressions have their types determined prior to the program being run (typically at compile-time). For example, 1 and (2+2) are integer expressions; they cannot be passed to a function that expects a string, or stored in a variable that is defined to hold dates.[29]

Statically typed languages can be either manifestly typed or type-inferred. In the first case, the programmer must explicitly write types at certain textual positions (for example, at variable declarations). In the second case, the compiler infers the types of expressions and declarations based on context. Most mainstream statically typed languages, such as C++, C# and Java, are manifestly typed. Complete type inference has traditionally been associated with less mainstream languages, such as Haskell and ML. However, many manifestly typed languages support partial type inference; for example, Java and C# both infer types in certain limited cases.[30]
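
A brief C++ sketch of the two styles side by side (C++ is used as a stand-in here; a language such as Haskell or ML would infer all of the types): most declarations are manifestly typed, while the auto keyword requests the limited, local form of inference mentioned above.

#include <string>
#include <vector>

int main() {
    int count = 3;                        // manifest typing: the type is spelled out
    std::vector<std::string> names(count, "x");
    auto it = names.begin();              // partial inference: the compiler deduces
                                          // the iterator's verbose type
    auto doubled = count * 2;             // inferred as int from the initializer
    return doubled + static_cast<int>(it->size());
}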

Dynamic typing, also called latent typing, determines the type-safety of operations at runtime; in other words, types are associated with runtime values rather than textual expressions.[29] As with type-inferred languages, dynamically typed languages do not require the programmer to write explicit type annotations on expressions. Among other things, this may permit a single variable to refer to values of different types at different points in the program execution. However, type errors cannot be automatically detected until a piece of code is actually executed, making debugging more difficult. Ruby, Lisp, JavaScript, and Python are dynamically typed.
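
C++ is statically typed, but C++17's std::any gives a loose flavor of the dynamic approach: the type tag travels with the runtime value, and a mismatch surfaces only when the code actually executes (a sketch; names are ours):

#include <any>
#include <iostream>
#include <string>

int main() {
    std::any v = 42;                     // v holds an int at this point
    v = std::string("now a string");     // the same variable now holds a string
    try {
        std::cout << std::any_cast<int>(v);   // wrong type: detected only at runtime
    } catch (const std::bad_any_cast&) {
        std::cout << "runtime type error\n";
    }
}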

Weak and strong typing

Weak typing allows a value of one type to be treated as another, for example treating a string as a number.[29] This can occasionally be useful, but it can also allow some kinds of program faults to go undetected at compile time and even at runtime.

Strong typing prevents the above. An attempt to perform an operation on the wrong type of value raises an error.[29] Strongly typed languages are often termed type-safe or safe.

An alternative definition for “weakly typed” refers to languages, such as Perl and JavaScript, which permit a large number of implicit type conversions. In JavaScript, for example, the expression 2 * x implicitly converts x to a number, and this conversion succeeds even if x is null, undefined, an Array, or a string of letters. Such implicit conversions are often useful, but they can mask programming errors.
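
A milder form of the same phenomenon can be sketched in C++, whose built-in implicit conversions silently change a value's representation; convenient, but capable of hiding mistakes (values are arbitrary):

#include <iostream>

int main() {
    double d = 3.99;
    int i = d;          // implicit conversion silently drops the fraction: i == 3
    char c = 'A';
    int code = c + 1;   // 'A' is implicitly treated as its character code: 66 in ASCII
    bool flag = 0.25;   // any nonzero number implicitly converts to true
    std::cout << i << " " << code << " " << flag << "\n";   // prints: 3 66 1
}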

Strong and static are now generally considered orthogonal concepts, but usage in the literature differs. Some use the term strongly typed to mean strongly, statically typed, or, even more confusingly, to mean simply statically typed. Thus C has been called both strongly typed and weakly, statically typed.[31][32]

Execution semantics

Once data has been specified, the machine must be instructed to perform operations on the data. For example, the semantics may define the strategy by which expressions are evaluated to values, or the manner in which control structures conditionally execute statements. The execution semantics (also known as dynamic semantics) of a language defines how and when the various constructs of a language should produce a program behavior. There are many ways of defining execution semantics. Natural language is often used to specify the execution semantics of languages commonly used in practice. A significant amount of academic research went into formal semantics of programming languages, which allow execution semantics to be specified in a formal manner. Results from this field of research have seen limited application to programming language design and implementation outside academia.

Core library

Most programming languages have an associated core library (sometimes known as the ‘standard library’, especially if it is included as part of the published language standard), which is conventionally made available by all implementations of the language. Core libraries typically include definitions for commonly used algorithms, data structures, and mechanisms for input and output.

A language’s core library is often treated as part of the language by its users, although the designers may have treated it as a separate entity. Many language specifications define a core that must be made available in all implementations, and in the case of standardized languages this core library may be required. The line between a language and its core library therefore differs from language to language. Indeed, some languages are designed so that the meanings of certain syntactic constructs cannot even be described without referring to the core library. For example, in Java, a string literal is defined as an instance of the java.lang.String class; similarly, in Smalltalk, an anonymous function expression (a “block”) constructs an instance of the library’s BlockContext class. Conversely, Scheme contains multiple coherent subsets that suffice to construct the rest of the language as library macros, and so the language designers do not even bother to say which portions of the language must be implemented as language constructs, and which must be implemented as parts of a library.
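
C++ offers a small analogue of this entanglement: since C++14, the standard library's s literal suffix defines the meaning of a string-literal expression in terms of the library type std::string (a minimal sketch):

#include <string>

using namespace std::string_literals;   // the standard library supplies the 's' suffix

int main() {
    auto raw = "hello";    // built-in meaning: decays to const char*
    auto str = "hello"s;   // library-defined meaning: constructs a std::string
    (void)raw;             // raw is unused beyond this sketch
    return static_cast<int>(str.size()) - 5;   // 0
}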

Design and implementation

Programming languages share properties with natural languages related to their purpose as vehicles for communication, having a syntactic form separate from their semantics, and showing language families of related languages branching one from another.[3] But as artificial constructs, they also differ in fundamental ways from languages that have evolved through usage. A significant difference is that a programming language can be fully described and studied in its entirety, since it has a precise and finite definition.[33] By contrast, natural languages have changing meanings given by their users in different communities. While constructed languages are also artificial languages designed from the ground up with a specific purpose, they lack the precise and complete semantic definition that a programming language has.

Many languages have been designed from scratch, altered to meet new needs, combined with other languages, and eventually fallen into disuse. Although there have been attempts to design one “universal” programming language that serves all purposes, all of them have failed to be generally accepted as filling this role.[34] The need for diverse programming languages arises from the diversity of contexts in which languages are used:

Programs range from tiny scripts written by individual hobbyists to huge systems written by hundreds of programmers.

Programmers range in expertise from novices who need simplicity above all else, to experts who may be comfortable with considerable complexity.

Programs may be written once and not change for generations, or they may undergo continual modification.

Finally, programmers may simply differ in their tastes: they may be accustomed to discussing problems and expressing them in a particular language.

One common trend in the development of programming languages has been to add greater ability to solve problems at a higher level of abstraction. The earliest programming languages were tied very closely to the underlying hardware of the computer. As new programming languages have developed, features have been added that let programmers express ideas that are more remote from simple translation into underlying hardware instructions. Because programmers are less tied to the complexity of the computer, their programs can do more computing with less effort from the programmer. This lets them write more functionality per unit of time.[35]

Natural language processors have been proposed as a way to eliminate the need for a specialized language for programming. However, this goal remains distant and its benefits are open to debate. Edsger W. Dijkstra took the position that the use of a formal language is essential to prevent the introduction of meaningless constructs, and dismissed natural language programming as “foolish”.[36] Alan Perlis was similarly dismissive of the idea.[37] Hybrid approaches have been taken in Structured English and SQL.

A language’s designers and users must construct a number of artifacts that govern and enable the practice of programming. The most important of these artifacts are the language specification and implementation.

Specification

The specification of a programming language is intended to provide a definition that the language users and the implementors can use to determine whether the behavior of a program is correct, given its source code.

A programming language specification can take several forms, including the following:

A description of the behavior of a translator for the language (e.g., the C++ and Fortran specifications). The syntax and semantics of the language have to be inferred from this description, which may be written in a natural or a formal language.

Implementation

An implementation of a programming language provides a way to write programs in that language and execute them on one or more configurations of hardware and software. There are, broadly, two approaches to programming language implementation: compilation and interpretation. It is generally possible to implement a language using either technique.

The output of a compiler may be executed by hardware or a program called an interpreter. In some implementations that make use of the interpreter approach there is no distinct boundary between compiling and interpreting. For instance, some implementations of BASIC compile and then execute the source a line at a time.

Programs that are executed directly on the hardware usually run much faster than those that are interpreted in software.

One technique for improving the performance of interpreted programs is just-in-time compilation. Here the virtual machine, just before execution, translates the blocks of bytecode that are about to be used into machine code, for direct execution on the hardware.

Usage

Thousands of different programming languages have been created, mainly in the computing field.[41] Programming languages differ from most other forms of human expression in that they require a greater degree of precision and completeness. When using a natural language to communicate with other people, human authors and speakers can be ambiguous and make small errors, and still expect their intent to be understood. However, figuratively speaking, computers “do exactly what they are told to do”, and cannot “understand” what code the programmer intended to write. The combination of the language definition, a program, and the program’s inputs must fully specify the external behavior that occurs when the program is executed, within the domain of control of that program.

A programming language provides a structured mechanism for defining pieces of data, and the operations or transformations that may be carried out automatically on that data. A programmer uses the abstractions present in the language to represent the concepts involved in a computation. These concepts are represented as a collection of the simplest elements available (called primitives).[42] Programming is the process by which programmers combine these primitives to compose new programs, or adapt existing ones to new uses or a changing environment.

Measuring language usage

It is difficult to determine which programming languages are most widely used, and what usage means varies by context. One language may occupy the greater number of programmer hours, another may account for more lines of code, and a third may consume the most CPU time. Some languages are very popular for particular kinds of applications. For example, COBOL is still strong in the corporate data center, often on large mainframes; FORTRAN in engineering applications; C in embedded applications and operating systems; and other languages are regularly used to write many different kinds of applications.

Various methods of measuring language popularity, each subject to a different bias over what is measured, have been proposed:

counting the number of job advertisements that mention the language[43]

Taxonomies

There is no overarching classification scheme for programming languages. A given programming language does not usually have a single ancestor language. Languages commonly arise by combining the elements of several predecessor languages with new ideas in circulation at the time. Ideas that originate in one language will diffuse throughout a family of related languages, and then leap suddenly across familial gaps to appear in an entirely different family.

The task is further complicated by the fact that languages can be classified along multiple axes. For example, Java is both an object-oriented language (because it encourages object-oriented organization) and a concurrent language (because it contains built-in constructs for running multiple threads in parallel). Python is an object-oriented scripting language.

In broad strokes, programming languages divide into programming paradigms and a classification by intended domain of use. Traditionally, programming languages have been regarded as describing computation in terms of imperative sentences, i.e. issuing commands. These are generally called imperative programming languages. A great deal of research in programming languages has been aimed at blurring the distinction between a program as a set of instructions and a program as an assertion about the desired answer, which is the main feature of declarative programming.[47] More refined paradigms include procedural programming, object-oriented programming, functional programming, and logic programming; some languages are hybrids of paradigms or multi-paradigmatic. An assembly language is not so much a paradigm as a direct model of an underlying machine architecture. By purpose, programming languages might be considered general purpose, system programming languages, scripting languages, domain-specific languages, or concurrent/distributed languages (or a combination of these).[48] Some general purpose languages were designed largely with educational goals.[49]

A programming language may also be classified by factors unrelated to programming paradigm. For instance, most programming languages use English-language keywords, while a minority do not. Another classification is whether a language is esoteric, i.e., designed as a proof of concept or for amusement rather than for practical use.

History

A selection of textbooks that teach programming, in languages both popular and obscure. These are only a few of the thousands of programming languages and dialects that have been designed in history.

Early developments

The first programming languages predate the modern computer. The 19th century had “programmable” looms and player piano scrolls which implemented what are today recognized as examples of domain-specific languages. By the beginning of the twentieth century, punch cards encoded data and directed mechanical processing. In the 1930s and 1940s, the formalisms of Alonzo Church’s lambda calculus and Alan Turing’s Turing machines provided mathematical abstractions for expressing algorithms; the lambda calculus remains influential in language design.[50]

Refinement

The period from the 1960s to the late 1970s brought the development of the major language paradigms now in use, though many aspects were refinements of ideas in the very first third-generation programming languages.

The 1960s and 1970s also saw expansion of techniques that reduced the footprint of a program as well as improved productivity of the programmer and user. The card deck for an early 4GL was much smaller than a 3GL deck expressing the same functionality.

Consolidation and growth

The 1980s were years of relative consolidation. C++ combined object-oriented and systems programming. The United States government standardized Ada, a systems programming language derived from Pascal and intended for use by defense contractors. In Japan and elsewhere, vast sums were spent investigating so-called “fifth generation” languages that incorporated logic programming constructs.[59] The functional languages community moved to standardize ML and Lisp. Rather than inventing new paradigms, all of these movements elaborated upon the ideas invented in the previous decade.

One important trend in language design for programming large-scale systems during the 1980s was an increased focus on the use of modules, or large-scale organizational units of code. Modula-2, Ada, and ML all developed notable module systems in the 1980s, although other languages, such as PL/I, already had extensive support for modular programming. Module systems were often wedded to generic programming constructs.[60]

The rapid growth of the Internet in the mid-1990s created opportunities for new languages. Perl, originally a Unix scripting tool first released in 1987, became common in dynamic websites. Java came to be used for server-side programming, and bytecode virtual machines became popular again in commercial settings with their promise of “Write once, run anywhere” (UCSD Pascal had been popular for a time in the early 1980s). These developments were not fundamentally novel, rather they were refinements to existing languages and paradigms, and largely based on the C family of programming languages.

Programming language evolution continues, in both industry and research. Current directions include security and reliability verification, new kinds of modularity (mixins, delegates, aspects), and database integration such as Microsoft’s LINQ.

The 4GLs are examples of languages which are domain-specific, such as SQL, which manipulates and returns sets of data rather than the scalar values which are canonical to most programming languages. Perl, for example, with its ‘here document’, can hold multiple 4GL programs, as well as multiple JavaScript programs, in part of its own Perl code and use variable interpolation in the ‘here document’ to support multi-language programming.[61]

High-level programming languages, while simple compared to human languages, are more complex than the languages the computer actually understands, called machine languages. Each different type of CPU has its own unique machine language.

Lying between machine languages and high-level languages are languages called assembly languages. Assembly languages are similar to machine languages, but they are much easier to program in because they allow a programmer to substitute names for numbers. Machine languages consist of numbers only.

See compile and interpreter for more information about these two methods.

The question of which language is best is one that consumes a lot of time and energy among computer professionals. Every language has its strengths and weaknesses. For example, FORTRAN is a particularly good language for processing numerical data, but it does not lend itself very well to organizing large programs. Pascal is very good for writing well-structured and readable programs, but it is not as flexible as the C programming language. C++ embodies powerful object-oriented features, but it is complex and difficult to learn.

The choice of which language to use depends on the type of computer the program is to run on, what sort of program it is, and the expertise of the programmer.

Language in which a computer programmer writes instructions for a computer to execute. Some languages, such as COBOL, FORTRAN, Pascal, and C, are known as procedural languages because they use a sequence of commands to specify how the machine is to solve a problem. Others, such as LISP, are functional, in that programming is done by invoking procedures (sections of code executed within a program). Languages that support object-oriented programming take the data to be manipulated as their point of departure. Programming languages can also be classified as high-level or low-level. Low-level languages address the computer in a way that it can understand directly, but they are very far from human language. High-level languages deal in concepts that humans devise and can understand, but they must be translated by means of a compiler into language the computer understands.

The different notations used to communicate algorithms to a computer. A computer executes a sequence of instructions (a program) in order to perform some task. In spite of much written about computers being electronic brains or having artificial intelligence, it is still necessary for humans to convey this sequence of instructions to the computer before the computer can perform the task. The set of instructions and the order in which they have to be performed is known as an algorithm. The result of expressing the algorithm in a programming language is called a program. The process of writing the algorithm using a programming language is called programming, and the person doing this is the programmer. See also Algorithm.

In order for a computer to execute the instructions indicated by a program, the program needs to be stored in the primary memory of the computer. Each instruction of the program may occupy one or more memory locations. Instructions are stored as a sequence of binary numbers (sequences of zeros and ones), where each number may indicate the instruction to be executed (the operator) or the pieces of data (operands) on which the instruction is carried out. Instructions that the computer can understand directly are said to be written in machine language. Programmers who design computer algorithms have difficulty in expressing the individual instructions of the algorithm as a sequence of binary numbers. To alleviate this problem, people who develop algorithms may choose a programming language. Since the language used by the programmer and the language understood by the computer are different, another computer program called a compiler translates the program written in a programming language into an equivalent sequence of instructions that the computer is able to understand and carry out. See also Computer storage technology.

Machine language

For the first machines in the 1940s, programmers had no choice but to write in the sequences of digits that the computer executed. For example, assume we want to compute the absolute value of A + B − C, where A is the value at machine address 3012, B is the value at address 3013, and C is the value at address 3014, and then store this value at address 3015.

It should be clear that programming in this manner is difficult and fraught with errors. Explicit memory locations must be written, and it is not always obvious if simple errors are present. For example, at location 02347, writing 101… instead of 111… would compute |A + B + C| rather than what was desired. This is not easy to detect.

Assembly language

Since each component of a program stands for an object that the programmer understands, using its name rather than numbers should make it easier to program. By naming all locations with easy-to-remember names, and by using symbolic names for machine instructions, some of the difficulties of machine programming can be eliminated. A relatively simple program called an assembler converts this symbolic notation into an equivalent machine language program.

The symbolic nature of assembly language greatly eased the programmer’s burden, but programs were still very hard to write. Mistakes were still common. Programmers were forced to think in terms of the computer’s architecture rather than in the domain of the problem being solved.

High-level language

The first programming languages were developed in the late 1950s. The concept was that if we want to compute |A + B − C|, and store the result in a memory location called D, all we had to do was write D = |A + B − C| and let a computer program, the compiler, convert that into the sequences of numbers that the computer could execute. FORTRAN (an acronym for Formula Translation) was the first major language in this period.

FORTRAN statements were patterned after mathematical notation. In mathematics the = symbol implies that both sides of the equation have the same value. However, in FORTRAN and some other languages, the equal sign is known as the assignment operator. The action carried out by the computer when it encounters this operator is, “Make the variable named on the left of the equal sign have the same value as the expression on the right.” Because of this, in some early languages the statement would have been written as |A + B − C| → D to imply movement or change, but the use of → as an assignment operator has all but disappeared.

The compiler for FORTRAN converts that arithmetic statement into an equivalent machine language sequence. In this case, we did not care what addresses the compiler used for the instructions or data, as long as we could associate the names A, B, C, and D with the data values we were interested in.
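
For comparison, the same computation in a C-family language today is a single assignment statement; a sketch with arbitrary values standing in for the contents of addresses 3012-3014:

#include <cstdlib>   // std::abs

int main() {
    int a = 7, b = 2, c = 12;      // stand-ins for the values at A, B, C
    int d = std::abs(a + b - c);   // assignment: d now holds |A + B - C|, i.e. 3
    return d;
}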

Structure of programming languages

Programs written in a programming language contain three basic components: (1) a mechanism for declaring data objects to contain the information used by the program; (2) data operations that provide for transforming one data object into another; (3) an execution sequence that determines how execution proceeds from start to finish.

Data declarations

Data objects can be constants or variables. A constant always has a specific value. Thus the constant 42 always has the integer value of forty-two and can never have another value. On the other hand, we can define variables with symbolic names. The declaration of variable A as an integer informs the compiler that A should be given a memory location much like the way the variable A in example (2) was given the machine address 03012. The program is given the option of changing the value stored at this memory location as the program executes.

Each data object is defined to be of a specific type. The type of a data object is the set of values the object may have. Types can generally be scalar or aggregate. An object declared to be a scalar object is not divisible into smaller components, and generally it represents the basic data types executable on the physical computer. In a data declaration, each data object is given a name and a type. The compiler will choose what machine location to assign for the declared name.

Data operations

Data operations provide for setting the values into the locations allocated for each declared data variable. In general this is accomplished by a three-step process: a set of operators is defined for transforming the value of each data object, an expression is written for performing several such operations, and an assignment is made to change the value of some data object.

For each data type, languages define a set of operations on objects of that type. For the arithmetic types, there are the usual operations of addition, subtraction, multiplication, and division. Other operations may include exponentiation (raising to a power), as well as various simple functions such as modulo or remainder (when dividing one integer by another). There may be other binary operations involving the internal format of the data, such as bitwise and, or, exclusive or, and not functions. Usually there are relational operations (for example, equal, not equal, greater than, less than) whose result is a Boolean value of true or false. There is no limit to the number of operations allowed, except that the programming language designer has to decide between the simplicity and smallness of the language definition versus the ease of using the language.
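
A short C++ sketch of these categories (declarations, then arithmetic, modulo, relational, and bitwise operations; the values are arbitrary):

int main() {
    int x = 7, y = 3;            // data declarations: two integer variables

    int sum  = x + y;            // arithmetic: 10
    int quot = x / y;            // integer division: 2
    int rem  = x % y;            // modulo (remainder): 1
    bool gt  = x > y;            // relational: true
    int band = x & y;            // bitwise and: 0b111 & 0b011 == 3
    int bxor = x ^ y;            // bitwise exclusive or: 0b100 == 4

    return sum + quot + rem + gt + band + bxor;   // 10+2+1+1+3+4 == 21
}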

Execution sequence

The purpose of a program is to manipulate some data in order to produce an answer. While the data operations provide for this manipulation, there must be a mechanism for deciding which expressions to execute in order to generate the desired answer. That is, an algorithm must trace a path through a series of expressions in order to arrive at an answer. Programming languages have developed three forms of execution sequencing: (1) control structures for determining execution sequencing within a procedure; (2) interprocedural communication between procedures; and (3) inheritance, or the automatic passing of information between two procedures.

Corrado Böhm and Giuseppe Jacopini showed in 1966 that a programming language needs only three basic statements for control structures: an assignment statement, an IF statement, and a looping construct. Anything else can simplify programming a solution, but is not necessary. If we add an input and an output statement, we have all that we need for a programming language. Languages execute statements sequentially, with the following variations to this rule.

IF statement. Most languages include the IF statement. In the IF-THEN statement, the expression is evaluated, and if the value is true, then Statement1 is executed next. If the value is false, then the statement after the IF statement is the next one to execute. The IF-THEN-ELSE statement is similar, except that specific true and false options are given to execute next. After executing either the THEN or ELSE part, the statement following the IF statement is the next one to execute.

The usual looping constructs are the WHILE statement and the REPEAT statement. Although only one is necessary, languages usually have both.
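
A sketch in C++ of the three basic constructs working together: assignment, a WHILE loop, and an IF-THEN-ELSE, here summing the integers from 1 to 5:

#include <iostream>

int main() {
    int n = 5;                    // assignment
    int sum = 0;
    while (n > 0) {               // looping construct
        sum = sum + n;
        n = n - 1;
    }
    if (sum > 10) {               // IF-THEN-ELSE
        std::cout << "sum is large: " << sum << "\n";   // prints: sum is large: 15
    } else {
        std::cout << "sum is small: " << sum << "\n";
    }
}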

Inheritance is the third major form of execution sequencing. In this case, information is passed automatically between program segments. This is the basis for the models used in the object-oriented languages C++ and Java.

Inheritance involves the concept of a class object. There are integer class objects, string class objects, file class objects, and so forth. Data objects are instances of these class objects. Objects inherit the properties of the objects from which they were created. Thus, if an integer object were designed with the methods (that is, functions) of addition and subtraction, each instance of an integer object would inherit those same functions. One would only need to develop these operations once and then the functionality would pass on to the derived object.

All objects are derived from one master object called an Object. An Object is the parent class of objects such as magnitude, collection, and stream. Magnitude now is the parent of objects that have values, such as numbers, characters, and dates. Collections can be ordered collections such as an array or an unordered collection such as a set. Streams are the parent objects of files. From this structure an entire class hierarchy can be developed.

If we develop a method for one object (for example, print method for object), then this method gets inherited to all objects derived from that object. Therefore, there is not the necessity to always define new functionality. If we create a new class of integer that, for example, represents the number of days in a year (from 1 to 366), then this new integer-like object will inherit all of the properties of integers, including the methods to add, subtract, and print values. It is this concept that has been built into C++, Java, and current object-oriented languages.
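
A hedged C++ sketch of this idea, with invented class names: a DayOfYear class derived from a simple integer-like base inherits the base's add and print methods, defining only what is new.

#include <iostream>

// A simple integer-like base class (illustrative, not a standard library type).
class Integer {
public:
    explicit Integer(int v) : value_(v) {}
    int value() const { return value_; }
    Integer add(const Integer& other) const { return Integer(value_ + other.value_); }
    void print() const { std::cout << value_ << "\n"; }
private:
    int value_;
};

// DayOfYear inherits add() and print(); it only adds a range check.
class DayOfYear : public Integer {
public:
    explicit DayOfYear(int d) : Integer(d) {
        if (d < 1 || d > 366) std::cout << "warning: out of range\n";
    }
};

int main() {
    DayOfYear d(45);
    d.print();                           // inherited from Integer: prints 45
    Integer later = d.add(Integer(7));
    later.print();                       // prints 52
}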

Once we build concepts around a class definition, we have a separate package of functions that are self-contained. We are able to sell that package as a new functionality that users may be willing to pay for rather than develop themselves. This leads to an economic model where companies can build add-ons for existing software, each add-on consisting of a set of class definitions that becomes inherited by the parent class. See also Object-oriented programming.

Current programming language models

C was developed by AT&T Bell Laboratories during the early 1970s. At the time, Ken Thompson was developing the UNIX operating system. Rather than using machine or assembly language as in (2) or (3) to write the system, he wanted a high-level language. See also Operating system.

C has a structure like FORTRAN. A C program consists of several procedures, each consisting of several statements, that include the IF, WHILE, and FOR statements. However, since the goal was to develop operating systems, a primary focus of C was to include operations that allow the programmer access to the underlying hardware of the computer. C includes a large number of operators to manipulate machine language data in the computer, and includes a strong dependence on reference variables so that C programs are able to manipulate the addressing hardware of the machine.

C++ was developed in the early 1980s as an extension to C by Bjarne Stroustrup at AT&T Bell Labs. Each C++ class would include a record declaration as well as a set of associated functions. In addition, an inheritance mechanism was included in order to provide for a class hierarchy for any program.

By the early 1990s, the World Wide Web was becoming a significant force in the computing community, and web browsers were becoming ubiquitous. However, for security reasons, the browser was designed with the limitation that it could not affect the disk storage of the machine it was running on. All computations that a web page performed were carried out on the web server accessed by web address (its Uniform Resource Locator, or URL). That was to prevent web pages from installing viruses on user machines or inadvertently (or intentionally) destroying the disk storage of the user.

Java bears a strong similarity to C++, but has eliminated many of the problems of C++. The three major features addressed by Java are:

There are no reference variables, thus no way to explicitly reference specific memory locations. Storage is still allocated by creating new class objects, but this is implicit in the language, not explicit.

There is no procedure call statement; however, one can invoke a procedure using the member of class operation. A call to CreateAddress for class address would be encoded as address.CreateAddress( ).

A large class library exists for creating web-based objects.

The Java bytecodes (called applets) are transmitted from the web server to the client machine and then executed. This saves transmission time, as the executing applet is on the user’s machine once it is downloaded, and it frees machine time on the server so it can process more web “hits” effectively. See also Client-server system.

Visual Basic, first released in 1991, grew out of Microsoft’s GW Basic product of the 1980s. The language was organized around a series of events. Each time an event happened (for example, mouse click, pulling down a menu), the program would respond with a procedure associated with that event. Execution happens in an asynchronous manner.

Although Prolog development began in 1970, its use did not spread until the 1980s. Prolog represents a very different model of program execution, depending on the resolution principle and the satisfaction of Horn clauses due to Robert A. Kowalski at the University of Edinburgh. That is, a Prolog statement is of the form p :- q, r, which means p is true if both q and r are true.

A Prolog program consists of a series of Horn clauses, each being a sequence of relations concerning data in a database. Execution proceeds sequentially through these clauses. Each relation can invoke another Horn clause to be satisfied. Evaluation of a relation is similar to returning a procedure value in imperative languages such as C or C++.

Unlike the other languages mentioned, Prolog is not a complete language. That means there are algorithms that cannot be programmed in Prolog. However, for problems that are amenable to searching large databases, Prolog is an efficient mechanism for describing those algorithms. See also Software engineering.

In computer technology, a set of conventions in which instructions for the machine are written. There are many languages that allow humans to communicate with computers; FORTRAN, BASIC, and Pascal are some common ones.

A language used to write instructions for the computer. It lets the programmer express data processing in a symbolic manner without regard to machine-specific details.

From Source Code to Machine Language

The statements that are written by the programmer are called “source language,” and they are translated into the computer’s “machine language” by programs called “assemblers,” “compilers” and “interpreters.” For example, when a programmer writes MULTIPLY HOURS TIMES RATE, the verb MULTIPLY must be turned into a code that means multiply, and the nouns HOURS and RATE must be turned into memory locations where those items of data are actually located.
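
For comparison, a compiler today performs the same mapping for an equivalent high-level statement (a fragment; values and names are illustrative):

double hours = 38.5, rate = 12.0;   // the compiler maps each name to a memory location
double pay = hours * rate;          // the verb MULTIPLY becomes machine multiply instructions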

Grammar and Syntax

Like human languages, each programming language has its own grammar and syntax. There are many dialects of the same language, and each dialect requires its own translation system. Standards have been set by ANSI for many programming languages, and ANSI-standard languages are dialect free. However, it can take years for new features to be included in ANSI standards, and new dialects inevitably spring up as a result.

Low Level and High Level

Programming languages fall into two categories: low-level assembly languages and high-level languages. Assembly languages are available for each CPU family, and each assembly instruction is translated into one machine instruction by the assembler program. With high-level languages, a programming statement may be translated into one or several machine instructions by the compiler.

Following is a brief summary of the major high-level languages. Look up each one for more details. For a list of high-level programming languages designed for client/server development, see client/server development system.

APL

Used for statistics and mathematical matrices. Requires special keyboard symbols. See APL.

BASIC

Developed as a timesharing language in the 1960s. It has been widely used in microcomputer programming in the past, and various dialects of BASIC have been incorporated into many different applications. Microsoft’s Visual Basic is widely used. See BASIC and Visual Basic.

C

Developed in the early 1970s at AT&T. Widely used to develop commercial applications. Unix is written in C. See C.

C++

Object-oriented version of C that is popular because it combines object-oriented capability with traditional C programming syntax. See C++.

C#

Pronounced “C-sharp.” A Microsoft .NET language based on C++ with elements from Visual Basic and Java. See .NET.

COBOL

Developed in the 1960s. Widely used for mini and mainframe programming. See COBOL.

dBASE

Used to be widely used in business applications, but FoxPro (Microsoft’s dBASE) has survived the longest. See Visual FoxPro, FoxBase, Clipper and Quicksilver.

F#

Pronounced “F-sharp.” A Microsoft .NET scripting language based on ML. See F#.

FORTH

Developed in the 1960s, FORTH has been used in process control and game applications. See FORTH.

FORTRAN

Developed in 1954 by IBM, it was the first major scientific programming language and continues to be widely used. Some commercial applications have been developed in FORTRAN. See FORTRAN.

Java

The programming language developed by Sun and repositioned for Web use. It is widely used on the server side, although client applications are increasingly used. See Java.

JavaScript

The de facto scripting language on the Web. JavaScript is embedded into millions of HTML pages. See JavaScript.

Pascal

Originally an academic language developed in the 1970s. Borland commercialized it with its Turbo Pascal. See Pascal.

Perl

A scripting language widely used on the Web to write CGI scripts. See Perl.

Prolog

Developed in France in 1973. Used throughout Europe and Japan for AI applications. See Prolog.

Python

A scripting language used for system utilities and Internet scripts. Developed in Amsterdam by Guido van Rossum. See Python.

REXX

Runs on IBM mainframes and OS/2. Used as a general-purpose macro language. See REXX.

VBScript

Subset of Visual Basic used on the Web, similar to JavaScript. See VBScript.

Visual Basic

Version of BASIC for Windows programming from Microsoft that has been widely used. See Visual Basic.

Web Languages

Languages such as JavaScript, Jscript, Perl and CGI are used to automate Web pages as well as link them to other applications running in servers.

Millions of Languages!

Programmers must use standard names for the instruction verbs (add, compare, etc.) in the language they use. In addition, a company generally uses standardized names for the data elements in its databases. However, programmers typically “make up” names for all the functions (subroutines) in the program. Since programmers are loath to document their code, the readability of the names chosen for these routines is critical.

In a single program, the programmer could make up hundreds of function names as well as names for data structures that hold fixed sums, predefined tables and display messages.

Just Make It Up!

Unless rigid naming conventions are enforced or pair programming is used, whereby one person looks over the shoulder of the other, programmers can make up names that make no sense whatsoever. Little understood by non-programmers, this is the bane of many professionals when they have to modify someone else’s program. Debugging another person’s code is very difficult if the names are cryptic and there are few comments, which is often the case. It often requires tracing the logic one statement at a time.

programming language, syntax, grammar, and symbols or words used to give instructions to a computer.

Development of Low-Level Languages

All computers operate by following machine language programs, a long sequence of instructions called machine code that is addressed to the hardware of the computer and is written in binary notation (see numeration), which uses only the digits 1 and 0. First-generation languages, called machine languages, required the writing of long strings of binary numbers to represent such operations as “add,” “subtract,” and “compare.” Later improvements allowed octal, decimal, or hexadecimal representation of the binary strings.

Because writing programs in machine language is impractical (it is tedious and error prone), symbolic, or assembly, languages (second-generation languages) were introduced in the early 1950s. They use simple mnemonics such as A for “add” or M for “multiply,” which are translated into machine language by a computer program called an assembler. An extension of such a language is the macro instruction, a mnemonic (such as “READ”) for which the assembler substitutes a series of simpler mnemonics. The resulting machine language programs, however, are specific to one type of computer and will usually not run on a computer with a different type of central processing unit (CPU).

Evolution of High-Level Languages

The lack of portability between different computers led to the development of high-level languages, so called because they permitted a programmer to ignore many low-level details of the computer’s hardware. Further, it was recognized that the closer the syntax, rules, and mnemonics of the programming language could be to “natural language,” the less likely it became that the programmer would inadvertently introduce errors (called “bugs”) into the program. Hence, in the mid-1950s a third generation of languages came into use. These algorithmic, or procedural, languages are designed for solving a particular type of problem. Unlike machine or symbolic languages, they vary little between computers. They must be translated into machine code by a program called a compiler or interpreter.

Early computers were used almost exclusively by scientists, and the first high-level language, Fortran [Formula translation], was developed (1953-57) for scientific and engineering applications by John Backus at the IBM Corp. A program that handled recursive algorithms better, LISP [LISt Processing], was developed by John McCarthy at the Massachusetts Institute of Technology in the early 1950s; implemented in 1959, it has become the standard language for the artificial intelligence community. COBOL [COmmon Business Oriented Language], the first language intended for commercial applications, is still widely used; it was developed by a committee of computer manufacturers and users under the leadership of Grace Hopper, a U.S. Navy programmer, in 1959. ALGOL [ALGOrithmic Language], developed in Europe about 1958, is used primarily in mathematics and science, as is APL [A Programming Language], published in the United States in 1962 by Kenneth Iverson. PL/1 [Programming Language 1], developed in the late 1960s by the IBM Corp., and ADA [for Ada Augusta, countess of Lovelace, biographer of Charles Babbage], developed in 1981 by the U.S. Dept. of Defense, are designed for both business and scientific use.

BASIC [Beginner’s All-purpose Symbolic Instruction Code] was developed by two Dartmouth College professors, John Kemeny and Thomas Kurtz, as a teaching tool for undergraduates (1966); it subsequently became the primary language of the personal computer revolution. In 1971, Swiss professor Niklaus Wirth developed a more structured language for teaching that he named Pascal (for French mathematician Blaise Pascal, who built the first successful mechanical calculator). Modula-2, a Pascal-like language for commercial and mathematical applications, was introduced by Wirth in 1982. Ten years before that, to implement the UNIX operating system, Dennis Ritchie of Bell Laboratories produced a language that he called C; along with its extensions, called C++, developed by Bjarne Stroustrup of Bell Laboratories, it has perhaps become the most widely used general-purpose language among professional programmers because of its ability to deal with the rigors of object-oriented programming. Java is an object-oriented language similar to C++ but simplified to eliminate features that are prone to programming errors. Java was developed specifically as a network-oriented language, for writing programs that can be safely downloaded through the Internet and immediately run without fear of computer viruses. Using small Java programs called applets, World Wide Web pages can be developed that include a full range of multimedia functions.

Fourth-generation languages are nonprocedural: they specify what is to be accomplished without describing how. The first one, FORTH, developed in 1970 by the American astronomer Charles Moore, is used in scientific and industrial control applications. Most fourth-generation languages are written for specific purposes. Fifth-generation languages, which are still in their infancy, are an outgrowth of artificial intelligence research. PROLOG [PROgramming LOGic], developed by the French computer scientist Alain Colmerauer and the logician Philippe Roussel in the early 1970s, is useful for programming logical processes and making deductions automatically.

Many other languages have been designed to meet specialized needs. GPSS [General Purpose System Simulator] is used for modeling physical and environmental events, and SNOBOL [String-Oriented Symbolic Language] is designed for pattern matching and list processing. LOGO, a version of LISP, was developed in the 1960s to help children learn about computers. PILOT [Programmed Instruction Learning, Or Testing] is used in writing instructional software, and Occam is a nonsequential language that optimizes the execution of a program’s instructions in parallel-processing systems.

There are also procedural languages that operate solely within a larger program to customize it to a user’s particular needs. These include the programming languages of several database and statistical programs, the scripting languages of communications programs, and the macro languages of word-processing programs.

Compilers and Interpreters

Once the program is written and has had any errors repaired (a process called debugging), it may be executed in one of two ways, depending on the language. With some languages, such as C or Pascal, the program is turned into a separate machine-language program by a compiler, which functions much as an assembler does. Other languages, such as LISP, are traditionally run by an interpreter, which reads and translates the program a line at a time into machine code as it executes. A few languages, such as BASIC, have both compilers and interpreters. Source code, the form in which a program is written in a high-level language, can easily be transferred from one type of computer to another, and a compiler or interpreter specific to the machine configuration can convert the source code to object, or machine, code.

The PHP development team would like to announce the immediate availability of PHP 5.3.3. This release focuses on improving the stability and security of the PHP 5.3.x branch with over 100 bug fixes, some of which are security related. All users are encouraged to upgrade to this release.

Backwards incompatible change:

Methods with the same name as the last element of a namespaced class name will no longer be treated as a constructor. This change does not affect non-namespaced classes.

The PHP development team would like to announce the immediate availability of PHP 5.2.14. This release focuses on improving the stability of the PHP 5.2.x branch with over 60 bug fixes, some of which are security related.

This release marks the end of active support for PHP 5.2. Following this release, the PHP 5.2 series will receive no further active bug maintenance. Security fixes for PHP 5.2 might be published on a case-by-case basis. All users of PHP 5.2 are encouraged to upgrade to PHP 5.3.

Security Enhancements and Fixes in PHP 5.2.14:

Rewrote var_export() to use smart_str rather than output buffering, which prevents data disclosure if a fatal error occurs.

PHP is proud to announce TestFest 2010. TestFest is PHP’s annual campaign to increase the overall code coverage of PHP through PHPT tests. During TestFest, PHP User Groups and individuals around the world organize local events where new tests are written and new contributors are introduced to PHP’s testing suite.

Last year was very successful with 887 tests submitted and a code coverage increase of 2.5%. This year we hope to do better.

TestFest’s own SVN repository and reporting tools are back online for this year’s event. New to TestFest this year are automated test environment build tools as well as screencasts showing those build tools in action.

Please visit the TestFest 2010 wiki page for all the details on events being organized in your area, or find out how you can organize your own event.

The PHP development team is proud to announce the immediate release of PHP 5.3.2. This is a maintenance release in the 5.3 series, which includes a large number of bug fixes.

Security Enhancements and Fixes in PHP 5.3.2:

Improved LCG entropy. (Rasmus, Samy Kamkar)

Fixed safe_mode validation inside tempnam() when the directory path does not end with a /. (Martin Jansen)

The PHP development team would like to announce the immediate availability of PHP 5.2.13. This release focuses on improving the stability of the PHP 5.2.x branch with over 40 bug fixes, some of which are security related. All users of PHP 5.2 are encouraged to upgrade to this release.

Security Enhancements and Fixes in PHP 5.2.13:

Fixed safe_mode validation inside tempnam() when the directory path does not end with a /. (Martin Jansen)

The term "algorithm" is probably not foreign to us, but how many of us know what the word really means? Judging from its etymology, the word has a rather curious history. People once used the word "algorism" to mean the process of calculating with Arabic numerals, and a person was called an "algorist" if he calculated using Arabic numerals. Linguists tried to discover the origin of the word, with unsatisfying results. Finally, historians of mathematics traced it to the name of the author of a famous Arabic textbook, Abu Abdullah Muhammad Ibn Musa Al-Khuwarizmi, which Western readers rendered as "Algorism".

Algorithm Definition

An algorithm can be defined as "a set of logical problem-solving steps arranged systematically". A simple example is a food recipe, which lists the steps for cooking a dish. In computer science/informatics, however, an algorithm is generally drawn up as a flowchart.

The inventor of the concepts of the algorithm and of algebra

The inventor was a mathematician from what is now Uzbekistan, Abu Abdullah Muhammad Ibn Musa al-Khwarizmi. In Western literature he is better known as "Algorism", and that name was later used to refer to the concept of the algorithm he introduced. Al-Khwarizmi (770-840) was born in Khwarizm (Kheva), a city south of the river Oxus (in present-day Uzbekistan), in 770 AD. His parents moved to a place south of Baghdad (Iraq) when he was a child. Al-Khwarizmi is known as the man who introduced the concept of the algorithm into mathematics; the term is taken from his last name.

Al-Khwarizmi was also the founder of several branches of mathematics and was known as an astronomer and geographer. He was one of the greatest mathematical scientists who ever lived, and his writings were very influential in their time. The theory of algebra is likewise among his inventions and ideas; the name "algebra" is taken from his famous book "Al Jabr wa al Muqabilah". He developed tables giving details of the trigonometric functions sine, cosine, and cotangent, and worked on the concept of differentiation.

His influence on the development of mathematics, astronomy, and geography is beyond doubt in the historical record. His approach was systematic and logical, and he combined Greek and Hindu knowledge with contributions of his own. Al-Khwarizmi adopted the use of zero in arithmetic and the decimal system. Several of his works were translated into Latin in the early 12th century by two leading translators, Adelard of Bath and Gerard of Cremona. His arithmetic treatises, such as Kitab al-Jam'a wal-Tafreeq bil Hisab al-Hindi, and his algebra, Al-Maqala fi Hisab al-Jabr wa-al-Muqabilah, are known only from Latin translations. These books remained in use until the 16th century as basic handbooks at European universities.

His geography book, Kitab Surat-al-Ard, containing maps of the world, has also been translated into English. His thought in the field of geography was outstanding as well: he did not merely revise Ptolemy's views on geography but corrected parts of them. Seventy-one geographers are said to have worked under al-Khwarizmi's leadership when the first map of the world was produced in 830. He is also reported to have collaborated with Caliph al-Ma'mun on a project to determine the volume and circumference of the earth.

Source: IlmuKomputer.com

The definition of algorithm

Now that we know who invented the algorithm and the origin of the word itself, it is time to define the algorithm. Happy reading!

"An algorithm is a set of logical problem-solving steps arranged systematically." "Logical" is the key word here: the steps in an algorithm must be logical, and each must be determinable as true or false.

Introduction

Everyone agrees that the computer is a tool for solving problems. For a computer to solve a problem, the solution must first be formulated as a series of instructions. A set of instructions for solving a problem is called a "program".

For the computer to run the program, the program must be written in a language the computer can understand. Because the computer is a machine, the program must be written in a language specially designed for communicating with the computer. A computer language used for writing programs is called a programming language. One example of a programming language is the language C.
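As the simplest possible illustration, here is a complete C program; it does nothing but print one line, yet it is already a set of instructions the computer can carry out:

#include <stdio.h>

int main(void) {
    printf("Hello, world!\n");   /* one instruction: write a line of text */
    return 0;
}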

When solving a problem with the help of a computer, the first step is to create a design. The design captures the programmer's way of thinking about the problem; it contains the sequence of steps toward the solution, written in a descriptive notation. A systematic sequence of steps for solving a problem is called an ALGORITHM.

According to the Big Indonesian Dictionary (Kamus Besar Bahasa Indonesia), an algorithm is a logical sequence of decisions for solving a problem.

Examples of algorithms in daily life:

Algorithm | Process | Step in the algorithm

Making a cake | Cake recipe | Take 3 eggs, separate out the yolks, and beat them

Making clothes | Clothes pattern | Cut the fabric according to the pattern

Topping up a phone voucher | Phone manual | Dial 555

________________________________________

Algorithm notation

An algorithm notation is not a programming language, so anyone may devise their own notation. What matters is that the notation is easy to read and understand. Although there is no standard algorithm notation, consistency in whatever notation is chosen should be maintained to avoid mistakes.

1. Notation I: using descriptive phrases

With descriptive-phrase notation, each step is described in plain language. For example, a process begins with a verb such as 'read', 'compute', or 'replace', while a conditional statement is expressed with 'if ...' and 'then ...'.

This notation works well for short algorithms, but for large problems it is impractical. Moreover, converting an algorithm written in this notation into a programming language tends to be relatively difficult.

2. Notation II: using a flowchart (flow-chart).

Flowcharts were popular in the early era of computer programming. A flowchart depicts the flow of instructions in a program visually rather than showing the program's structure. This notation, too, suits only small problems; for large problems it is unsuitable, because it would take pages and pages of paper to depict the program's flow.

3. Notation III: using pseudo-code

Pseudo-code ('pseudo' meaning false, or not the actual thing) is a notation that resembles the notation of a high-level programming language such as C. Programming languages generally share nearly identical notation for many instructions, such as if-then-else, while-do, repeat-until, read, and write. Based on this observation, an algorithm notation that states its instructions in clear language, without confusing the reader, is called pseudo-code. Unlike a programming language, which is fussy about semicolons, reserved words, indexes, formats, and the like, pseudo-code is easier and more convenient to work with. Its advantage is the ease of converting it into a programming language, because there is a correspondence between each line of pseudo-code and the programming language's notation; this correspondence can be captured in a table that translates algorithm notation into programming-language notation.
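As a small sketch of that correspondence (the variable name x and the messages are invented for illustration), here is a pseudo-code fragment followed by a direct C translation:

read(x)
if x > 0 then
    write('positive')
else
    write('not positive')
endif

#include <stdio.h>

int main(void) {
    int x;
    scanf("%d", &x);              /* read(x) */
    if (x > 0)                    /* if x > 0 then */
        printf("positive\n");     /* write('positive') */
    else
        printf("not positive\n"); /* write('not positive') */
    return 0;
}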

________________________________________

Rules for Writing Algorithms

The text of an algorithm contains a description of the problem-solving steps. That description can be written in any notation, provided it is easy to read and understand; there is no standard notation for writing algorithms, and everyone may devise their own rules. However, so that the notation can easily be translated into programming-language notation, it should correspond to programming-language notation in general.

Example command: write the values of x and y

In algorithm notation this becomes: write(x, y)

In Turbo C this is written: printf("%d %d", x, y);
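For concreteness, that statement can be wrapped in a minimal, complete C program (the values of x and y are arbitrary placeholders):

#include <stdio.h>

int main(void) {
    int x = 1, y = 2;            /* arbitrary example values */
    printf("%d %d\n", x, y);     /* the translation of write(x, y) */
    return 0;
}

________________________________________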

Type, Name, and Value

Type

In general, computer programs work by manipulating objects (data) in memory. The objects to be programmed come in a variety of types, for instance numeric types, character strings, and records.

Data types can be grouped into two kinds: basic types and constructed types. A basic type can be used directly, while a constructed type is built from basic types or from other constructed types that have already been defined.

A type is referred to by its name. The values covered by a type make up its domain of values, and the operations (and operators) that can be applied to the type are also defined. In other words, a type is characterized by its name, the domain of values it contains, the way its constants are written, and the operations that can be performed on it.

Basic Type

In a programming language, the basic types include:

1. logical values (boolean)

2. integers

3. characters and strings

4. real numbers
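A small C sketch of these basic types (C has no built-in string type, so a character array stands in for it, and <stdbool.h> supplies the logical type):

#include <stdbool.h>
#include <stdio.h>

int main(void) {
    bool flag = true;            /* logical value */
    int count = 42;              /* integer */
    char letter = 'A';           /* character */
    char word[] = "hello";       /* string: an array of characters */
    double ratio = 0.75;         /* real number */
    printf("%d %d %c %s %f\n", flag, count, letter, word, ratio);
    return 0;
}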

Constructed Types

A constructed type is a type defined by the programmer (user-defined). Constructed types are composed of one or more basic types. There are two kinds:

1. A basic type given a new type name. Example: type BilanganBulat : integer

A variable of type BilanganBulat is simply an integer. If we have a variable named X of type BilanganBulat, then the variable X is also an integer.

2. Records.

A record is composed of one or more fields. Each field stores data of a certain basic type, or of another constructed type that has been defined previously. The name of the record is chosen by the programmer. Because its structure is built up from fields, a record is also called a structured type.
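Both kinds of constructed type have direct counterparts in C: typedef renames a type, and struct builds a record. A minimal sketch, reusing the BilanganBulat example and an invented Person record:

#include <stdio.h>

typedef int BilanganBulat;       /* a basic type given a new name */

typedef struct {                 /* a record with two fields */
    char name[32];
    BilanganBulat age;
} Person;

int main(void) {
    BilanganBulat x = 7;         /* x is also an integer */
    Person p = { "Alice", 30 };
    printf("%d %s %d\n", x, p.name, p.age);
    return 0;
}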

Name

Every object is given a name so that it can easily be identified, referenced, and distinguished from other objects. In an algorithm, a name is used as the identifier of "something", and the programmer refers to that "something" through its name. Therefore every name must be unique: no two objects may share the same name.

In an algorithm, the "something" that is named can be:

1. A variable

2. A constant

3. A constructed type

4. A function name

5. A procedure name

An important point to note is that names should be interpretive, that is, they should reflect the value or function they hold. Programmers are strongly advised to provide an explanation for every name they define.

All names used in an algorithm must be defined or declared in the declaration section, which is the place to explain each name and its type, and which serves as the reference for what each name means.
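In C, the declaration section corresponds to the declarations at the top of a file or block. A small sketch with one name of each kind (all names invented for illustration):

#include <stdio.h>

#define MAX_ITEMS 10                     /* a constant */
typedef double Temperature;              /* a constructed type */
double average(double a, double b);      /* a function name */

int main(void) {
    Temperature reading = 21.5;          /* a variable */
    printf("%f\n", average(reading, MAX_ITEMS));
    return 0;
}

double average(double a, double b) {
    return (a + b) / 2.0;
}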

Value

A value is a piece of data of a defined type. A value can be the content stored by a variable or a constant, the result of a computation, or the value returned by a function. An algorithm essentially manipulates values stored in memory cells. The value held by a variable can be manipulated by, among other things, assigning it to another variable of the same type, using it in a computation, or writing it to an output device.

Example algorithm: print the string "Hello, how are you?" to the output device.

Version 1: the string "Hello, how are you?" is printed directly, without using a variable.

Algorithm:

Declaration

(None)

Description

write (“Hello, how are you?”)

Version 2: the string "Hello, how are you?" is stored in a variable of type string.

Algorithm:

Declaration

greeting : string

Description

greeting <—— 'Hello, how are you?'

write (greeting)

Version 3: the string "Hello, how are you?" is stored as a constant.

Algorithm:

Declaration

const greeting = 'Hello, how are you?'

Description

write (greeting)

The output produced by algorithm versions 1, 2, and 3 is the same:

Hello, how are you?
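Rendered in C, the three versions look like this (C has no built-in string type, so a character pointer and a preprocessor constant stand in for the string variable and the constant):

#include <stdio.h>

#define GREETING "Hello, how are you?"    /* version 3: a named constant */

int main(void) {
    /* Version 1: print the string directly */
    printf("Hello, how are you?\n");

    /* Version 2: store the string in a variable first */
    const char *greeting = "Hello, how are you?";
    printf("%s\n", greeting);

    /* Version 3: print the named constant */
    printf("%s\n", GREETING);
    return 0;
}

________________________________________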

SEQUENCE

An algorithm consists of a sequence of one or more instructions, which means that:

1. The instructions are executed one at a time.

2. Each instruction is executed exactly once; no instruction is repeated.

3. The order in which the processor executes the instructions is the same as the order in which they are written in the algorithm.
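Point 3 matters in practice: the order of instructions affects the result. A classic illustration, sketched in C, is swapping the values of two variables; performing the three assignments in any other order gives a wrong result:

#include <stdio.h>

int main(void) {
    int a = 1, b = 2, temp;
    temp = a;                   /* save a before it is overwritten */
    a = b;
    b = temp;
    printf("%d %d\n", a, b);    /* prints: 2 1 */
    return 0;
}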

One advantage of the computer over humans is its ability to execute an instruction repeatedly without tiring or getting bored. Repetition, or a loop, can be performed a set number of times or until a certain condition is reached.

Repetition Structure

A repetition structure generally consists of two parts:

1. The loop condition, a boolean expression that must be satisfied for the repetition to run; the condition is either stated explicitly by the programmer or managed implicitly by the computer.

2. The loop body, the part of the algorithm that is repeated.

A repetition structure is usually accompanied by two further parts:

1. Initialization, the actions performed before the loop runs for the first time.

2. Termination, the actions performed after the repetition finishes.

Initialization and termination are optional, but in many cases initialization is required.

The general form of a repetition structure:

<initialization>

start of loop

loop body

end of loop

<termination>

Algorithm notation offers several different repetition structures. Several structures can be used for the same problem, but some repetition notations suit only certain problems, so choosing the right structure depends on the problem being programmed. Common repetition structures include:

1. The for structure

The for structure produces a repetition a specified number of times, n; the number of repetitions is known, or can be determined, before the program executes. It has two general forms:

a. Ascending

for counter <—— initial_value to final_value do

action

endfor

b. Descending

for counter <—— initial_value downto final_value do

action

endfor
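Both forms map directly onto C's for statement; a minimal sketch:

#include <stdio.h>

int main(void) {
    int i;
    /* ascending: for i <- 1 to 5 do */
    for (i = 1; i <= 5; i++)
        printf("%d ", i);
    printf("\n");
    /* descending: for i <- 5 downto 1 do */
    for (i = 5; i >= 1; i--)
        printf("%d ", i);
    printf("\n");
    return 0;
}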

2. The while structure

The general form of the while structure is:

while condition do

action

endwhile

The action (a sequence of actions) is carried out repeatedly as long as the condition is true. When the condition becomes false, the loop ends. For an initially true condition ever to become false, the loop body must contain an action that changes the condition's value.
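The same idea in C; note that the body must eventually make the condition false, or the loop never ends:

#include <stdio.h>

int main(void) {
    int i = 1;
    while (i <= 5) {            /* condition is tested before each pass */
        printf("%d ", i);
        i++;                    /* changes the value of the condition */
    }
    printf("\n");
    return 0;
}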

3. The repeat structure

The general form of the repeat structure is:

repeat

action

until condition

The repetition is controlled by a boolean condition. The action in the loop body is repeated until the condition becomes true; as long as it is false, the loop continues, so the body must contain an action that changes the condition's value. The repeat structure has a similar meaning to while, and in some problems the two structures complement each other.
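C's closest counterpart is do-while, with one inversion to keep in mind: repeat-until repeats until its condition becomes true, while do-while repeats as long as its condition stays true:

#include <stdio.h>

int main(void) {
    int i = 1;
    do {                        /* the body always runs at least once */
        printf("%d ", i);
        i++;
    } while (!(i > 5));         /* until (i > 5)  ==  while !(i > 5) */
    printf("\n");
    return 0;
}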

Note that a repetition must stop: a repetition that never stops indicates a wrong algorithm.

________________________________________

FUNCTION

What is a function?

A function is a program module that gives back (returns) a value of a particular type. A function is accessed by calling its name, which must be unique; the name may be followed by a list of formal parameters. The parameters of a function are always input parameters: an input parameter is an input that the function uses to produce its value.

The structure of a function is similar to the structure of an algorithm:

- a header section, containing the function's name and its specification;

- the declaration section;

- the function body.

The algorithm notation for defining a function is:

function FunctionName(input list of formal parameters) --> result type

(Specification of the function, explaining what it does and what it returns.)

Declaration

(All names used by the function's algorithm are defined here. Names declared in this local declaration section are known and usable only inside this function.)

Description:

(The function body, containing the instructions that produce the value to be returned by the function.)

return result (returns the value produced by the function)
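The same structure carries over to C almost line for line; a small sketch with an invented function max:

#include <stdio.h>

/* function max(input a, b : integer) --> integer
   returns the larger of a and b */
int max(int a, int b) {
    int result;                 /* local name: known only inside max */
    if (a > b)
        result = a;
    else
        result = b;
    return result;              /* return the value the function produced */
}

int main(void) {
    printf("%d\n", max(3, 7));  /* prints: 7 */
    return 0;
}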

Array

What is an array?

An array is a data structure that stores a collection of elements of the same type. Each element can be accessed directly through its index.

Defining an array

An array is a static data structure: the number of its elements must be known before the program runs. Defining the number of array elements means reserving that many places in memory.
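A minimal C sketch; the element count is fixed when the array is defined, which is exactly the "static" property described above:

#include <stdio.h>

int main(void) {
    int scores[5] = { 70, 85, 90, 60, 75 };  /* five places reserved in memory */
    int i, sum = 0;
    for (i = 0; i < 5; i++)                  /* each element reached by its index */
        sum += scores[i];
    printf("sum = %d\n", sum);
    return 0;
}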


The algorithm is the heart of computer science, or informatics; many branches of computer science are described in terms of algorithms. Do not assume, however, that algorithms belong only to computer science. In everyday life, too, many processes are expressed as algorithms.

The way to make a cake or a dish, expressed as a recipe, can also be called an algorithm. Every recipe contains a sequence of steps for making the dish. If the steps are not logical, the desired dish cannot be produced. Someone trying a recipe reads the steps one by one and carries out the process accordingly.

In general, the party (or object) that carries out a process is called the processor. The processor may be a human, a computer, a robot, or some other electronic device. The processor carries out, or "executes", the process according to the algorithm that describes it. Executing an algorithm means performing the steps within it; the processor works through the process in accordance with the algorithm it is given. A cook makes a cake by following a recipe; a pianist plays a piece by following its sheet music.

Therefore, an algorithm must be expressed in a form that can be understood by the processor.