Abstract

A single programming paradigm is not sufficient to solve all computational problems optimally. Through language symbiosis and multi-paradigm programming, different programming languages and paradigms can be used interchangeably, giving the programmer a wide range of possibilities to implement a solution to a problem. However, current implementations of multi-paradigm programming languages suffer from weak design because they first try to combine the concrete syntaxes of the individual paradigms and only then implement the evaluator for the newly created language. We propose the converse approach. By starting the design of a multi-paradigm language from an abstract syntax, we can create a clean implementation of the evaluator for this new abstract language by combining existing evaluators. In a second step we design a concrete syntax for it. We validate this approach by implementing the symbiosis of a logic and an imperative programming language on the basis of their respective abstract syntaxes and evaluators.

Acknowledgements

This dissertation would never have been finished without the great support of a lot of people. Therefore I wish to express my gratitude towards:

Prof. Dr. Theo D'Hondt, for coming up with the subject of this dissertation, guiding me through its different stages and for promoting this dissertation.

Kris Gybels and Wolfgang De Meuter, for proofreading even during the last stages. Without their help this dissertation would not be nearly as readable as it is now.

Maja D'Hondt, for her comments and suggestions during the early stages of this dissertation.

The researchers of the Programming Technology Lab, for their constructive comments during the thesis presentations.

My fellow thesis students, for their help and support during the last four years.

Katrien Steurs, for her linguistic proofreading and for her support over the last year.

The Vrije Universiteit Brussel and the Departement Informatica, for providing an excellent education in a fun and inspiring environment.

Last but not least, my friends and family, for supporting me and giving me the opportunity to study in the best possible circumstances.

List of Figures

2.1 The base level encapsulated in the meta level
Example read-eval-print evaluation
The two stages of the scanner
M1 is implemented by L1 and at the same time implements L
A class-based object
A prototype-based object
A small family tree
Derivation tree of the family tree example
Comparing the full language and the kernel language in Oz

List of Tables

2.1 A comparison of the reflective capabilities of some important programming languages
A coroutine that alternately prints one and two on the screen
Example BNF: the US postal address
An abstract syntax, excerpt from the Pico abstract syntax
Difficulty to combine the four main programming paradigms
Legend for table
SISC code to draw a window on the screen
Kawa code to draw a window on the screen
JScheme code to draw a window on the screen
JScheme code to draw a window on the screen and adding a label using JLIB

Chapter 1

Introduction

Programming languages are the key to the development of new ideas and applications. Over the years, many programming languages have been proposed and implemented, driven by the need for new features. Every language has its advantages and disadvantages: some languages have a very simple syntax, while others can be used to generate extremely fast programs. Some are used for educational purposes, others for major business applications. But most of these languages share at least one thing: they can be classified on the basis of the general principles behind their design. The set of principles that the languages in one group share is called the programming paradigm.

Each programming paradigm is best suited to solve certain computational problems, but in some cases it would be better to use a combination of paradigms to implement a solution. To make this possible, multi-paradigm programming languages were developed. These languages allow programmers to use ideas from different paradigms in one programming language. Such a language is usually implemented as the symbiosis of two or more programming languages. By symbiosis we mean that the languages are able to use each other's functionality and data structures.

For such a language to be usable, it is important to have a simple syntax and an easy way to program using the different programming paradigms. Therefore special attention has to be paid to the design of the syntax of the language, as well as to its internal implementation. We notice however that many multi-paradigm programming languages are hard to program in because their syntax is too complex. This is caused by the design decisions made during the development of the language: usually one paradigm is added to a programming language that already supports another paradigm.
C++ [Str91] is an example of this: object-oriented programming constructs were added to a language that originally supported imperative programming. This resulted in a syntax that makes it easier to program in an imperative style than in an object-oriented style. Another drawback is that concentrating on the syntax distracts from the real problem: finding a well-designed combination of both paradigms. This causes difficulty in sharing values between the different languages and sometimes forces the programmer to convert values between languages manually.

Thesis

Because the design of current multi-paradigm programming languages is focused too much on the syntax of the language, less attention is given to the implementation and the internal representation of the languages. To remedy these weak design decisions, we propose the converse approach: we first focus on the internal representation of the languages we want to combine. To do this we distinguish two aspects of the symbiosis of paradigms:

Symbiosis on the syntactic level combines the syntax of the paradigms and attempts to design a usable syntax.

Symbiosis on the semantic level focuses on the internal representation of the programming elements of the paradigms.

Current multi-paradigm programming languages try to solve both problems at the same time, or focus on the first, the syntactic level of the language. In this dissertation however we will focus on the latter, as it is the most difficult one. The quality of the design of the syntactic level of a language is highly subjective and will be addressed later. We claim that by combining languages on the semantic level, a multi-paradigm programming language can be obtained that has a clean design of its internal data structures and allows simple sharing of values between paradigms.

To support this claim we will design a multi-paradigm programming language by combining the abstract syntax of two programming languages. We want the resulting language to:

Offer a simple way to share data, such as variables and values, between the different programming paradigms.

Have a clean and simple design and implementation.

We will satisfy these conditions by creating a programming language that:

Merges the abstract grammars of the paradigms and shares the data structures that are the same in both paradigms.

Uses the existing evaluators of the original languages and changes them as little as possible.
We will demonstrate this by combining a meta-circular evaluator for Pico [D'H03] with a logic evaluator named Loco. We will combine their internal data structures, merge their respective evaluators and change both as little as possible. Using the basic reflective support of Pico, we will illustrate how the combined language can be used to implement a limited example of Logic Meta Programming [Bru03].

Outline

In the rest of this dissertation we introduce the terms and concepts used in programming and programming language design, and explain our experimental implementation. The text is structured as follows. In chapter 2 we explain the technology that is needed in programming language design: we introduce meta-programming and show how it relates to programming language design, we show how the syntax of a language is related to its implementation, and we explain why typing of data is important. In chapter 3 we explain what a programming paradigm exactly is and give an overview of the different paradigms that exist. Chapter 4 takes a more in-depth look at the logic programming paradigm. In chapter 5 we introduce the notion of multi-paradigm programming and illustrate the theory using existing multi-paradigm languages. In chapter 6 we explain our experimental implementation of a symbiosis between two programming languages on the level of the abstract syntax. Finally, in chapter 7 we suggest some topics for future work and conclude this dissertation with some final remarks.

Chapter 2

Preliminaries

In this chapter we introduce some concepts and techniques concerning programming languages and programming language design. We look at technology used to reason about programs, at techniques used to implement a programming language and, finally, at how to define a syntax for it.

2.1 Meta-Programming and Reflection

Programming languages are used to implement programs that help us with our work or entertain us. To do so, a program has to reason about data and communicate with the user or with other programs. This data can be anything: financial data, the text of a book or some animations for a film. But we can also write a program that reasons about another program. This is called meta-programming.

Meta-Programming

Meta-programming is a programming technique that enables us to write programs about programs. We can for example control the execution of a program, or reason about the program itself to extract information from it. Meta-programming structures the programming system in different levels. The first level, or base level, is the basic program running the computation as a normal program. The second level, or meta level, is a program that reasons about the basic program. This meta-level program is often the interpreter of the language. It will intervene in the basic computation when certain conditions are met, or run simultaneously, inspecting the program that is running. Depending on the implementation of the reflection system, the meta-program can reason about function calls, variable assignments or even every single statement. The meta-level hierarchy is not limited to two levels: when necessary, and if the implementation supports it, one can write a program about a meta-program, which is called meta-meta-programming. In the ultimate case the number of levels is unbounded. Figure 2.1 shows how the different levels can be structured.

Meta-programming can be used to resolve cross-cutting concerns [KLM + 97].
A cross-cutting concern is a piece of functionality that is scattered throughout the program, but that is loosely coupled with the code around it and would better be completely separated from it. A logging system is the standard example of a cross-cutting concern. It is usually added to a program to help the debugging process. To add a logging system to a program, the programmer adds the logging code to certain parts of the program, but the code has no functional meaning at these locations. It would be better if we could separate the logging code from the rest of the program so that it can be removed easily when debugging is done. We can achieve this by hooking the logging system as a meta-program to certain function calls. This meta-program is separated from the base program and can easily be removed when logging is no longer necessary. This programming method is called separation of concerns [LH95]: the different, independent parts of a program are separated from each other. We can compare this to functions that encapsulate common behaviours that are loosely coupled with each other. Meta-programming can be used to encapsulate different concerns and to isolate them from each other and from the base-level program.

Figure 2.1: The base level encapsulated in the meta level.

Reification

Before programs can reason about other programs, the meta-level program must have a means to reference and represent the base-level program it wants to reason about. So the meta-language must have a data structure to represent function calls, objects or whatever programming element the base-level language supports, and functionality to capture these data and act upon them. Reification is the term used to describe the process of making something that was previously implicit explicitly available to the programming language for manipulation. It is the key to meta-programming languages, as it is a programming language construct that allows a program to be treated as data and referred to.
When a program wants to reason about any programming language element, it has to reify or absorb that element into a value that can be used in further computation by the program.

For example, if we want to refer to a function call, we want to be able to get the name of the function that is called and the parameters that are given. Initially this information is implicit: it is just a function call. By reifying the function call, we make it explicit by creating a data structure that contains the name of the function and its parameters. These reification data are causally connected to the actual reified information, so that when either is modified, the other is updated too. This is accomplished not by creating a data structure that contains the data we need, but instead by creating a data structure that refers to the function call, together with functionality that extracts the name of the function and its parameters from this data structure when necessary.

Reflection

When the same language is used for the base level as well as for the meta level, we can use the language to reason about itself. This is called reflection: the ability of a system to inspect and interact with its own computation by reifying parts of itself. A system that supports reflection is self-aware and can reason about its own process on a meta level. The system can inspect the state of its current computation and possibly intervene in the flow of that computation. Reflection is important for components of artificial intelligence [Kam95], research in programming languages and debugging. It was first introduced by Brian Smith [Smi84] for procedural languages. Pattie Maes [Mae87] improved the ideas of computational reflection and introduced reflection for object-oriented programming languages. Support for reflection in a programming language can be implemented in many different ways: some languages that support reflection have the ability to reason about and change themselves, while others can only inspect their own computation.
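Although the running language of this dissertation is Pico, the idea of a program inspecting itself can be illustrated with a short sketch in Python, a language with built-in introspection. The function greet and the properties inspected here are invented purely for illustration:

```python
import inspect

def greet(name, punctuation="!"):
    """Return a greeting for name."""
    return "Hello, " + name + punctuation

# Reify the function: its name and parameters become explicit data
# that the program itself can examine.
signature = inspect.signature(greet)
print(greet.__name__)              # the function's own name: greet
print(list(signature.parameters))  # its parameter names
print(greet.__doc__)               # even its documentation is data
```

The function is an ordinary base-level value, yet the same program can treat it as data and extract its name, parameters and documentation, which is precisely the reification described above.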
In the rest of this chapter we will describe the terminology used in reflection research, show some uses of reflection and give a short overview of the reflective capabilities of some important programming languages.

Terminology

As not all implementations of reflection have the same power, we have to be able to classify the different implementations. Therefore we distinguish two aspects of reflection:

Introspection: the ability to inspect the program while it is running and take actions depending on the state of the program.

Intercession: the ability to change the program at runtime, modifying its actual code, such as variables, functions and methods.

A system that supports only introspection is often wrongly called a reflective system. To distinguish these partial systems from real reflective implementations, systems supporting both introspection and intercession are called fully reflective. Full reflection is important: without it, only inspection of the program

can be done, and nothing can be changed. It is thus possible to detect certain states of a program, but the actual state cannot be altered. By adding intercession, however, we can for example add methods to an object, or add or change the values of variables, all at runtime. With intercession we are thus able to change the state of the program. This allows programmers to make their programs adaptable to a situation: a program can change its own state to fit the circumstances. Consider for example a window on the screen: in the same way that this window is updated with new fields or buttons by actions of the user, a program can be updated to fit the requirements of a particular situation. With an extensive implementation of full reflection it is even possible to completely change the program at runtime, so that it becomes a completely different program. This technique is used by some computer viruses, as we will see in the section where we give a short overview of the reflective properties of some important programming languages.

Apart from this classification of reflective implementations using introspection and intercession, we can also classify them on the basis of how they implement reflection: whether they change the original program or not.

Behavioural Reflection: the ability to alter the way actions, such as method calls, are performed.

Structural Reflection: the ability of a program to alter the definitions of its data structures, such as classes and methods.

Behavioural reflection does not change the original program: it only changes the behaviour of the program without changing its implementation. This is accomplished by using hooks at different locations in the program. These hooks define locations in the program where the reflective system can act, and are used by the compiler or the interpreter to implement the reflective capabilities.
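A minimal sketch of such a hook can be given in Python. The wrapper with_hook below is a hypothetical illustration, not part of any particular meta-object protocol: it intercepts calls to a function and runs meta-level code before and after, without touching the function's own implementation:

```python
def with_hook(fn, before, after):
    """Return a version of fn whose calls are intercepted by hooks."""
    def wrapped(*args, **kwargs):
        before(fn, args)              # meta-level code runs first
        result = fn(*args, **kwargs)  # then the original base-level call
        after(fn, result)             # and meta-level code runs afterwards
        return result
    return wrapped

def add(x, y):
    return x + y

log = []
add = with_hook(add,
                before=lambda f, a: log.append(("call", f.__name__, a)),
                after=lambda f, r: log.append(("return", r)))

add(1, 2)
# log now records the intercepted call, while add's body is unchanged:
# behavioural reflection without structural change.
```

The base-level function still computes its sum as before; only its observable behaviour has been extended, which is exactly the distinction drawn above between behavioural and structural reflection.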
Depending on the implementation, the hooks can be placed on function calls, variable accesses or even on every individual statement. When the particular statement is executed, the reflective system intercepts the call and runs its own code instead of the original code. Structural reflection is more intrusive: it changes the original program by adding or deleting methods, variables, or even entire objects in the original program code. This way it is possible not only to change the behaviour of the program, but to really alter a part of the program itself, or even its entire implementation.

Reflection in Object-Oriented Languages

Reflection is especially interesting in object-oriented programming: it is an important extension of standard object-oriented programming and makes separation of concerns [LH95], reusability and flexibility of the source code much easier. Reflection in object-oriented languages is usually achieved by associating a meta-object with every existing base object. By sending messages or calling methods on this meta-object, we can make changes to the base object. For example:

Functionality can be added before and after a method call, such as a logging system.

A method call can be blocked unless certain constraints are met.

The base objects can be extended with new methods or variables.

Some program development environments, such as VisualWorks [How95] or Squeak [IKM + 97] for Smalltalk, depend highly on reflection. Most of the development environment itself is written in Smalltalk, so reflection can be used to list all methods of an object, or to change and add methods or objects. The changes are made on the spot and are active as soon as they are added. Because the development environment is written in Smalltalk and is completely accessible to the programmer, it can be changed or debugged while the programmer is working with it. There is no compilation stage and the development environment does not need to restart when changes to it are made. This allows very rapid program development.

The implementation of reflection in an object-oriented programming language is called the Meta-Object Protocol, in short MOP [Meua, KdRB91]. It is the protocol that defines what programming constructs are available to access the meta-objects and interact with them. Depending on the implementation, this Meta-Object Protocol can be part of the main programming language or can be a new language on top of it. The notion of reflection can also be lifted to an extra meta-meta level, where reasoning about the entire collection of objects is possible. For example, we can count the number of objects of a certain type or even detect design patterns [KP96] in source code.

Aspect-Oriented Programming

Many programming problems cannot be solved using standard object-oriented or procedural programming techniques.
The adaptability of standard object-oriented design is not sufficient: for example, it is not possible to add a logging system to a program without adding it at different places in the program, so the implementation of certain functionality can become scattered throughout the entire program. These bits of functionality are called aspects, and the reason that they are hard to design properly is that they cross-cut the basic functionality of the program. Aspect-Oriented Programming (AOP) [KLM + 97], which is usually implemented as an extension of object-oriented programming, introduces a solution to these cross-cutting concerns. By isolating the aspect code from the main source code and merging them later, at compile time, it becomes possible to cleanly implement the base program as well as the aspect without them interfering with each other. Our example of logging is an obvious example of an aspect. Its functionality is often added to objects for debugging reasons and usually has nothing to do with the main functionality of the object. When the debugging is finished and the logging code is no longer necessary, it is often hard to find all its occurrences in the entire source code in order to remove them. By separating the logging functionality in an aspect,

it is possible to add logging to, and remove it from, the program without changing the main objects or source code. It is like plugging a module in and out without telling the other modules about it.

The drawback of using aspect-oriented technology is that it is more difficult to get an overview of the entire programming project. And although it can help the programmer to debug the program, when used for other purposes it makes debugging even harder. In most implementations the merge between the main source code and the aspects is done at compile time. While that is an advantage for solving cross-cutting problems, it is less obvious for a human programmer, who tries to understand the program by getting an overview of it, to perform this merge mentally. Although aspect-oriented programming is often seen as a distinct technique, it is easy to show that AOP can be implemented using reflective programming techniques [BL02]. The only drawback of this approach is a sacrifice in performance, but we get extreme flexibility and reusability instead.

Reflection Implementations

Programming languages differ a lot in their reflective capabilities. Some programming languages claiming to support reflection only support a small subset of it. Table 2.1 gives an overview of the reflective capabilities of some important programming languages. Notice that languages that support structural reflection also support behavioural reflection: when we can alter the program itself, it is of course possible to alter its behaviour. In this section we give a short overview of current programming languages and their reflective capabilities.

Programming language   Introspection   Intercession   Behavioural   Structural
C++
Java                   X
Javassist              X               X              X             X
AspectJ                X                              X
Smalltalk              X               X              X             X

Table 2.1: A comparison of the reflective capabilities of some important programming languages.
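As a point of comparison with the languages surveyed below, a language such as Python supports both introspection and intercession. The following sketch, with a class invented for illustration, first inspects a class and then structurally changes it at runtime:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

# Introspection: the program inspects its own class definition.
print(hasattr(Point, "norm"))  # False: no such method exists yet

# Intercession: a new method is added to the class at runtime.
def norm(self):
    return (self.x ** 2 + self.y ** 2) ** 0.5

setattr(Point, "norm", norm)

p = Point(3, 4)
print(p.norm())  # 5.0: every Point, old or new, has gained the method
```

Because the change is made to the class definition itself rather than to the behaviour of individual calls, this is an instance of structural reflection in the classification above.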
C++

Standard C++ has no support for reflection: neither introspection nor intercession is available. Several extensions of C++ have been proposed [MIKC92, Vol00, MS01] to add support for reflection.

Java

Support for reflection is very limited in Java: only introspection is supported, so it is impossible to make changes to the state of the objects. The reflection capabilities are

available through the java.lang.reflect [Mic03a] library. Because of this very limited support for reflection, extensions of the reflection API were developed. Most of these extensions add behavioural reflection to specific kinds of operations, such as method calls, field accesses and object creation. They are implemented in an aspect-oriented programming style, using hooks on these operations. In the next two sections we describe two implementations of extended reflection for Java.

AspectJ

AspectJ [KHH + 01] uses the most widespread approach to implement reflection. It is the reference implementation of the aspect-oriented programming extension to Java and implements behavioural reflection: it is possible to intercept method calls, and thus change the behaviour of the program, by executing the aspect code when the method is called. AspectJ uses join points, well-defined points in the execution of the program that can be hooked to method calls or variable accesses to implement a reflective system. Using these join points, the programmer can intercept the operations and add additional code to implement a cross-cutting concern. The AspectJ compiler compiles the source code to standard Java bytecode that can be run on any Java Virtual Machine.

Javassist

As opposed to AspectJ, Javassist [Chi98] allows structural reflection to be performed when a class is loaded into the Java Virtual Machine. It is not possible in Java to implement full reflection at any time during the lifetime of a running program without altering the Java Virtual Machine or incurring a big performance penalty. Javassist extends the standard Java reflection API with structural reflection, thus allowing reflection on all parts of the program. The only restriction is that it allows modification only before a program is loaded into the runtime system, thus at load time.
Javassist modifies the program by making direct changes to the Java bytecode of the classes before they are loaded; once loaded, no more changes to the program are possible. This Java bytecode is an intermediate code that can be run on almost any platform using a Java Virtual Machine. The modification of the bytecode is implemented by a special class loader that is used instead of the system class loader. No precompiler is used: programs written with Javassist can be compiled using the standard Java compiler and can run on a standard Java Virtual Machine.

Smalltalk

Smalltalk has extensive support for reflection. Everything from a number to a metaclass is an object, or can be reified as an object and used in further computation. All communication between objects is done using message passing, so it is not possible to directly affect the state of an object, only indirectly, by sending messages. Because Smalltalk is almost entirely written in itself, it allows easy access to the source code, portability to other computer systems, debugging and meta-programming. Classes can be modified at runtime, and the program has access to its own code and can change

itself completely. To support all these features, Smalltalk has full, structural reflection. Because of this extensive support for reflection, Smalltalk is an interesting language for programming language research. The SOUL programming language [WD01], developed at the Programming Technology Lab, is a logic meta-programming language written in Smalltalk and used to reason about Smalltalk programs. Using logic programming rules, we can derive information from the Smalltalk code or even add code to the code base.

Viruses

Virus programming and research is often not considered when discussing reflective programming implementations. But current viruses use advanced structural reflection to hide their real intent and disguise themselves as ordinary programs [Leb02]. Viruses are perhaps the most widespread usage of reflection, and they show that advanced reflection is not only a research goal.

Conclusion

We have seen that reflection and meta-programming are interesting techniques to reason about programs. We can use these techniques to help with debugging, resolving cross-cutting concerns and even programming language design. Where meta-programming allows programs to reason about other programs, reflection-capable programs can reason about themselves.

2.2 Continuations

A continuation is the state of a running program at a certain point in time. Using reification, this state can be grabbed and made available as an explicit data structure that can be passed around as a variable for use in further computation. It is a data structure mostly used in functional programming languages, and it captures the complete rest of the computation. We will illustrate the theory with an example, inspired by [Meub]. Consider the following code excerpt in Pico:

{ x:1; y:2; z: x+y; display(z) }

This piece of code defines two variables x and y and puts two values in them, respectively 1 and 2. It then defines a third variable z and puts the sum of the previous two variables in it.
Finally it displays the value of z on the screen. The continuation of the second definition (y:2) in this example is the rest of the code: add x and y, put the result in z and display z. To be able to reify the continuation, we must have a language construct that grabs it. The continuation can be grabbed as follows:

call(...expression using a variable called continuation...)

The expression in the argument of the call function has access to the variable continuation. This variable contains the future of the computation after the call and can be passed to the function continue as follows:

continue(a_continuation, any_value)

This will run a_continuation and force it to return any_value. Consider the following piece of code in Pico, again taken from [Meub]:

call({ n:10;
       while(true,
             if(n=0,
                continue(continuation,"ok"),
                n:=n-1)) })

This code counts down from 10, and when it reaches 0 it returns with the string "ok" as the result of the call to call(). Note that at first sight the computation is an infinite loop that runs forever. However, when the computation reaches the continue() statement, it runs its first argument, which is in this case the current continuation. As this continuation represents the code after the call, which is nothing, the computation stops and returns with "ok". Using this technique we could implement a special loop construct:

{ cont: call(continuation);
  display("ok");
  continue(cont, cont) }

In the first line the continuation is saved in a variable cont. This continuation is the code after the call, thus the last two lines. The second line prints "ok" on the screen, and the third line calls the continuation in cont and forces it to return with the value of that same variable cont. So the call in line one returns with the same continuation that it returned when it was first run, and saves it again in the variable cont. Now the process starts all over again and a loop is accomplished. The result of this program is a continuous printing of "ok" on the screen.

Coroutines

Continuations can also be used to implement a special style of programming. Instead of calling functions that return values, different continuations call each other, and the actual computation jumps between the different continuations.
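The flavour of explicitly passing the rest of the computation around can be approximated in Python, even though Python lacks a first-class continuation construct like Pico's call. The functions add_cps and countdown below are invented illustrations of the idea, not a model of Pico's actual mechanism:

```python
def add_cps(x, y, k):
    # Instead of returning x + y, hand the result to the continuation k.
    k(x + y)

def countdown(n, k):
    # Mirrors the Pico countdown above: loop until 0, then invoke the
    # captured continuation k with the value "ok".
    while True:
        if n == 0:
            return k("ok")  # jump to the rest of the computation
        n = n - 1

results = []
add_cps(1, 2, results.append)  # the continuation receives 3
countdown(10, results.append)  # the continuation receives "ok"
```

Each function never "returns" in the ordinary sense; it hands its result to whatever computation comes next, which is the essence of making the continuation explicit.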
Instead of returning values as the result of a function call and returning to the place where the function was called, the programmer has to code the swaps between continuations manually. This is called programming in a continuation-passing style. An example of this style is the coroutine. Coroutines are functions or procedures that call each other, but without return information. When one coroutine calls another coroutine, it will not automatically return to the first one. The only way to return is to explicitly call the first one. Table 2.2, from [Meub], shows an example of two coroutines that alternately print one and two on the screen.

{ loop: void;
  loop:=call({ loop:continuation;
               while(true, loop:=call({ display("one");
                                        continue(loop,continuation);
                                        continuation })) });
  while(true, loop:=call({ display("two");
                           continue(loop,continuation);
                           continuation })) }

Table 2.2: A coroutine that alternately prints one and two on the screen.

Both routines run indefinitely, but after every display statement they jump to the other coroutine using the variable loop, which always contains the future, in this case the other coroutine.

Coroutines vs threads

Different, independently running routines can also be achieved using threads. These routines can, like coroutines, compute their results independently from each other. The difference between coroutines and threads lies in the way the swap between the two computations is done. In coroutine programming the programmer has to code the jump to the other coroutine manually. In thread-based programming, however, this decision is made by the programming language or the operating system running the program, on the basis of the priorities of the different threads. To the programmer threads seem to run simultaneously, but in reality the implementation of the thread system schedules the threads and quickly swaps between them. If we were to implement the example of table 2.2 using threads, the program would occasionally print one or two several times in a row, without alternating with the other word. This is caused by the scheduling of the threading implementation.

2.3 Evaluators

Programming languages can be interpreted at runtime using an evaluator, or compiled to machine code or an intermediate binary format in order to execute the program.
It takes longer to run a program under an interpreter than to run the compiled code directly on the machine, but interpreting is still faster than the complete compile-run cycle. This is important while testing and debugging where

the delay of the compile cycle is a waste of time. Interpreters can also make it easier to add new concepts to the programming language and can help in testing them more quickly. Where a compiler converts the source code of a program to machine language, the evaluator interprets the source code at runtime by checking the syntax and evaluating it. An evaluator usually consists of three phases:

Read: converts the sequence of characters of the source code to a data structure the evaluator can understand.
Eval: evaluates the output of the reader and returns a value.
Print: prints the output of the evaluator to a screen or file.

The reader usually consists of two subphases, the scanner (or lexical analyser) and the parser (or structural analyser). The scanner analyses the source code, detects lexical errors and outputs a more abstract stream of data without comments, whitespace or other unneeded information. The parser takes this input and converts it to a parse tree. The evaluator evaluates this parse tree and returns a value representing the result of the evaluation. The printer finally prints this value on the screen, representing the data in a readable way.

Figure 2.2: Example read-eval-print evaluation.

Figure 2.2 shows how the three main parts of the evaluator work together. The reader reads the input string "add(5,6)", parses it and outputs the parse tree that consists of an application (APL) of the function add on two numbers (NUM), 5 and 6. The evaluator takes this parse tree as its input and applies the function add to the

two numbers. The function add adds the two numbers and returns the sum of both as a new number. The evaluator then passes this number on to the printer, which converts it to a string and prints it on the screen.

Figure 2.3: The two stages of the scanner.

In figure 2.3 we focus on the workings of the reader. The scanner, or tokenizer, converts the input string "add(5,6)" to a list of tokens. These tokens refer to the different elements in the input string. The scanner successively distinguishes a name add, a left parenthesis, a number, a comma, a number and a right parenthesis; finally it concludes with an end token. The parser takes this list of tokens, checks whether they meet the rules of the syntax and outputs the parse tree, which is evaluated by the evaluator as shown earlier in figure 2.2. The evaluator loops over the three phases, successively running the reader, evaluator and printer. This is called the Read-Eval-Print Loop or REPL. In many functional languages this can be implemented simply as:

(loop (print (eval (read))))

Meta-Circular Evaluation

The evaluator for a programming language is often implemented as a meta-circular evaluator, which is an evaluator that is implemented in the same language that it interprets. Figure 2.4 illustrates the architecture of a meta-circular interpreter. The interpreter M1 is itself interpreted by the interpreter of L1 and at the same time implements an interpreter for L1. Programming languages such as Scheme [KCE98] or Pico [D'H03] allow very nice and simple meta-circular interpreters [ASS96, D'H03] to be written. It is a very common exercise when learning these languages to write a meta-circular interpreter, to get familiar with the details of the semantics of the language

Figure 2.4: M1 is implemented by L1 and at the same time implements L1.

and the techniques required to implement it. It can help to understand the parts of the language that would otherwise be overlooked because they are too tricky or seemingly trivial. By implementing an interpreter, one cannot skip these parts and is forced to implement, and thus understand, the whole picture. Besides being useful for educational purposes, meta-circular interpreters can also be used for debugging programs, for adding unusual or experimental control strategies and language extensions, or even to allow a proof system to reason about the semantics of the language. Finally, meta-circular evaluation can be used to add reflective properties to a language that does not support reflection natively. We will use the meta-circular interpreter for Pico in our research in chapter 6 to implement an experimental language based on Pico. It is clear that meta-circularity is an important concept for programming language design. A meta-circular evaluator can at the same time be a reference implementation and a test case for the real interpreter. It is usually simpler than the base interpreter, which is written in a lower-level programming language and executed as machine code. The meta-circular evaluator is thus very useful to teach the syntax and semantics of a language, as well as to serve as a reference implementation for an interpreter or compiler.

2.4 Concrete and Abstract Syntax

When designing a programming language, the syntax of the language highly influences its usability and readability. The syntax must also be parsable by the interpreter or the compiler and it must be deterministic and unambiguous: the parser should always be able to determine the correct meaning of the source code. In section 2.3 we explained how evaluators interpret the text or source code of a program and evaluate it.
In this section we will see which internal languages and data structures are used in these evaluators to support the implementation of a programming language.
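As a concrete illustration of these internal data structures, the read-eval pipeline of section 2.3 can be written down in a few lines of Python for the single input "add(5,6)". The tag names (NAME, NUM, APL) follow figures 2.2 and 2.3; everything else is our own simplification, not Pico's actual implementation.

```python
# Scanner: "add(5,6)" -> token list, as in figure 2.3.
def scan(source):
    tokens, i = [], 0
    while i < len(source):
        c = source[i]
        if c.isalpha():                          # a name such as add
            j = i
            while j < len(source) and source[j].isalnum():
                j += 1
            tokens.append(("NAME", source[i:j])); i = j
        elif c.isdigit():                        # a number
            j = i
            while j < len(source) and source[j].isdigit():
                j += 1
            tokens.append(("NUM", int(source[i:j]))); i = j
        else:                                    # punctuation: ( , )
            tokens.append((c, c)); i += 1
    tokens.append(("END", None))
    return tokens

# Parser: token list -> parse tree, here an application (APL) of a name
# on number arguments, as in figure 2.2.
def parse(tokens):
    (_, name) = tokens[0]
    assert tokens[1][0] == "("
    args = [("NUM", tok[1]) for tok in tokens if tok[0] == "NUM"]
    return ("APL", name, args)

# Evaluator: walks the parse tree, dispatching on the tag of each node.
PRIMITIVES = {"add": lambda a, b: a + b}

def evaluate(tree):
    if tree[0] == "NUM":
        return tree[1]
    if tree[0] == "APL":
        _, name, args = tree
        return PRIMITIVES[name](*[evaluate(a) for a in args])
    raise ValueError("unknown tag")
```

The printer is then just string conversion of the result of evaluate(parse(scan("add(5,6)"))).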

<postal-address> ::= <name-part> <street-address> <zip-part>
<personal-part>  ::= <name> | <initial> "."
<name-part>      ::= <personal-part> <last-name> <EOL> | <personal-part> <name-part>
<street-address> ::= <house-num> <street-name> <EOL>
<zip-part>       ::= <town-name> "," <state-code> <ZIP-code> <EOL>

Table 2.3: Example BNF: the US postal address

Concrete Syntax

The definition of the syntax of a programming language is called the Concrete Syntax or Concrete Grammar. It specifies tokens and keywords that allow the parser to construct an abstract syntax tree unambiguously. This concrete syntax is usually expressed in a context-free grammar. In such a grammar, every production is of the form V → w, where V is a non-terminal symbol and w is a string containing terminals and non-terminals. The grammar is called context-free because the variable V can always be replaced by w, no matter in what context it occurs. This makes the implementation of the parser easier, as it only has to care about the current symbol, without knowing what came before it. The concrete grammar is often expressed in Backus-Naur form (BNF), introduced by Backus and Naur [Nau63]. It is a meta-syntactic notation used to specify the syntax of programming languages and communication protocols, for example HTTP [FGM+99]. Table 2.3 contains an example of the BNF notation, in this case used to define the syntax of a US postal address. The left hand side of the ::= sign equals the V mentioned earlier, while the right hand side is the w part. We can see that a postal-address consists of a name-part followed by a street-address and an identification or zip-part. The | sign is an or operator indicating that either the left or the right side of the sign should be used.
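Such BNF rules translate almost mechanically into recognising code. As a small sketch, here is the zip-part rule as a Python regular expression; the concrete character classes are our assumption, since the BNF above leaves <town-name>, <state-code> and <ZIP-code> unspecified.

```python
import re

# <zip-part> ::= <town-name> "," <state-code> <ZIP-code> <EOL>
# Assumed shapes: town name = letters and spaces, state code = two
# capital letters, ZIP code = four or five digits.
ZIP_PART = re.compile(r"[A-Za-z ]+, [A-Z]{2} \d{4,5}")

def is_zip_part(line):
    return ZIP_PART.fullmatch(line) is not None
```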
As an example we will check whether the following fictional address matches the syntax of a US postal address:

Adriaan Peeters
24 Highstreet
Donnel, DO 1337

Starting from postal-address we try to match the address against the concrete syntax. We can match Adriaan with the name in the personal-part. Peeters can be matched to the last-name, and together they form the name-part. This finishes the first line; we can match the rest of the address in a similar fashion. Finally we can conclude that the address matches the concrete syntax of a US postal address. However, even when the concrete syntax is correct, this does not mean that the information is reasonable. When we refer to a non-existing fictitious

person or a non-existing street name, the information is useless, although the syntax is correct. To indicate the meaning of the syntax, we use the abstract syntax.

Abstract Syntax

The Abstract Syntax or Abstract Grammar is a specification of the internal representation of the computer program that is independent of machine-oriented structures and encodings, and also of the physical representation of the data, the concrete syntax. The internal representation of a program will typically be specified by an abstract syntax in terms of categories such as statements, expressions and identifiers. The abstract syntax is implemented as a simple data structure and is used by the evaluator or the compiler to handle the program. It is usually a lot simpler than the concrete syntax and is usually implemented as an abstract syntax tree that can easily be interpreted by the evaluator or the compiler. The abstract syntax of a language is usually not communicated to the programmers that use the programming language, as it is normally only useful to the implementors of the programming language. It consists of all internal data structures representing the program that the compiler or evaluator uses to do its job. When the programming language is reflective (in which case the programmer has access to the internal state of the program), the organisation of this abstract grammar must be known in order to be able to reason about it. The data is usually tagged to indicate the type of data it represents. In section 2.5 we will explain the applications of this typing. Table 2.4 shows an example of an abstract syntax: an excerpt from the Pico abstract syntax. We start with an expression and see that it can consist of a number, text, variable, application or void. Furthermore we see that an application is tagged with an APL tag and consists of a name and arguments. The other expressions have a similar interpretation.
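A sketch of what such a tagged representation can look like in Python. The tag names NBR, TXT, VAR and APL follow the Pico excerpt in table 2.4; the use of namedtuples is our own choice of encoding.

```python
from collections import namedtuple

# Each abstract-grammar category is a small tagged record.
Nbr = namedtuple("Nbr", "value")           # NBR: a number
Txt = namedtuple("Txt", "value")           # TXT: a text string
Var = namedtuple("Var", "name")            # VAR: a variable reference
Apl = namedtuple("Apl", "name arguments")  # APL: an application

def tag_of(expression):
    """The evaluator dispatches on the tag; here the tag is the node type."""
    return type(expression).__name__.upper()

# The abstract syntax tree for add(5,x):
tree = Apl("add", [Nbr(5), Var("x")])
```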
2.5 Static and Dynamic Typing

Almost all programming languages have a type system: a system that defines what types of data can be manipulated in the language and how these values can be stored in variables and referenced. We will explain the different typing systems using some simple examples in pseudo code.

Types

To aid the execution of a program, whether it is interpreted or compiled, all data, or values, in a programming language are tagged with the type of value they represent. Most programming languages support data structures such as integers, text strings and tables or arrays. These data structures are stored in memory and tagged so that the evaluator or the compiler can quickly see what type of data it encounters and act accordingly. The tags that are used correspond to the tags in the abstract grammar, as we saw in table 2.4. Without this tagging the compiler or evaluator would have to deduce the type of the value from the data itself. This

is slow and sometimes difficult or even impossible to do. Because computers only manipulate bits, or numbers, there is no distinction in the hardware between memory addresses, instruction code, characters, strings and integers. Everything is stored in memory as numbers. An interpreter or compiler without typing can only see these numbers and is thus unable to deduce whether a number should be interpreted as an integer or as a text string. So there must be a way to distinguish the different types of data in memory. Table 2.4 shows in rows 6 and 7 how an integer in Pico is tagged with a NBR tag and a string value with a TXT tag. Because different programming languages use different data structures, they thus use different data types.

Static Typing

In programming languages that use static typing, the types of the values used in the program are determined by a static analysis of the program's source code. Statically typed systems usually assign a single type to each variable, function parameter or any other reference to a data structure. Once this type is assigned, it is not possible to change it, so when the same variable is assigned two values of different types, an error is raised. Consider the following example:

var x;          // (1)
x = 5;          // (2)
x = "hello";    // (3)

In this example, (1) declares a variable named x, (2) binds the integer value 5 to x and (3) binds the string value "hello" to x. In a statically typed programming language, this code fragment would raise an error, as it is illegal to bind two values of different types to one variable. In contrast to this example, most statically typed programming languages define the type of a variable when declaring it. This way the evaluator or the compiler knows exactly what type of data each variable contains and can enforce this type throughout the entire program. Static type checking operates on the program source code rather than at execution time.
Therefore, the interpreter or compiler can detect variable type violations without executing the program, which allows early detection of certain errors. Furthermore, the compiler can optimise the code by including only functionality for the data types that are used and by avoiding type checks at runtime.

Implementations

Programming languages that use static typing include C [Str91] and Java [Mic03b]. Some statically typed languages such as C include functionality to circumvent the static type system. They are called weakly typed programming languages, as opposed to strongly typed languages that do not have this back door.

Dynamic Typing

Programming languages that use dynamic typing assign a type to each data element at runtime. Values of all types can be assigned to a single variable, and once a value of one type is assigned, it is still possible to assign a value of a different type to the variable without an error being raised. As opposed to statically typed languages, where both the variable and the value are tagged with the type they represent, dynamically typed languages only tag the value, and all computation is done based on the tag of this value. The variables are not tagged and only contain a reference to the value they hold. In dynamically typed languages, the example from the section on static typing will not cause an error, neither at compile time nor at runtime. As a result of this dynamic typing, errors related to the misuse of values, or type errors, are only detected at runtime, when the erroneous statement or expression is executed. Consider the following example:

var x = 5;        // (1)
var y = "hello";  // (2)
x + y;            // (3)

This code fragment binds the integer value 5 to x (1), the string value "hello" to y (2) and tries to add x and y (3). In a dynamically typed language, the values 5 and "hello" are tagged as integer and text string respectively, but the variables x and y are not tagged. So when compiling the program, the compiler cannot know what type of value the variables will contain and cannot foresee possible problems. When the program finally attempts to add both values when executing line (3), the system checks the type tags of the values, finds out that the operation + is not defined on an integer and a text string, and signals an error. Dynamic typing simplifies the programming task, because the programmer does not have to worry about data types when it is not necessary. The drawback is that typing errors are only detected when running the program. This complicates the task of verifying that the code is correct and requires debugging at runtime.
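The fragment above behaves exactly this way in Python, itself a dynamically typed language: nothing is reported until line (3) actually executes, at which point the runtime inspects the value tags and signals a type error. The wrapper function is ours, added so the error can be observed.

```python
def run_example():
    x = 5          # (1) the type tag lives on the value 5, not on x
    y = "hello"    # (2) x and y themselves carry no type
    try:
        return x + y   # (3) the type check happens here, at runtime
    except TypeError:
        return "type error at runtime"
```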
When a typing bug is not discovered during the verification stage of the development of a project, it can slip into the final product and is then difficult to resolve. Advocates of dynamic typing argue that these disadvantages are outweighed by the flexibility of the system.

Implementations

Many recent programming languages use dynamic typing. As computational power becomes cheaper, the optimisation offered by static typing is not that important anymore: type checking can easily be done at runtime, and the performance drawback can almost be ignored. Programming languages such as Scheme [KCE98], Perl, PHP and Smalltalk [GR83] use a dynamic typing system.

Overloading

A typing system is not only important for performance reasons or to avoid errors; it can also be used to determine what computation should be performed. Consider the following code example in pseudo code:

fun add(int a, int b);
fun add(string a, string b);

The first line declares a function to add two integers and the second declares a function to add two strings. Both functions have the same name. Without typing of the parameters this would cause an error, but using typing the compiler can decide whether the first or the second function should be used on the basis of the parameters to the function when it is called. When we call the function add as follows

var y = 6;
add(5,y);

the compiler can infer that both parameters to add are integers and thus that the first function from our example should be called. When we call the function using two strings

var last = "Doe";
add("john",last);

the compiler finds two strings as parameters and calls the second add function in this case. This system is called overloading: several functions with the same name can be implemented, as long as they use different types of arguments. It is also possible to overload a function on the basis of the number of arguments, or on a combination of both argument types and argument count. Although overloading can be very powerful, when used incorrectly it can affect the readability of the program source code. The operator << in C++ is a good example of how incorrect use of overloading can complicate usability: the expression a << 1 will return two times the value of a when a is an integer, but if a is an output stream it will write 1 into it.

Conclusion

Typing is an important aspect of the design of a programming language. Both static and dynamic typing have their advantages and disadvantages. Therefore some languages implement a mixture of static and dynamic typing.
In Perl for example it is possible to choose between static and dynamic typing for every individual variable.

Summary

In this chapter we have introduced several techniques and terms concerning programming languages and programming language research. We saw the different approaches to the implementation of reflection and how an evaluator for a programming language works. We explained what continuations and coroutines are and how they can be used. Furthermore, we introduced the concept of concrete and abstract grammars, which play an important role in the design and development of current and new programming languages. Finally, we explained the importance of a typing system for programming language design and implementation.

Chapter 3

Programming Paradigms

In this chapter we will describe the most important programming paradigms developed and used during the history of computer science. First we will discuss what a programming paradigm is, explain the theory behind the solutions of computational problems and make clear why different programming paradigms emerged. Finally we will sketch an overview of the main programming paradigms or programming styles.

3.1 Paradigm

Before we can talk about programming paradigms, we have to decide what a programming paradigm is. We will start with the meaning of the words programming and paradigm. The definition of programming is clear; we include it here for completeness.

pro·gram·ming or pro·gra·ming n. 1. The designing, scheduling, or planning of a program, as in broadcasting. 2. The writing of a computer program.

We will use the definition in 2. More puzzling is the definition of paradigm. The American Heritage Dictionary of the English Language defines paradigm as follows:

par·a·digm n. 1. One that serves as a pattern or model. 2. A set or list of all the inflectional forms of a word or of one of its grammatical categories: the paradigm of an irregular verb. 3. A set of assumptions, concepts, values, and practices that constitutes a way of viewing reality for the community that shares them, especially in an intellectual discipline.

A programming paradigm serves as a model for a certain type of programming practice. There exist many different programming paradigms, and each paradigm can be seen as a different approach to finding a solution for a computational problem. Timothy A. Budd says [Bud95] that a programming paradigm is the way of conceptualising what it means to perform a computation, and that it describes how tasks that have to be carried

out should be structured and organised. Both 1. and 3. are accurate descriptions of what a programming paradigm is. It is the combination of methods, theories and standards of coding. In the real world, we can use different words to describe the same thing, but we can also use completely different languages, or even sign language, to describe it. When programming we can use different languages or methods to implement our program. Therefore we make a distinction between programming languages and programming paradigms. A programming language is an implementation of a programming paradigm. A programming paradigm is a collection of properties that is shared by multiple programming languages and that defines a certain style of programming. We will see that the same scheme of different ways to express an idea applies to computer programming paradigms and languages.

3.2 Why different paradigms?

Budd notes that the language we use in our everyday life influences our view of the world. This is called the Sapir-Whorf hypothesis: people want to organise the world, and the main tool they use is language. The language we use determines how we experience the world. So there is a close relationship between the structure and vocabulary of a language and the culture that uses the language. Although this hypothesis is largely discredited in modern linguistics, especially in its strongest form [Phi98], Budd claims that it can be applied to a more restricted form of language known as programming languages. He draws on this Sapir-Whorf hypothesis to make a case for multi-paradigm programming. Budd demonstrates his claim by showing the influence of the programming paradigm on the solution that a programmer found for a specific problem. A genetic research student had to check if a short sequence (size M) of DNA was repeated in a large genetic sequence (size N).
He solved the problem in what was for him the most efficient programming language: FORTRAN. He came up with the following:

DO 10 I = 1, N-M
DO 10 J = 1, N-M
FOUND = .TRUE.
DO 20 K = 1, M
20 IF X[I+K-1] .NE. X[J+K-1] THEN FOUND = .FALSE.
IF FOUND THEN CONTINUE

At first sight this program seemed able to solve the problem, but it needed more than ten hours to complete. The student then discussed the problem with another student, and she proposed to reformulate the problem in APL [Bud95]. Because it is more natural to use sorting in APL instead of loops, she came up with a solution that divided the long sequence into short sequences of length M. Then she sorted those short sequences, and if a duplicate was found, the short sequence was repeated. With this solution the program ran in just a few minutes instead of hours. Although the overall performance of FORTRAN for standard computational

tasks was much better than that of APL, the APL solution was tens of times faster than the FORTRAN solution, because of the approach taken. This shows how important the influence of the paradigm is on the solution that is found. More examples show that there is indeed an influence of the paradigm that is used. With this result we can see that it is important to investigate whether it is possible to combine the best properties of multiple paradigms. If we can achieve a successful merge of multiple paradigms, we can allow the programmer to use the paradigm that fits best for one particular problem, while using a single programming language. Using abstraction it should be possible to reuse a piece of code written in another paradigm, without having to worry about which data structures need to be used to communicate between the paradigms. In chapter 5 we explain how we can implement such a language and see some example implementations.

3.3 Church, Turing

Before we investigate the different programming paradigms, we will introduce the theory of computability and explain why we can compute the same result using different programming paradigms. In the beginning of the twentieth century mathematicians came up with some questions they wanted to see solved. One of them was the question whether there is an algorithm that can be applied to any mathematical assertion and will eventually tell whether that assertion is true or false. This question is known as the Entscheidungsproblem or decision problem. Both Alan Turing and Alonzo Church proposed a solution to this question and answered that it is not possible to construct such an algorithm.

Turing Machine

Alan Turing defined a hypothetical machine to prove that the decision problem is undecidable. This Turing Machine [Tur36] consists of an infinitely long bidirectional tape, with symbols at regular intervals.
The machine knows the current location on the tape and the current state, which is one of a finite set of internal states. At each step, the machine reads a symbol from the tape. Using the internal program, which specifies the action to take for every combination of current state and symbol read, the machine decides what the next state will be, what symbol has to be written to the tape and whether the tape should move left, move right or stay in its original position. This simple system constitutes the entire Turing machine. Using this machine, Turing proved that the answer to the decision problem is negative for elementary number theory, and proved the incomputability of the halting problem [Tur36].

Church Conjecture

Another way to prove that the decision problem cannot be computed was proposed by Church: the lambda calculus. This calculus is the theoretical basis for functional programming. Now there were two solutions to the decision problem that

were both able to show that the decision problem is unsolvable. So they had the same computational power. This led Church to formulate the conjecture that all formalisms of computability are equivalent. It was later reformulated and named after him [HMU01]:

Church's Conjecture: Any process for which there exists an effective procedure can be realized by a Turing machine.

Translated to current computer science terms, this means that any program can be executed by a Turing machine. If we accept Church's conjecture, then any language in which it is possible to simulate a Turing machine is powerful enough to implement any algorithm. Such a language is called Turing complete. When we apply this to programming paradigms, we can solve a problem using any paradigm if we can solve it using one of the other paradigms. No machine matching the exact Turing design has ever been implemented, except for educational purposes. Although it would be theoretically possible to use it for any computation, this would be very difficult, as the machine would be too slow and complicated to use and program. This is called the Turing tarpit: a language or system that is theoretically universal, but in practice too complicated to use.

3.4 Main paradigms

Turing machines can be used to solve all computational problems, but they are very difficult to use. Programming a Turing machine is complicated, and fixing bugs even more so. So Turing machines were only used in theoretical computer science, for example in research on computability. During the history of computer science, different programming paradigms were developed as different ideas about programming emerged. Every paradigm has its own purpose and background. Some became very popular, others did not, despite their excellent characteristics. When talking about programming paradigms, we distinguish four main paradigms.
Imperative paradigm: a sequential enumeration of instructions used to alter distinct memory locations.

Functional paradigm: computation using functions, based on mathematical theory.

Logic paradigm: queries that return result sets computed using logic theory.

Object-oriented paradigm: models real-world things; computation is done through messages between objects.

There exist other, less used paradigms, some of which are subsets or extensions of the four main paradigms. Some examples are:

Parallel paradigm: a technique to allow programming for multi-processor architectures.

Visual paradigm: programming based on combining components through images.

Constraint paradigm: tries to meet constraints; constraint programming, for example, is closely related to logic programming.

In this dissertation we will focus on the four main paradigms. We can also divide them into stateful and stateless paradigms [MMR95]:

stateful: In stateful paradigms, the computation goes from one state to another by applying functions to variables. The values in the variables are overwritten when they are changed.

stateless: In stateless paradigms, functions have no side effects. Variables cannot be reassigned; a variable can only be given a value when it is defined.

Stateful paradigms reflect a more engineering-oriented view on programming, while stateless paradigms are more mathematically oriented. In the next sections we will see how this relates to the ideas behind the different paradigms. Implementations of the different programming paradigms often combine bits and pieces from several paradigms. For example, object-oriented programming languages mostly use an imperative memory model, and updates to the objects are done using destructive programming constructs. Programming languages that implement a single programming paradigm, without any influence of other paradigms, are called pure programming languages. Programming languages implementing a mixture of paradigms are called impure. In the remainder of this section we will describe the four main programming paradigms and give some examples of programming languages that instantiate them.

Imperative Paradigm

The Imperative Paradigm is the classical model of computation. It is very close to the way real hardware machines work.
The paradigm is based on sequences of conditional and looping constructs, and on ways to destructively update the memory that is used. The Imperative Paradigm is often described as a system that goes from one state to another via incremental changes to the state. At every step in the program, the entire system goes from one state to another. During this computation the results are placed directly into memory. The memory locations where the data is

stored are fixed, but the contents change over time. The elements of a large structure, such as an array, are manipulated one by one to compute the result.

Programming in an imperative paradigm is not very human-friendly, as it is quite different from the way humans think. Because it is very close to the way hardware machines work, it is difficult to get an overall picture of what the program does, and extending and debugging programs is hard. To improve usability, most instantiations of the Imperative Paradigm include special statements such as if/else and while statements. Variables and arrays are also often included. These extensions try to simplify programming in an imperative language, and they do help somewhat, but programs remain quite messy. We will see that other paradigms try to resolve these problems of complexity by offering different solutions for encapsulation or abstraction of functionality and data.

Instantiations

The most well-known examples of imperative programming are Fortran [Ame78], C [KR88] and Pascal [Bur74]. All assembler languages are also obvious examples of the Imperative Paradigm.

Functional Paradigm

The Functional Paradigm takes a completely different approach. As the name states, all computation is done through functions. Functional programs contain no assignments: values can be created, but once created they cannot be changed. Computation on structures (for example, adding a value to every item of an array) does not change the structure itself, but returns a new structure containing the result of the computation. As there are no changes to the state of the program over time, we say that functional programs are atemporal ("without time"). We can even delay some computations until the value is really needed. Because there are no assignments, the result of a function given the same arguments is always the same, no matter when the function is called. We call this property referential transparency.
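The contrast between the two paradigms can be illustrated with a small sketch in Python (used here purely for illustration; the function names are our own). The imperative version destructively overwrites the memory cells of an existing list, while the functional version leaves its argument untouched and returns a new structure, preserving referential transparency:

```python
# Imperative style: the list is updated in place, one memory
# location at a time; the old values are destroyed.
def increment_in_place(xs):
    for i in range(len(xs)):
        xs[i] = xs[i] + 1   # destructive update

# Functional style: no assignment to existing cells; a new
# list is returned and the argument is left unchanged.
def increment_pure(xs):
    return [x + 1 for x in xs]

data = [1, 2, 3]
fresh = increment_pure(data)   # data is still [1, 2, 3]
increment_in_place(data)       # data is now [2, 3, 4]
```

Calling increment_pure twice with the same argument always yields the same result; after increment_in_place, the original values are no longer recoverable.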
Functions that possess referential transparency can be manipulated like mathematical objects, which allows programs to be proven correct using mathematics. Another important feature of functional languages is that a function itself is first-class: we can store functions in variables, pass them as arguments to other functions, or return them as results from functions. This enables programmers to create higher-order functions that accept functions as arguments. We can, for example, write a general sort function and pass the comparison function as a parameter. This way we can easily create many different sort functions. Quite complicated computations can thus become trivial to implement.

Instantiations

ML [MTH90] and Haskell [Jon03] are both pure functional programming languages. Other functional programming languages, such as Scheme [KCE98], are often extended with imperative procedures and are thus impure.
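The general sort function mentioned above can be sketched in Python (illustrative only; the names are our own). The ordering is changed simply by passing a different comparison function, with no change to the sorting logic itself:

```python
def sort_by(xs, less_than):
    """Return a new sorted list; the comparison is a parameter."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    smaller = sort_by([x for x in rest if less_than(x, pivot)], less_than)
    larger = sort_by([x for x in rest if not less_than(x, pivot)], less_than)
    return smaller + [pivot] + larger

ascending = sort_by([3, 1, 2], lambda a, b: a < b)   # [1, 2, 3]
descending = sort_by([3, 1, 2], lambda a, b: a > b)  # [3, 2, 1]
```

Note that, in the functional spirit described above, sort_by never mutates its input; each call returns a fresh list.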

Logic Paradigm

The logic programming approach arose from research in Artificial Intelligence and Automatic Theorem Proving. It is used in AI to create an abstract model of the world and to reason about it. A logic program consists of three kinds of statements: facts, rules and queries. The only data structure is the logical term, inherited from first-order logic. Using facts and rules, the programmer specifies what he wants to be computed instead of how it has to be computed. This is called Declarative Programming: the desired answer is specified, and the computation is a search of the space of possible solutions that match this answer. The search tries all facts and rules and uses backtracking when a rule cannot be satisfied. Like functional languages, the logic paradigm has no state: a query will always return the same result, provided that the database of facts and rules has not changed. We will explain the details of logic programming in chapter 4.

Instantiations

The most well-known logic programming language is PROLOG [DEDC96]. It is the reference implementation of logic programming, and PROLOG is very often used as a synonym for logic programming. Other programming languages that use logic programming often combine it with other functionality, as we will see in a later chapter.

Object-Oriented Paradigm

Object-oriented programming (OO) can be considered the most important programming paradigm. Although it has its flaws, it is the most used paradigm in new programming projects. In object-oriented programming everything is an object. An object contains both the data and the functions that can be applied to that data. Because the data in the object changes over time, the system is stateful and therefore does not possess the referential transparency property.
Encapsulating data and functionality in one object is a fundamentally different approach compared to the other paradigms, and it is a major advantage for abstraction: the different parts of a complex system can easily be separated and protected from each other. This enables reuse and independent development when several programmers work on a large project. To implement the functionality of the program, the programmers define relationships between the different objects. Using inheritance and polymorphism, new objects based on the data and functions of other objects can be adapted by changing the inherited functionality. This way the internal data structures of an object can differ completely from those of its parent object, without this being visible from the outside. Objects are often compared to real-world objects and are usually implemented to represent a real-world object in a computational environment. These objects then act as components offering a specific service. Other objects can use the service but do not have to know how the service is performed and implemented. In large projects this becomes the most important quality of object-oriented programming. It allows

to develop the different objects independently, as well as to replace them with other objects without having to change anything in the rest of the project.

There are different views on object-oriented programming, based on how objects are created:

Class-based object-orientation is the traditional way of object-oriented programming. A class is used to organise the basic layout and functionality of other objects. New objects are created by instantiating this class and allocating memory for their internal state. The object contains only its state, which is the data, and a reference to the class it instantiates, through which it can access the functionality. When interacting with the object, the code from the class is used, along with the data from the object. Figure 3.1 illustrates this.

Figure 3.1: A class-based object.

Prototype-based object-orientation takes another approach: there are only objects and no classes. Both functionality and state are contained in the object, and only for its own usage, so an object is very simple, as we see in figure 3.2. When a new object has to be created, it is copied from an existing one. Functionality is added by adding methods and variables to the newly created object. This system makes it easier to adapt the functionality of an object to the physical object or to the information it represents.

Figure 3.2: A prototype-based object.

A large number of tools has arisen to aid object-oriented software design. Modelling languages such as UML [FS00] are almost indispensable for organising large object-oriented projects.

Design Patterns

In an attempt to capture frequently used patterns in object-oriented software development, Design Patterns [GHJV95] were developed. A design pattern describes a commonly recurring design problem, together with the core of a reusable solution to that problem.
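The two object models can be sketched in Python (illustrative only; Python itself is class-based, but the prototype model can be imitated by copying plain objects, and all names below are our own):

```python
import copy

# Class-based: the class holds the behaviour; each instance
# holds only its own state plus a reference to its class.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def moved(self, dx, dy):
        return Point(self.x + dx, self.y + dy)

p = Point(1, 2)
q = p.moved(3, 0)           # q.x == 4; behaviour comes from the class

# Prototype-based (imitated with a dict): no classes, only objects;
# both state and functionality live in the object itself.
proto = {"x": 1, "y": 2,
         "moved": lambda self, dx, dy: {**self,
                                        "x": self["x"] + dx,
                                        "y": self["y"] + dy}}

clone = copy.copy(proto)    # a new object is a copy of an existing one
clone["x"] = 10             # the copy is then adapted
r = clone["moved"](clone, 0, 5)   # r["y"] == 7
```

In the class-based half, adding behaviour means changing the class; in the prototype half, behaviour is added or replaced per object, simply by storing a new function in the copy.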
