INT (x86 instruction)


INT is an x86 instruction that generates a software interrupt. When written in assembly language, the instruction is written like this:

INT X

where X is the number of the software interrupt to be generated (0–255).

Depending on the context, compiler, or assembler, a software interrupt number is often given as a hexadecimal value, sometimes with the prefix 0x or the suffix h. For example, INT 21H generates software interrupt 0x21 (33 in decimal), causing the function pointed to by the 34th vector in the interrupt table to be executed; this is typically an MS-DOS API call.
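For illustration, the same instruction can be spelled in either notation. NASM-style syntax is assumed here; other assemblers may accept only one of the two forms:

    int 21h      ; hexadecimal immediate with an "h" suffix (MASM/NASM style)
    int 0x21     ; the same value with a C-style "0x" prefix (also accepted by NASM)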


When a software interrupt is generated, the processor calls one of the 256 functions pointed to by the interrupt address table, which in real mode is located in the first 1024 bytes of memory (see Interrupt vector). It is therefore entirely possible to start an interrupt function manually with a far-call instruction after pushing the FLAGS register.
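A minimal real-mode sketch of that idea, in NASM-style 16-bit syntax and purely for illustration, invokes the handler for interrupt 0x21 through its vector without executing INT; FLAGS is pushed first because the handler's IRET will pop it again:

    xor  ax, ax
    mov  es, ax              ; ES = 0000h, the segment of the interrupt vector table
    pushf                    ; INT pushes FLAGS before the far call, so do the same
    call far [es:0x21*4]     ; far call through the 34th vector (offset word, then segment word)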

One of the most useful DOS software interrupts was interrupt 0x21. By calling it with different parameters in the registers (mostly AH and AL) you could access various I/O operations, string output, and more.[2]
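A minimal illustrative DOS program built on this interface might look as follows (NASM syntax for a 16-bit .COM executable; AH = 09h selects the DOS "print $-terminated string" function and AH = 4Ch terminates the program):

    org 100h                 ; .COM programs are loaded at offset 100h
    start:
        mov  ah, 09h         ; DOS function 09h: write string terminated by '$'
        mov  dx, msg         ; DS:DX points to the string
        int  21h             ; call the DOS API
        mov  ax, 4C00h       ; AH = 4Ch: terminate, AL = 00h return code
        int  21h
    msg: db 'Hello from DOS$'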

Most Unix systems and derivatives do not use software interrupts, with the exception of interrupt 0x80, used to make system calls. This is accomplished by entering a 32-bit value corresponding to a kernel function into the EAX register of the processor and then executing INT 0x80.
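As a sketch, a 32-bit Linux program making system calls through INT 0x80 could look like this (NASM syntax; system call numbers 4 and 1 correspond to sys_write and sys_exit in the 32-bit ABI):

    section .data
    msg:    db 'hello', 10   ; message plus newline
    section .text
    global _start
    _start:
        mov  eax, 4          ; sys_write
        mov  ebx, 1          ; file descriptor 1 (stdout)
        mov  ecx, msg        ; buffer
        mov  edx, 6          ; length in bytes
        int  0x80
        mov  eax, 1          ; sys_exit
        xor  ebx, ebx        ; exit code 0
        int  0x80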

The INT 3 instruction is defined for use by debuggers, which temporarily replace an instruction in a running program with it in order to set a breakpoint. Because other INT instructions are encoded using two bytes, they are unsuitable for patching over arbitrary instructions, which can be as short as one byte; see SIGTRAP.

The opcode for INT 3 is 0xCC, as opposed to the opcode for INT immediate, which is 0xCD imm8. Since the dedicated 0xCC opcode has special properties desirable for debugging that are not shared by the normal two-byte encoding of an INT 3, assemblers do not normally generate the generic 0xCD 0x03 opcode from mnemonics.[3]
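For illustration (NASM syntax; exactly which encoding a given assembler emits for the mnemonic "int 3" varies, so the two-byte form is spelled out as raw bytes here):

    int3             ; dedicated one-byte breakpoint opcode: CC
    db 0xCD, 0x03    ; the generic two-byte encoding of INT 3: CD 03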
