Contents

Apache Mynewt is a real-time operating system with a rich set of libraries intended to make prototyping, deploying, and managing 32-bit microcontroller-based IoT devices easy.[4] It is highly composable, allowing embedded applications (e.g., locks, medical devices, industrial IoT) to be built across different types of microcontrollers. The name Mynewt is wordplay on the English word minute, meaning very small: the kernel is only 6 KB in size.

The OS is designed for connectivity, and comes with a full implementation of the Bluetooth low energy 4.2 stack. With the addition of BLE (supporting all Bluetooth 4.2 compliant security features except privacy) and various utilities such as the default file system, console, shell, logs, stats, etc., the image size is approximately 96 KB for the Nordic nRF51822 Bluetooth SoC.[5] This size metric excludes the boot loader image.

The first network stack available in Mynewt is a Bluetooth low energy stack[6] called NimBLE. It complies with Bluetooth Core Specification 4.2.[7]

NimBLE includes both the host and controller components. Access to the controller source code makes BLE performance highly configurable: for example, throughput can be adjusted by changing the connection interval, data packet size, packet queue size, etc. A use case requiring a large number of concurrent connections can similarly be configured, provided adequate RAM is allocated. Example applications that demonstrate how to use the available services are included in the package.

The project includes the Newt Tool, a command-line interface (CLI) based smart source package manager and build system for embedded systems development. It allows composing builds from specified packages and compiler options, generating images and their digital signatures, and finally downloading and debugging the firmware on different targets.

1.
Software developer
–
A software developer is a person concerned with facets of the software development process, including the research, design, programming, and testing of computer software. Other job titles used with similar meanings are programmer and software analyst. According to developer Eric Sink, the differences between system design, software development, and programming are becoming more apparent, even more so as developers become systems architects: those who design the multi-leveled architecture or component interactions of a large software system. In a large company, there may be employees whose sole responsibility consists of only one of the phases above; in smaller development environments, a few people or even an individual might handle the complete process. The word software was coined as a prank as early as 1953; before this time, computers were programmed either by customers or by the few commercial computer vendors of the time, such as UNIVAC and IBM. The first company founded to provide software products and services was Computer Usage Company in 1955. The software industry expanded in the early 1960s, almost immediately after computers were first sold in mass-produced quantities, as universities, government, and business customers created a demand for software. Many of these programs were written in-house by full-time staff programmers, and some were distributed freely between users of a particular machine for no charge. Others were produced on a commercial basis, and firms such as Computer Sciences Corporation started to grow. The computer and hardware makers started bundling operating systems, systems software, and programming environments with their machines; new software was built for microcomputers, so other manufacturers, including IBM, followed DEC's example quickly, resulting in the IBM AS/400 amongst others. The industry expanded greatly with the rise of the personal computer in the mid-1970s, which in the following years created a growing market for games and applications.
DOS, Microsoft's first operating system product, was the dominant operating system at the time. By 2014 the role of cloud developer had been defined, and in this context one general definition of a developer was published: developers make software for the world to use. The job of a developer is to crank out code: fresh code for new products, code fixes for maintenance, and code for business logic.

2.
Apache Software Foundation
–
The Apache Software Foundation (/əˈpætʃiː/) is an American non-profit corporation established to support Apache software projects, including the Apache HTTP Server. The ASF was formed from the Apache Group and incorporated in Delaware; it is a decentralized open source community of developers. The software it produces is distributed under the terms of the Apache License and is free and open-source software. Apache projects are characterized by a collaborative, consensus-based development process and an open and pragmatic software license. Each project is managed by a team of technical experts who are active contributors to the project. The ASF is a meritocracy, implying that membership of the foundation is granted only to volunteers who have actively contributed to Apache projects. The ASF is considered a second-generation open-source organization, in that support is provided without the risk of platform lock-in. Among the ASF's objectives is to provide legal protection to volunteers working on Apache projects. The ASF also holds several ApacheCon conferences each year, highlighting Apache projects. The history of the Apache Software Foundation is linked to the Apache HTTP Server, development of which began in February 1993, when a group of eight developers started working on enhancing the NCSA HTTPd daemon; they came to be known as the Apache Group. On March 25, 1999, the Apache Software Foundation was formed. The name Apache was chosen from respect for the Native American Apache Nation, well known for their superior skills in warfare strategy and their inexhaustible endurance. It also makes a pun on "a patchy web server" (a server made from a series of patches) but this was not its origin. The group of developers who released this new software soon started to call themselves the Apache Group. Apache divides its software development activities into separate areas called top-level projects.
Unlike some other organizations that host FOSS projects, before a project is hosted at Apache it has to be licensed to the ASF with a grant or contributor agreement. In this way, the ASF gains the necessary intellectual property rights for the development of its projects. The ASF board of directors has responsibility for overseeing the ASF's activities and acting as a central point of contact and communication for its projects. The board handles corporate issues, assigns resources to projects, and manages corporate services, including funds. It does not make technical decisions about individual projects; these are made by the individual Project Management Committees. The foundation has no employees and 2,663 volunteers; it spent $270,846 on infrastructure and $92,364 on public relations.

3.
Programming language
–
A programming language is a formal computer language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine or to express algorithms. From the early 1800s, programs were used to direct the behavior of machines such as Jacquard looms. Thousands of different programming languages have been created, mainly in the computer field. Many programming languages require computation to be specified in an imperative form, while other languages use other forms of program specification such as the declarative form. The description of a language is usually split into the two components of syntax and semantics. Some languages are defined by a specification document, while other languages have a dominant implementation that is treated as a reference. Some languages have both, with the language defined by a standard and extensions taken from the dominant implementation being common. A programming language is a notation for writing programs, which are specifications of a computation or algorithm. Some, but not all, authors restrict the term programming language to those languages that can express all possible algorithms. For example, PostScript programs are frequently created by another program to control a computer printer or display. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language. In most practical contexts, a programming language involves a computer; consequently, programming languages are usually defined and studied this way. Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. The theory of computation classifies languages by the computations they are capable of expressing; all Turing complete languages can implement the same set of algorithms.
ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete. Markup languages like XML, HTML, or troff, which define structured data, are not usually considered programming languages. Programming languages may, however, share the syntax with markup languages if a computational semantics is defined; XSLT, for example, is a Turing complete XML dialect. Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing complete subset. The term computer language is sometimes used interchangeably with programming language.

4.
C (programming language)
–
C was originally developed by Dennis Ritchie between 1969 and 1973 at Bell Labs, and used to re-implement the Unix operating system. C has been standardized by the American National Standards Institute since 1989. C is an imperative procedural language; it was therefore useful for applications that had formerly been coded in assembly language. Despite its low-level capabilities, the language was designed to encourage cross-platform programming: a standards-compliant and portably written C program can be compiled for a very wide variety of computer platforms and operating systems with few changes to its source code, and the language has become available on a very wide range of platforms. In C, all executable code is contained within subroutines, which are called functions. Function parameters are passed by value; pass-by-reference is simulated in C by explicitly passing pointer values. C program source text is free-format, using the semicolon as a statement terminator and curly braces for grouping blocks of statements. The C language also exhibits the following characteristics. There is a small, fixed number of keywords, including a full set of flow-of-control primitives: for, if/else, while, and switch. User-defined names are not distinguished from keywords by any kind of sigil. There are a large number of arithmetical and logical operators, such as +, +=, ++, &, ~, etc. More than one assignment may be performed in a single statement, and function return values can be ignored when not needed. Typing is static, but weakly enforced: all data has a type. C has no "define" keyword; instead, a statement beginning with the name of a type is taken as a declaration. There is no "function" keyword; instead, a function is indicated by the parentheses of an argument list. User-defined and compound types are possible. Heterogeneous aggregate data types (struct) allow related data elements to be accessed and assigned as a unit. Array indexing is a secondary notation, defined in terms of pointer arithmetic.
Unlike structs, arrays are not first-class objects: they cannot be assigned or compared using single built-in operators. There is no "array" keyword, in use or definition; instead, square brackets indicate arrays syntactically, for example month[11]. Enumerated types are possible with the enum keyword; they are not tagged, and are freely interconvertible with integers. Strings are not a distinct data type, but are conventionally implemented as null-terminated arrays of characters. Low-level access to memory is possible by converting machine addresses to typed pointers.

5.
Open-source model
–
Open-source software may be developed in a collaborative public manner. According to scientists who studied it, open-source software is a prominent example of open collaboration. A 2008 report by the Standish Group states that adoption of open-source software models has resulted in savings of about $60 billion per year to consumers. In the early days of computing, programmers and developers shared software in order to learn from each other; eventually the open source notion fell by the wayside with the commercialization of software in the years 1970–1980. In 1997, Eric Raymond published The Cathedral and the Bazaar, which helped motivate Netscape to release the source code of its Communicator suite; this source code subsequently became the basis behind SeaMonkey, Mozilla Firefox, Thunderbird and KompoZer. Netscape's act prompted Raymond and others to look into how to bring the Free Software Foundation's free software ideas and perceived benefits to the commercial software industry. The new term they chose was open source, which was soon adopted by Bruce Perens, publisher Tim O'Reilly, Linus Torvalds, and others. The Open Source Initiative was founded in February 1998 to encourage use of the new term. A Microsoft executive publicly stated in 2001 that "open source is an intellectual property destroyer. I can't imagine something that could be worse than this for the software business." IBM, Oracle, Google and State Farm are just a few of the companies with a serious public stake in today's competitive open-source market; there has been a significant shift in the corporate philosophy concerning the development of FOSS. The free software movement was launched in 1983. In 1998, a group of individuals advocated that the term free software should be replaced by open-source software as an expression which is less ambiguous. Software developers may want to publish their software with an open-source license. The Open Source Definition, notably, presents an open-source philosophy, and further defines the terms of usage, modification and redistribution of open-source software.
Software licenses grant rights to users which would otherwise be reserved by copyright law to the copyright holder. Several open-source software licenses have qualified within the boundaries of the Open Source Definition. The open source label came out of a strategy session held on April 7, 1998 in Palo Alto, in reaction to Netscape's January 1998 announcement of a source code release for Navigator. They used the opportunity before the release of Navigator's source code to clarify a potential confusion caused by the ambiguity of the word "free" in English. Many people claimed that the birth of the Internet, since 1969, started the open source movement. The Free Software Foundation, started in 1985, intended the word free to mean freedom to distribute and not freedom from cost. Since a great deal of free software already was free of charge, such software became associated with zero cost. The Open Source Initiative was formed in February 1998 by Eric Raymond and Bruce Perens. They sought to bring a higher profile to the practical benefits of freely available source code, and they wanted to bring major software businesses and other high-tech industries into open source. Perens attempted to register open source as a service mark for the OSI. The Open Source Initiative's definition is recognized by governments internationally as the standard or de facto definition; OSI uses The Open Source Definition to determine whether it considers a software license open source.

6.
Software release life cycle
–
Usage of the alpha/beta test terminology originated at IBM. As long ago as the 1950s, IBM used similar terminology for its hardware development: "A" test was the verification of a new product before public announcement, "B" test was the verification before releasing the product to be manufactured, and "C" test was the final test before general availability of the product. Martin Belsky, a manager on some of IBM's earlier software projects, claimed to have invented the terminology. IBM dropped the alpha/beta terminology during the 1960s, but by then it had received fairly wide notice. The usage of "beta test" to refer to testing done by customers was not done at IBM; rather, IBM used the term "field test". Pre-alpha refers to all activities performed during the software project before formal testing. These activities can include requirements analysis, software design, and software development. In typical open source development, there are several types of pre-alpha versions; milestone versions include specific sets of functions and are released as soon as the functionality is complete. The alpha phase of the release life cycle is the first phase to begin software testing. In this phase, developers generally test the software using white-box techniques; additional validation is then performed using black-box or gray-box techniques, by another testing team. Moving to black-box testing inside the organization is known as alpha release. Alpha software can be unstable and could cause crashes or data loss, and it may not contain all of the features that are planned for the final version. In general, external availability of alpha software is uncommon in proprietary software, while open source software often has publicly available alpha versions. The alpha phase usually ends with a feature freeze, indicating that no more features will be added to the software.
At this time, the software is said to be feature complete. Beta, named after the second letter of the Greek alphabet, is the software development phase following alpha. Software in the beta stage is also known as betaware. The beta phase generally begins when the software is feature complete but likely to contain a number of known or unknown bugs. Software in the beta phase will generally have many more bugs in it than completed software, as well as speed/performance issues. The focus of beta testing is reducing impacts to users, often incorporating usability testing. The process of delivering a beta version to the users is called beta release, and this is typically the first time that the software is available outside of the organization that developed it. Beta version software is often useful for demonstrations and previews within an organization.

7.
ARM Cortex-M
–
The ARM Cortex-M is a group of 32-bit RISC ARM processor cores licensed by ARM Holdings for microcontroller use. The cores consist of the Cortex-M0, Cortex-M0+, Cortex-M1, Cortex-M3, Cortex-M4, Cortex-M7, Cortex-M23, and Cortex-M33. If the Cortex-M4 / M7 / M33 silicon has the FPU option, then the core is known as the Cortex-M4F / Cortex-M7F / Cortex-M33F. ARM Cortex-M cores have been shipped in tens of billions of devices. The ARM Cortex-M family are ARM microprocessor cores designed for use in microcontrollers, ASICs, ASSPs, and SoCs. Though 8-bit microcontrollers were very popular in the past, Cortex-M has slowly been chipping away at the 8-bit market as the prices of low-end Cortex-M chips have moved downward. Cortex-M cores have become popular replacements for 8-bit chips in applications that benefit from 32-bit math operations. ARM Holdings neither manufactures nor sells CPU devices based on its own designs, but rather licenses the processor architecture to interested parties. ARM offers a variety of licensing terms, varying in cost. Integrated device manufacturers receive the ARM Processor IP as synthesizable RTL; in this form, they have the ability to perform architectural-level optimizations and extensions. This allows the manufacturer to achieve custom design goals, such as higher clock speed, very low power consumption, instruction set extensions, optimizations for size, debug support, etc. To determine which components have been included in a particular ARM CPU chip, consult the manufacturer's datasheet. Some of the most important options for the Cortex-M cores are the following. SysTick timer: when present, it also provides an additional configurable-priority SysTick interrupt. Though the SysTick timer is optional, it is rare to find a Cortex-M microcontroller without it. Bit-band: maps a complete word of memory onto a single bit in the bit-band region; for example, writing to an alias word will set or clear the corresponding bit in the bit-band region.
Though the bit-band feature is optional, it is common in Cortex-M3 and Cortex-M4 parts, and some Cortex-M0 and Cortex-M0+ microcontrollers have bit-band as well. Memory Protection Unit (MPU): provides support for protecting regions of memory by enforcing privilege and access rules. It supports up to eight different regions, each of which can be split into a further eight equal-size sub-regions. Tightly-Coupled Memory (TCM): low-latency RAM used to hold critical routines, data, and stacks; it is typically the fastest memory in the microcontroller. Note: most Cortex-M3 and M4 chips have bit-band and MPU, and the bit-band option can be added to the Cortex-M0 / M0+ using the Cortex-M System Design Kit. Note: software should validate the existence of a feature before attempting to use it. Additional silicon options include data endianness (little-endian or big-endian; unlike legacy ARM cores, a Cortex-M is permanently fixed in silicon as one of these choices), the number of interrupts (1 to 32, 1 to 240, or 1 to 480, depending on the core), and the instruction fetch width (16-bit only, or mostly 32-bit).

8.
MIPS architecture
–
MIPS is a reduced instruction set computer (RISC) instruction set architecture developed by MIPS Technologies. The early MIPS architectures were 32-bit, with 64-bit versions added later. Multiple revisions of the MIPS instruction set exist, including MIPS I, MIPS II, MIPS III, MIPS IV, MIPS V, MIPS32, and MIPS64. The current revisions are MIPS32 and MIPS64, which define a control register set as well as the instruction set. Computer architecture courses in universities and technical schools often study the MIPS architecture, and the architecture greatly influenced later RISC architectures such as Alpha. MIPS used to be popular in supercomputers, but all such systems have dropped off the TOP500 list. Until late 2006, MIPS processors were used in many of SGI's computer products. MIPS implementations were also used by Digital Equipment Corporation, NEC, Pyramid Technology, and Siemens Nixdorf; in the mid to late 1990s, it was estimated that one in three RISC microprocessors produced was a MIPS implementation. Windows NT supported MIPS until the release of Windows NT 4.0 SP3 in 1997. MIPS is a modular architecture supporting up to four coprocessors. In MIPS terminology, COP0 is the System Control Coprocessor and COP1 is an optional FPU. For example, in the original PlayStation game console, COP0 is the System Control Coprocessor and COP2 is the Geometry Transformation Engine; in the PlayStation 2 game console, COP0 is a Toshiba R5900 chip and COP1 is an FPU. MIPS is a load-store architecture, meaning it only performs arithmetic and logic operations between CPU registers, requiring load/store instructions to access memory. Processors based upon the MIPS instruction set have been in production since 1988, and over time several enhancements of the instruction set were made. The revisions which have been introduced are MIPS I, MIPS II, MIPS III, and MIPS IV.
Each revision is a superset of its predecessors. When MIPS Technologies was spun out of Silicon Graphics in 1998, it refocused on the embedded market; at that time, the superset property was found to be a problem, and the architecture definition was changed to define a 32-bit MIPS32 and a 64-bit MIPS64 architecture. MIPS I was introduced in 1985 with the R2000, and MIPS II in 1990 with the R6000. MIPS III, introduced in 1992 in the R4000, adds 64-bit registers and integer instructions and a floating point square root instruction. MIPS IV is the fourth version of the architecture; it is a superset of MIPS III and is compatible with all existing versions of MIPS. The first implementation of MIPS IV was the R8000, which was introduced in 1994.

9.
PIC microcontroller
–
PIC is a family of microcontrollers made by Microchip Technology, derived from the PIC1650 originally developed by General Instrument's Microelectronics Division. The name PIC initially referred to Peripheral Interface Controller. The first parts of the family were available in 1976; by 2013 the company had shipped more than twelve billion individual parts, used in a wide variety of embedded systems. Early models of PIC had read-only memory (ROM) or field-programmable EPROM for program storage; all current models use flash memory for program storage, and newer models allow the PIC to reprogram itself. Program memory and data memory are separated. Data memory is 8-bit, 16-bit, and, in the latest models, 32-bit wide. Program instructions vary in bit-count by family of PIC, and may be 12, 14, 16, or 24 bits long. The instruction set also varies by model, with more powerful chips adding instructions for digital signal processing functions. Low-power and high-speed variations exist for many types. The manufacturer supplies computer software for development known as MPLAB, assemblers and C/C++ compilers, and programmer/debugger hardware under the MPLAB and PICkit series. Third-party and some open-source tools are also available. Some parts have in-circuit programming capability; low-cost development programmers are available as well as high-production programmers. The original PIC was intended to be used with General Instrument's new CP1600 16-bit central processing unit. The PIC used simple microcode stored in ROM to perform its tasks. In 1985, General Instrument sold its microelectronics division, and the new owners cancelled almost everything, which by this time was mostly out-of-date; the PIC, however, was upgraded with an internal EPROM to produce a channel controller. In 2001, Microchip introduced Flash programmable devices, with production commencing in 2002. Today, a variety of PICs are available with various on-board peripherals and program memory from 256 words to 64K words.
PIC and PICmicro are registered trademarks of Microchip Technology; the acronym PIC was quickly replaced with "Programmable Intelligent Computer". The Microchip 16C84, introduced in 1993, was the first Microchip CPU with on-chip EEPROM memory. By 2013, Microchip was shipping over one billion PIC microcontrollers every year. PICmicro chips are designed with a Harvard architecture, and are offered in various device families: the baseline and mid-range families use 8-bit wide data memory, and the high-end families use 16-bit data memory. The latest series, PIC32MZ, is a 32-bit MIPS-based microcontroller. Instruction words come in sizes of 12-bit, 14-bit and 24-bit. The binary representations of the machine instructions vary by family and are shown in PIC instruction listings. Within these families, devices may be designated PICnnCxxx or PICnnFxxx. C devices are generally classified as either End-Of-Life or "Not suitable for new development".

10.
RISC-V
–
RISC-V is an open instruction set architecture (ISA) based on established reduced instruction set computing principles. In contrast to most ISAs, the RISC-V ISA can be freely used for any purpose, permitting anyone to design, manufacture and sell RISC-V chips. Such uses demand that the designers consider both performance and power efficiency. The instruction set also has a substantial body of supporting software, which fixes a usual weakness of new instruction sets. The RISC-V ISA has been designed with small, fast, and low-power real-world implementations in mind. As of January 2017, version 2.1 of the userspace ISA is fixed, and the privileged ISA is available as draft version 1.9.1. The RISC-V authors aim to provide several CPU designs freely available under a BSD license. Such licenses allow derivative works, such as RISC-V chip designs, to be either open and free, like RISC-V itself, or closed and proprietary. By contrast, commercial vendors such as ARM Holdings and MIPS Technologies charge substantial license fees for the use of their patents. They also require non-disclosure agreements before releasing documents that describe their designs' advantages, and many design advances are completely proprietary, never described even to customers. The secrecy interferes with legitimate public educational use, security auditing, and the development of public, low-cost free and open-source software compilers. Developing a CPU requires design expertise in several specialties: electronic logic, compilers, and operating systems; it is rare to find this outside of an engineering team. The result is that modern, high-quality general-purpose computer instruction sets have not recently been widely available anywhere, or even explained. Because of this, many RISC-V contributors see it as a unified community effort, and this need for a large base of contributors is part of the reason why RISC-V was engineered to fit so many uses.
The RISC-V authors also have research and user experience validating their designs in silicon and simulation. The RISC-V ISA is a development from a series of academic computer-design projects, and it was originated in part to aid such projects. The term RISC dates from about 1980; before this, there was some knowledge that simpler computers could be effective, and simple, effective computers have always been of academic interest. Academics created the RISC instruction set DLX for the first edition of Computer Architecture: A Quantitative Approach; David Patterson was an author, and later assisted RISC-V. However, DLX was for educational use; academics and hobbyists implemented it using field-programmable gate arrays, but it was not a commercial success. ARM CPUs, version 2 and earlier, had a public-domain instruction set, and it is still supported by GCC.

11.
Software license
–
A software license is a legal instrument governing the use or redistribution of software. Under United States copyright law, all software is copyright protected, in source code as well as object code form; the only exception is software in the public domain. Most distributed software can be categorized according to its license type. Two common categories for software under copyright law, and therefore with licenses which grant the licensee specific rights, are proprietary software and free and open-source software. Unlicensed software outside copyright protection is either public domain software or software which is non-distributed, non-licensed and handled as an internal business trade secret. Contrary to popular belief, distributed unlicensed software is copyright protected; examples of this are unauthorized software leaks or software projects which are placed on public software repositories like GitHub without a specified license. As voluntarily handing software into the public domain is problematic in some international law domains, there are also licenses granting PD-like rights. The owner of a copy of software is legally entitled to use that copy of software; hence, if the end-user of software is the owner of the respective copy, many proprietary licenses only enumerate the rights that the user already has under 17 U.S.C. § 117, and yet proclaim to take those rights away from the user. Proprietary software licenses often proclaim to give software publishers more control over the way their software is used by keeping ownership of each copy of software with the software publisher. The form of the relationship determines if it is a lease or a purchase, for example in UMG v. Augusto or Vernor v. Autodesk. The ownership of digital goods, like software applications and video games, is challenged by licensed, rather than owned, distribution models. The Swiss-based company UsedSoft innovated the resale of business software. This feature of proprietary software licenses means that certain rights regarding the software are reserved by the software publisher.
Therefore, it is typical of EULAs to include terms which define the uses of the software, the most significant effect of this form of licensing being that, if ownership of the software remains with the software publisher, then the end-user must accept the software license. In other words, without acceptance of the license, the end-user may not use the software at all. One example of such a proprietary software license is the license for Microsoft Windows. The most common licensing models are per single user or per user in the appropriate volume discount level. Licensing per concurrent/floating user also occurs, where all users in a network have access to the program, but only a specific number at the same time. Another license model is licensing per dongle, which allows the owner of the dongle to use the program on any computer. Licensing per server, CPU or points, regardless of the number of users, is common practice, as are site or company licenses.

12.
Apache License
–
The Apache License, Version 2.0 is a permissive free software license written by the Apache Software Foundation (ASF). The Apache License requires preservation of the copyright notice and disclaimer; this makes ALv2 a FRAND-RF license. The ASF and its projects release the software they produce under the Apache License, and many non-ASF projects also use ALv2. A free software license grants the recipient extensive rights to modify and redistribute the software; software using such a license is free software as conferred by the copyright holder. Free software licenses are applied to software in source code as well as object code form. The ASF adopted the Apache License 2.0 in January 2004. The Apache License is permissive in that it does not require a derivative work of the software, or modifications to the original, to be distributed using the same license. Modifications may carry appropriate copyright notices and may provide different license terms for the modifications. In October 2012, 8,708 projects located at SourceForge.net were available under the terms of the Apache License. In a blog post from May 2008, Google mentioned that over 25% of the nearly 100,000 projects then hosted on Google Code were using the Apache License, including the Android operating system. As of 2015, according to Black Duck Software and GitHub, the Apache License remained among the most popular free software licenses.

13.
Internet of things
–
In 2013 the Global Standards Initiative on Internet of Things defined the IoT as the infrastructure of the information society. Each thing is uniquely identifiable through its embedded computing system and is able to interoperate within the existing Internet infrastructure; experts estimate that the IoT will consist of almost 50 billion objects by 2020. Legal scholars suggest looking at "things" as a mixture of hardware, software, data and services. These devices collect data with the help of various existing technologies. The IoT is one of the platforms of today's smart city. The term "the Internet of Things" was coined by Kevin Ashton of Procter & Gamble, later of MIT's Auto-ID Center, in 1999. The concept draws on the fields of embedded systems, wireless sensor networks, control systems and automation. Mark Weiser's seminal 1991 paper on ubiquitous computing, "The Computer of the 21st Century", as well as venues such as UbiComp, anticipated this vision. In 1994 Reza Raji described the concept in IEEE Spectrum as moving small packets of data to a large set of nodes. Between 1993 and 1996 several companies proposed solutions like Microsoft's at Work or Novell's NEST; however, only in 1999 did the field start gathering momentum. Bill Joy envisioned device-to-device communication as part of his Six Webs framework, and the concept of the Internet of Things became popular in 1999 through the Auto-ID Center at MIT and related market-analysis publications. Radio-frequency identification (RFID) was seen by Kevin Ashton as a prerequisite for the Internet of Things at that point; Ashton prefers the phrase "Internet for Things". If all objects and people in daily life were equipped with identifiers, computers could manage and inventory them. Besides using RFID, the tagging of things may be achieved through such technologies as near field communication, barcodes and QR codes; for instance, instant and ceaseless inventory control would become ubiquitous. A person's ability to interact with objects could be altered based on immediate or present needs. According to Gartner, Inc.,
there will be nearly 20.8 billion devices on the Internet of Things by 2020, while ABI Research estimates that more than 30 billion devices will be wirelessly connected to the Internet of Things by 2020. As such, it is clear that the IoT will consist of a very large number of devices connected to the Internet. The ability to network embedded devices with limited CPU, memory and power resources means that the IoT finds applications in nearly every field; on the other hand, IoT systems could also be responsible for performing actions, not just sensing things.

14.
Free and open-source software
–
Free and open-source software (FOSS) is computer software that can be classified as both free software and open-source software. This is in contrast to proprietary software, where the software is under restrictive copyright. The benefits of using FOSS can include decreasing software costs, increasing security and stability, protecting privacy, and giving users more control over their own hardware. Free, open-source operating systems such as Linux and descendants of BSD are widely utilized today, powering millions of servers, desktops and smartphones. Free software licenses and open-source licenses are used by many software packages. In the 1950s, 1960s and 1970s, it was common for computer users to have the source code for all programs they used. Software, including source code, was commonly shared by individuals who used computers. Most companies had a business model based on hardware sales, and provided or bundled software with hardware. Organizations of users and suppliers were formed to facilitate the exchange of software; see, for example, SHARE. By the late 1960s, the prevailing business model around software was changing: while some software might always be free, there would henceforth be a growing amount of software that was for sale only, a shift contemporaneous with United States v. IBM, filed 17 January 1969. Software development for the GNU operating system began in January 1984, and an article outlining the project and its goals was published in March 1985, titled the GNU Manifesto. The manifesto included significant explanation of the GNU philosophy and the Free Software Definition. The Linux kernel, started by Linus Torvalds, was released as freely modifiable source code in 1991. Initially, Linux was not released under a free or open-source software license; however, with version 0.12 in February 1992, he relicensed the project under the GNU General Public License. Much like Unix, Torvalds' kernel attracted the attention of volunteer programmers. FreeBSD and NetBSD were released as free software when the USL v.
BSDi lawsuit was settled out of court in 1993. OpenBSD forked from NetBSD in 1995; also in 1995, the Apache HTTP Server, commonly referred to as Apache, was released under the Apache License 1.0. In 1997, Eric Raymond published The Cathedral and the Bazaar, and in 1998 Netscape released the source code of its Communicator suite; this code is today better known as Mozilla Firefox and Thunderbird. Netscape's act prompted Raymond and others to look into how to bring the FSF's free software ideas to the commercial software industry. The new name they chose was "open source", and quickly Bruce Perens, publisher Tim O'Reilly, Linus Torvalds, and others signed on to the rebranding. The Open Source Initiative was founded in February 1998 to encourage use of the new term. A Microsoft executive publicly stated in 2001 that "open source is an intellectual property destroyer. I can't imagine something that could be worse than this for the software business"; this view perfectly summarizes the initial response to FOSS by some software corporations. Nevertheless, IBM, Oracle, Google and State Farm are just a few of the companies with a serious public stake in today's competitive open-source market, and there has been a significant shift in the corporate philosophy concerning the development of free and open-source software.

15.
Source code
–
In computing, source code is any collection of computer instructions, possibly with comments, written using a human-readable programming language, usually as ordinary text. The source code of a program is designed to facilitate the work of computer programmers. The source code is often transformed by an assembler or compiler into binary machine code understood by the computer; the machine code might then be stored for execution at a later time. Alternatively, source code may be interpreted and thus immediately executed. Most application software is distributed in a form that includes only executable files; if the source code were included, it would be useful to a user, programmer or system administrator. The Linux Information Project defines source code as the version of software as it is originally written by a human in plain text. The notion of source code may also be taken more broadly, to include machine code and notations in graphical languages; it is therefore so construed as to include machine code, very high level languages and executable graphical representations of systems. Often there are several steps of program translation or minification between the source code typed by a human and an executable program. The earliest programs for stored-program computers were entered in binary through the front panel switches of the computer; this first-generation programming language had no distinction between source code and machine code. When IBM first offered software to work with its machine, the source code was provided at no additional charge. At that time, the cost of developing and supporting software was included in the price of the hardware; for decades, IBM distributed source code with its software product licenses, until 1983. Most early computer magazines published source code as type-in programs. Source code can also be stored in a database or elsewhere. The source code for a piece of software may be contained in a single file or many files.
Though the practice is uncommon, a program's source code can be written in different programming languages; for example, a program written primarily in the C programming language might have portions written in assembly language. In some languages, such as Java, separately compiled components can be combined at run time. The code base of a programming project is the larger collection of all the source code of all the computer programs which make up the project. It has become common practice to maintain code bases in version control systems. Moderately complex software customarily requires the compilation or assembly of several, sometimes dozens or even hundreds, of source code files; in these cases, instructions for compilation, such as a Makefile, are included with the source code.

16.
Open-source software
–
Open-source software may be developed in a collaborative public manner. According to scientists who have studied it, open-source software is a prominent example of open collaboration. A 2008 report by the Standish Group states that adoption of open-source software models has resulted in savings of about $60 billion per year to consumers. In the early days of computing, programmers and developers shared software in order to learn from each other; eventually the open-source notion moved to the wayside with the commercialization of software in the years 1970-1980. In 1997, Eric Raymond published The Cathedral and the Bazaar, and in 1998 Netscape released the source code of its Navigator browser; this source code subsequently became the basis behind SeaMonkey, Mozilla Firefox, Thunderbird and KompoZer. Netscape's act prompted Raymond and others to look into how to bring the Free Software Foundation's free software ideas to the commercial software industry. The new term they chose was "open source", which was soon adopted by Bruce Perens, publisher Tim O'Reilly, Linus Torvalds, and others. The Open Source Initiative was founded in February 1998 to encourage use of the new term. A Microsoft executive publicly stated in 2001 that "open source is an intellectual property destroyer. I can't imagine something that could be worse than this for the software business." Nevertheless, IBM, Oracle, Google and State Farm are just a few of the companies with a serious public stake in today's competitive open-source market, and there has been a significant shift in the corporate philosophy concerning the development of FOSS. The free software movement was launched in 1983; in 1998, a group of individuals advocated that the term free software should be replaced by open-source software as an expression which is less ambiguous. Software developers may want to publish their software with an open-source license. The Open Source Definition notably presents an open-source philosophy, and further defines the terms of usage, modification and redistribution of open-source software.
Software licenses grant rights to users which would otherwise be reserved by law to the copyright holder. Several open-source software licenses have qualified within the boundaries of the Open Source Definition. The "open source" label came out of a strategy session held on April 7, 1998 in Palo Alto in reaction to Netscape's January 1998 announcement of a source code release for Navigator. The group used the opportunity before the release of Navigator's source code to clarify a potential confusion caused by the ambiguity of the word "free" in English. Many people have claimed that the birth of the Internet, in 1969, started the open-source movement. The Free Software Foundation, started in 1985, intended the word "free" to mean freedom to distribute and not freedom from cost. Since a great deal of free software already was free of charge, such software became associated with zero cost. The Open Source Initiative was formed in February 1998 by Eric Raymond and Bruce Perens. They sought to bring a higher profile to the practical benefits of freely available source code, and they wanted to bring major software businesses and other high-tech industries into open source. Perens attempted to register "open source" as a service mark for the OSI. The Open Source Initiative's definition is recognized by governments internationally as the standard or de facto definition; the OSI uses the Open Source Definition to determine whether it considers a software license open source.

17.
Library (computing)
–
In computer science, a library is a collection of non-volatile resources used by computer programs, often for software development. These may include configuration data, documentation, help data, message templates and pre-written code. In IBM's OS/360 and its successors they are referred to as partitioned data sets. A library is also a collection of implementations of behavior, written in terms of a language, that has a well-defined interface by which the behavior is invoked. For instance, people who want to write a higher-level program can use a library to make system calls instead of implementing those system calls over and over again. In addition, the behavior is provided for reuse by multiple independent programs. A program invokes the library-provided behavior via a mechanism of the language; for example, in a simple imperative language such as C, the behavior in a library is invoked by using C's normal function-call mechanism. What distinguishes the call as being to a library, versus being to another function in the same program, is the way that the code is organized in the system. This distinction can gain a hierarchical notion when a program grows large; in that case, there may be internal libraries that are reused by independent sub-portions of the large program. The value of a library lies in the reuse of its behavior. When a program invokes a library, it gains the behavior implemented inside that library without having to implement that behavior itself. Libraries encourage the sharing of code in a modular fashion and ease the distribution of the code. The behavior implemented by a library can be connected to the invoking program at different program lifecycle phases. If the code of the library is accessed during the build of the invoking program, the library is called a static library. An alternative is to build the executable of the invoking program and distribute that, independently of the library implementation.
In that case, the library behavior is connected after the executable has been invoked, either as part of the process of starting the execution or in the middle of execution; here the library is called a dynamic library. A dynamic library can be loaded and linked when preparing a program for execution; alternatively, in the middle of execution, an application may explicitly request that a module be loaded. Most compiled languages have a standard library, although programmers can also create their own custom libraries. Most modern software systems provide libraries that implement the majority of the system services; such libraries have commoditized the services which a modern application requires. As such, most code used by modern applications is provided in these system libraries. The earliest programming concepts analogous to libraries were intended to separate data definitions from the program implementation. JOVIAL brought the COMPOOL concept to popular attention in 1959, although it adopted the idea from the large-system SAGE software. COBOL also included primitive capabilities for a library system in 1959, but Jean Sammet described them as inadequate library facilities in retrospect. Another major contributor to the library concept came in the form of the subprogram innovation of FORTRAN.

18.
Microcontroller
–
A microcontroller is a small computer on a single integrated circuit; in modern terminology, it is similar to a system on a chip, or SoC. A microcontroller contains one or more CPUs along with memory and programmable input/output peripherals. Program memory in the form of ferroelectric RAM, NOR flash or OTP ROM is also often included on chip. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general-purpose applications consisting of various discrete chips. Mixed-signal microcontrollers are common, integrating analog components needed to control non-digital electronic systems. Some microcontrollers may use four-bit words and operate at frequencies as low as 4 kHz for low power consumption. Other microcontrollers may serve performance-critical roles, where they may need to act more like a digital signal processor, with higher clock speeds. The first microprocessor was the 4-bit Intel 4004 released in 1971, followed by the Intel 8008; however, both processors required external chips to implement a working system, raising total system cost and making it impossible to economically computerize appliances. One book credits TI engineers Gary Boone and Michael Cochran with the creation of the first microcontroller in 1971. The result of their work was the TMS1000, which became commercially available in 1974. It combined read-only memory, read/write memory, processor and clock on one chip and was targeted at embedded systems. Intel later produced a microcontroller that likewise combined RAM and ROM on the same chip; this chip would find its way into over one billion PC keyboards. At that time Intel's President, Luke J. Valenter, stated that the microcontroller was one of the most successful products in the company's history. Most microcontrollers at this time had concurrent variants. One had an erasable EPROM program memory, with a transparent quartz window in the lid of the package to allow it to be erased by exposure to ultraviolet light, often used for prototyping. The other was a PROM variant using the same type of memory cell as the EPROM, but, lacking the window, it could not be erased.
The erasable versions required ceramic packages with quartz windows, making them more expensive than the OTP versions. In 1993, Atmel introduced the first microcontroller using Flash memory, and other companies rapidly followed suit with both memory types. Cost has plummeted over time, with the cheapest 8-bit microcontrollers being available for under 0.25 USD in quantity in 2009; nowadays microcontrollers are cheap and readily available for hobbyists, with large online communities around certain processors. In the future, MRAM could potentially be used in microcontrollers, as it has infinite endurance. In 2002, about 55% of all CPUs sold in the world were 8-bit microcontrollers and microprocessors.

19.
Embedded system
–
An embedded system is a computer system with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a complete device, often including hardware and mechanical parts. Embedded systems control many devices in common use today: ninety-eight percent of all microprocessors are manufactured as components of embedded systems. Properties typical of embedded computers, when compared with general-purpose counterparts, are low power consumption, small size, rugged operating ranges, and low per-unit cost. This comes at the price of limited processing resources, which make them more difficult to program. For example, intelligent techniques can be designed to manage the power consumption of embedded systems. Modern embedded systems are often based on microcontrollers, but ordinary microprocessors are also common. In either case, the processor used may range from general purpose to one specialised in a certain class of computations; a common standard class of dedicated processors is the digital signal processor. Since the embedded system is dedicated to specific tasks, design engineers can optimize it to reduce the size and cost of the product and increase its reliability. Some embedded systems are mass-produced, benefiting from economies of scale. Complexity varies from low, with a single microcontroller chip, to very high, with multiple units, peripherals and networks mounted inside a large chassis or enclosure. One of the very first recognizably modern embedded systems was the Apollo Guidance Computer; an early mass-produced embedded system was the Autonetics D-17 guidance computer for the Minuteman missile, released in 1961. When the Minuteman II went into production in 1966, the D-17 was replaced with a new computer that represented the first high-volume use of integrated circuits. Since these early applications in the 1960s, embedded systems have come down in price and there has been a dramatic rise in processing power.
An early microprocessor, for example the Intel 4004, was designed for calculators and other small systems but still required external memory and support chips. By the early 1980s, memory, input and output system components had been integrated onto the same chip as the processor, forming a microcontroller. Microcontrollers find applications where a general-purpose computer would be too costly. A comparatively low-cost microcontroller may be programmed to fulfill the same role as a large number of separate components.

20.
Bluetooth Low Energy
–
Compared to Classic Bluetooth, Bluetooth Smart (Bluetooth Low Energy) is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range. Bluetooth Smart was originally introduced under the name Wibree by Nokia in 2006, and was merged into the main Bluetooth standard in 2010 with the adoption of the Bluetooth Core Specification Version 4.0. Mobile operating systems including iOS, Android, Windows Phone and BlackBerry, as well as macOS, Linux, Windows 8 and Windows 10, natively support Bluetooth Smart. The Bluetooth SIG predicts that by 2018 more than 90 percent of Bluetooth-enabled smartphones will support Bluetooth Smart. The Bluetooth SIG officially unveiled Bluetooth 5 on June 16, 2016 during an event in London. One change on the marketing side is that the point number was dropped, making it simply "Bluetooth 5". Bluetooth Smart is not backward-compatible with the previous (Classic) Bluetooth protocol; the Bluetooth 4.0 specification permits devices to implement either or both of the LE and Classic systems. Bluetooth Smart uses the same 2.4 GHz radio frequencies as Classic Bluetooth; LE does, however, use a simpler modulation system. In 2011, the Bluetooth Special Interest Group announced the Bluetooth Smart logo so as to clarify compatibility between the new low energy devices and other Bluetooth devices. "Bluetooth Smart Ready" indicates a dual-mode device compatible with both Classic and low energy peripherals, while "Bluetooth Smart" indicates a low-energy-only device which requires either a Smart Ready or another Smart device in order to function. In its new branding information, the Bluetooth SIG has made one fundamental change: it is phasing out the Bluetooth Smart and Bluetooth Smart Ready logos and word marks and has reverted to using the Bluetooth logo, which now uses a new blue color. The Bluetooth SIG identifies a number of markets for low energy technology, particularly in the home, health and sport sectors.
Nokia began developing a wireless technology adapted from the Bluetooth standard which would provide lower power usage; the results were published in 2004 under the name Bluetooth Low End Extension. Integration of Bluetooth Smart with version 4.0 of the Core Specification was completed in early 2010, and the first smartphone to implement the 4.0 specification was the iPhone 4S, released in October 2011. A number of other manufacturers released Bluetooth Smart Ready devices in 2012. Borrowing from the original Bluetooth specification, the Bluetooth SIG defines several profiles — specifications for how a device works in a particular application — for low energy devices; manufacturers are expected to implement the appropriate specifications for their device in order to ensure compatibility. A device may contain implementations of multiple profiles. Bluetooth 4.0 provides low power consumption with higher bit rates. In 2014, Cambridge Silicon Radio launched CSR Mesh. The CSR Mesh protocol uses Bluetooth Smart to communicate with other Bluetooth Smart devices in the network, and each device can pass the information forward to other Bluetooth Smart devices, creating a "mesh" effect; for example, switching off an entire building of lights from a single smartphone.

21.
Booting
–
In computing, booting is the initialization of a computerized system. The system can be a computer or a computer appliance. The booting process can be "hard", e.g., after electrical power to the CPU is switched from off to on, or "soft", when those power-on self-tests can be avoided. On some systems a soft boot may optionally clear RAM to zero. Both hard and soft booting can be initiated by hardware such as a button press or by a software command. Booting is complete when the normal, operative, runtime environment is attained. Within the hard reboot process, booting runs after completion of the self-tests and then loads and runs the software. A boot loader is loaded into memory from persistent memory, such as a hard disk drive or, in some older computers, from a medium such as punched cards or punched tape. The boot loader then loads and executes the processes that finalize the boot. The process of hibernating or sleeping does not involve booting. Minimally, some embedded systems do not require a noticeable boot sequence to begin functioning. All computing systems are state machines, and a reboot may be the only method to return to a designated zero-state from an unintended, locked state. In addition to loading an operating system or stand-alone utility, the boot process can also load a storage dump program for diagnosing problems in an operating system. "Boot" is short for bootstrap or bootstrap load and derives from the phrase "to pull oneself up by one's bootstraps". Early computers used a variety of ad-hoc methods to get a small program into memory to solve this problem. The invention of read-only memory of various types solved this paradox by allowing computers to be shipped with a start-up program that could not be erased, and growth in the capacity of ROM has allowed ever more elaborate start-up procedures to be implemented.
There are many different methods available to load a short initial program into a computer; these methods range from simple, physical input to removable media that can hold more complex programs. Early computers in the 1940s and 1950s were one-of-a-kind engineering efforts that could take weeks to program, and program loading was one of the problems that had to be solved. An early computer, ENIAC, had no program stored in memory; bootstrapping did not apply to ENIAC, whose hardware configuration was ready for solving problems as soon as power was applied. In some early systems the program was stored as a bit image on a continuously running magnetic drum; core memory was probably cleared manually via the maintenance console, and startup from when power was fully up was very fast, only a few seconds. In its general design, the DIP compared roughly with a DEC PDP-8; thus, it was not the kind of single-button-pressure bootstrap that came later, nor a read-only memory in strict terms, since the magnetic drum involved could be written to. The first programmable computers for commercial sale, such as the UNIVAC I, typically included instructions that performed a complete input or output operation. The left 18-bit half-word was then executed as an instruction, which usually read additional words into memory; the loaded boot program was then executed, which, in turn, loaded a larger program from that medium into memory without further help from the human operator.

22.
ChibiOS/RT
–
ChibiOS/RT is a compact and fast real-time operating system supporting multiple architectures, released under the GPL3 license and developed by Giovanni Di Sirio. ChibiOS/RT is designed for embedded applications on 8-, 16- and 32-bit microcontrollers; size and execution efficiency are the main project goals. As a reference, the kernel size can range from a minimum of 1.2 KiB up to a maximum of 5.5 KiB with all the subsystems activated on an STM32 Cortex-M3 processor. The kernel is capable of over 220,000 created/terminated threads per second and is able to perform a context switch in 1.2 microseconds on an STM32 at 72 MHz; similar metrics for all the supported platforms are included in the source distribution as test reports. Features include a Hardware Abstraction Layer with support for ADC, CAN, GPT, EXT, I²C, ICU, MAC, MMC/SD, PAL, PWM, RTC, SDC, Serial and SPI peripherals; support for the LwIP and uIP TCP/IP stacks; and support for the FatFs file system library. All system objects, such as threads, semaphores and timers, can be created and deleted at runtime, with no upper limit on their number except for the available memory. In order to increase system reliability, the kernel architecture is entirely static: a memory allocator is not required, and there are no data structures with upper size limits like tables or arrays. The system APIs are designed to have no error conditions such as error codes or exceptions. ChibiOS/RT has also been ported to the Raspberry Pi, for which the following device drivers have been implemented: Port, Serial, GPT, I2C, SPI and PWM. It is also possible to run the kernel in a Win32 process in a software I/O emulation mode; an example is included for the MinGW compiler. ChibiOS/RT is fully supported by the GUI toolkit µGFX, formerly known as ChibiOS/GFX.

23.
Thread (computing)
–
In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system. The implementation of threads and processes differs between operating systems, but in most cases a thread is a component of a process. Multiple threads can exist within one process, executing concurrently and sharing resources such as memory, while different processes do not share these resources; in particular, the threads of a process share its executable code. Systems with a single processor generally implement multithreading by time slicing: the central processing unit switches between different software threads. This context switching generally happens often enough and rapidly enough that users perceive the threads or tasks as running in parallel. Threads made an early appearance in OS/360 Multiprogramming with a Variable Number of Tasks in 1967, in which context they were called "tasks". The term "thread" has been attributed to Victor A. Vyssotsky. Some threading implementations are called kernel threads, whereas light-weight processes are a specific type of kernel thread that share the same state and information. Furthermore, programs can have user-space threads when threading with timers, signals, or other methods to interrupt their own execution. In computer programming, single-threading is the processing of one command at a time; the opposite of single-threading is multithreading. While it has been suggested that the term single-threading is misleading, the term has been widely accepted within the functional programming community. Multithreading is mainly found in multitasking operating systems. Multithreading is a widespread programming and execution model that allows multiple threads to exist within the context of one process. These threads share the process's resources, but are able to execute independently.
The threaded programming model provides developers with a useful abstraction of concurrent execution. Multithreading can also be applied to one process to enable parallel execution on a multiprocessing system. Multithreaded applications have the following advantages. Responsiveness: multithreading can allow an application to remain responsive to input; in a single-threaded program, if the main execution thread blocks on a long-running task, the entire application can appear to freeze. On the other hand, in most cases multithreading is not the only way to keep a program responsive, with non-blocking I/O and/or Unix signals being available for achieving similar results. Lower resource consumption: using threads, an application can serve multiple clients concurrently using fewer resources than it would need when using multiple copies of itself. For example, the Apache HTTP server uses thread pools: a pool of threads for listening to incoming requests. GPU computing environments like CUDA and OpenCL use the multithreading model, where dozens to hundreds of threads run in parallel across the data on a large number of cores. Multithreading also has drawbacks. Synchronization: since threads share the same address space, threads will often need to rendezvous in time in order to process shared data in the correct order. Threads may also require mutually exclusive operations to prevent common data from being simultaneously modified, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.
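As a minimal sketch of the model described above (Python's threading module stands in for OS threads; the worker names and counts are illustrative), two threads of one process share the same address space and must synchronize access to shared data:

```python
import threading

# Two threads of one process share the same address space:
# both append to the same list object.
results = []
lock = threading.Lock()

def worker(name, count):
    for i in range(count):
        with lock:                      # serialize access to shared state
            results.append((name, i))

threads = [threading.Thread(target=worker, args=(n, 3)) for n in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()                            # wait for both threads to finish

print(len(results))  # 6 items; the interleaving depends on the scheduler
```

The interleaving of the two workers is decided by the scheduler; only the lock keeps the shared list consistent.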

24.
Semaphore (programming)
–
A trivial semaphore is a plain variable that is changed depending on programmer-defined conditions. The variable is used as a condition to control access to some system resource. Semaphores are a useful tool in the prevention of race conditions; however, their use by no means guarantees that a program is free from such problems. The semaphore concept was invented by the Dutch computer scientist Edsger Dijkstra in 1962 or 1963, and it has also been used as the control mechanism for I/O controllers, for example in the Electrologica X8 computer. Suppose a library has 10 identical study rooms, each to be used by one student at a time. To prevent disputes, students must request a room from the front desk if they wish to make use of a study room. If no rooms are free, students wait at the desk until someone relinquishes a room. When a student has finished using a room, the student must return to the desk and indicate that one room has become free. When a student requests a room, the clerk decreases this number; when a student releases a room, the clerk increases it. Once access to a room is granted, the room can be used for as long as desired. In this scenario the front desk count-holder represents a counting semaphore, the rooms are the resources, and the students represent processes. The value of the semaphore in this scenario is initially 10. When a student requests a room, they are granted access, and the value of the semaphore is changed to 9; after the next student comes, it drops to 8, then 7, and so on. If someone requests a room and the resulting value of the semaphore would be negative, they are forced to wait until a room is freed. When used to control access to a pool of resources, a semaphore tracks only how many resources are free; some other mechanism may be required to select a particular free resource. The paradigm is especially powerful because the count may serve as a useful trigger for a number of different actions. The librarian above may turn the lights off in the study hall when there are no students remaining.
The success of the protocol requires that applications follow it correctly; fairness and safety are likely to be compromised if even a single process acts incorrectly. Counting semaphores are equipped with two operations, historically denoted as P and V: operation V increments the semaphore S, and operation P decrements it. The value of the semaphore S is the number of units of the resource that are currently available. The P operation wastes time or sleeps until a resource protected by the semaphore becomes available, at which time the resource is immediately claimed. The V operation is the inverse: it makes a resource available again after the process has finished using it. One important property of semaphore S is that its value cannot be changed except by using the V and P operations.
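The library analogy and the P/V protocol above can be sketched in Python, with threading.Semaphore playing the role of the front desk (the student count and room count are illustrative):

```python
import threading

ROOMS = 3                               # 3 identical study rooms
rooms = threading.Semaphore(ROOMS)      # counting semaphore, initial value 3
served = []
served_lock = threading.Lock()

def student(sid):
    rooms.acquire()                     # P: wait for a free room, then take it
    try:
        with served_lock:
            served.append(sid)          # "use the room"
    finally:
        rooms.release()                 # V: hand the room back to the desk

students = [threading.Thread(target=student, args=(i,)) for i in range(10)]
for t in students:
    t.start()
for t in students:
    t.join()
print(sorted(served))                   # every student eventually got a room
```

At most three students "hold a room" at any instant; the semaphore's count is the number of rooms still free.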

25.
Mutex
–
A simple example of why mutual exclusion is important in practice can be visualized using a singly linked list of four items, where the second and third are to be removed. The removal of a node that sits between two other nodes is performed by changing the next pointer of the previous node to point to the following node. If two such removals are performed concurrently on adjacent nodes, the two pointer updates can interfere and leave the list in an inconsistent state; this problem can be avoided by using the requirement of mutual exclusion to ensure that simultaneous updates to the same part of the list cannot occur. The mutual-exclusion solution to this makes the shared resource available only while the process is in a specific code segment called the critical section. It controls access to the resource by controlling each process's execution of that part of its program where the resource would be used. A successful solution to this problem must have at least these two properties: it must implement mutual exclusion (only one process can be in the critical section at a time), and it must be free of deadlock. Deadlock freedom can be expanded to implement one or both of these properties: lockout-freedom guarantees that any process wishing to enter the critical section will be able to do so eventually. This is distinct from deadlock avoidance, which requires that some waiting process be able to get access to the critical section, but does not require that every process gets a turn. If two processes continually trade a resource between them, a third process could be locked out and experience resource starvation, even though the system is not in deadlock. If a system is free of lockouts, it ensures that every process can get a turn at some point in the future. A k-bounded waiting property gives a more precise commitment than lockout-freedom: lockout-freedom ensures every process can access the critical section eventually, but gives no guarantee about how long the wait will be; in practice, a process could be overtaken an arbitrary or unbounded number of times by other higher-priority processes before it gets its turn.
Under a k-bounded waiting property, each process has a finite maximum wait time; this works by setting a limit on the number of times other processes can cut in line, so that no process can enter the critical section more than k times while another is waiting. Each process's program can be partitioned into four sections, resulting in four states; program execution cycles through these four states in order. Non-critical section: operation is outside the critical section; the process is not using or requesting the shared resource. Trying: the process attempts to enter the critical section. Critical section: the process is allowed to access the shared resource in this section. Exit: the process leaves the critical section and makes the shared resource available to other processes. If a process wishes to enter the critical section, it must first execute the trying section. After the process has executed its critical section and is finished with the shared resources, the process then returns to its non-critical section. There are both software and hardware solutions for enforcing mutual exclusion; some different solutions are discussed below.
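The trying/critical/exit cycle described above can be sketched with Python's threading.Lock serving as the mutex (the counter, thread count, and iteration count are illustrative):

```python
import threading

counter = 0
mutex = threading.Lock()

def task(n):
    global counter
    for _ in range(n):
        # trying section: block here until the mutex is free
        with mutex:
            # critical section: at most one thread executes this at a time
            counter += 1
        # exit section: the lock is released when the with-block ends

workers = [threading.Thread(target=task, args=(1000,)) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(counter)  # always 4000; without the lock, updates could be lost
```

Each iteration passes through all four states: non-critical (loop bookkeeping), trying (waiting on the lock), critical (the increment), and exit (releasing the lock).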

26.
Queue (abstract data type)
–
This makes the queue a first-in-first-out (FIFO) data structure. In a FIFO data structure, the first element added to the queue will be the first one to be removed; equivalently, once a new element is added, all elements that were added before it have to be removed before the new element can be removed. Often a peek or front operation is also provided, returning the value of the front element without dequeuing it. A queue is an example of a linear data structure, or more abstractly a sequential collection. Queues provide services in computer science, transport, and operations research where various entities such as data, objects, persons, or events are stored; in these contexts, the queue performs the function of a buffer. Queues are common in computer programs, where they are implemented as data structures coupled with access routines, as an abstract data structure or, in object-oriented languages, as classes. Common implementations are circular buffers and linked lists. Theoretically, one characteristic of a queue is that it does not have a specific capacity: regardless of how many elements are already contained, a new element can always be added. It can also be empty, at which point removing an element will be impossible until a new element has been added again. Fixed-length arrays are limited in capacity, but it is not true that items need to be copied towards the head of the queue as elements are removed. The simple trick of turning the array into a closed circle and letting the head and tail drift around endlessly makes it unnecessary to ever move items stored in the array. If n is the size of the array, then computing indices modulo n will turn the array into a circle. The array size must be declared ahead of time, but some implementations simply double the declared array size when overflow occurs. Most modern languages with objects or pointers can implement, or come with libraries for, dynamic lists; such data structures may have no fixed capacity limit besides memory constraints.
Queue overflow results from trying to add an element to a full queue, and queue underflow happens when trying to remove an element from an empty queue; a bounded queue is a queue limited to a fixed number of items. There are several efficient implementations of FIFO queues; an efficient implementation is one that can perform the operations, enqueuing and dequeuing, in O(1) time. A doubly linked list has O(1) insertion and deletion at both ends, so it is a natural choice for queues. A regular singly linked list only has efficient insertion and deletion at one end; however, a small modification, keeping a pointer to the last node in addition to the first one, will enable it to implement an efficient queue. A deque can also be implemented using a dynamic array. Queues may be implemented as a separate data type, or may be considered a special case of a double-ended queue (deque). C++'s Standard Template Library provides a queue templated class which is restricted to only push/pop operations. Since J2SE 5.0, Java's library contains a Queue interface that specifies queue operations; implementing classes include LinkedList and ArrayDeque.
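A bounded circular-buffer queue with the modulo-n indexing described above, including overflow and underflow, can be sketched in Python (the class and method names are illustrative):

```python
class CircularQueue:
    """Bounded FIFO queue over a fixed-size array, indices taken modulo n."""
    def __init__(self, n):
        self.buf = [None] * n
        self.head = 0          # index of the front element
        self.count = 0         # number of stored elements

    def enqueue(self, x):
        if self.count == len(self.buf):
            raise OverflowError("queue overflow")
        self.buf[(self.head + self.count) % len(self.buf)] = x
        self.count += 1

    def dequeue(self):
        if self.count == 0:
            raise IndexError("queue underflow")
        x = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.count -= 1
        return x

    def peek(self):
        return self.buf[self.head]     # front element, not removed

q = CircularQueue(3)
for v in (1, 2, 3):
    q.enqueue(v)
print(q.dequeue(), q.dequeue())  # 1 2  (first in, first out)
q.enqueue(4)                     # index wraps; nothing is copied to the head
print(q.dequeue(), q.dequeue())  # 3 4
```

The head and tail drift around the array endlessly; no element is ever moved after it is stored.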

27.
Memory management
–
Memory management is a form of resource management applied to computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and free them for reuse when no longer needed; this is critical to any advanced computer system where more than a single process might be underway at any time. Several methods have been devised that increase the effectiveness of memory management; the quality of the virtual memory manager can have an extensive effect on overall system performance. Modern general-purpose computer systems manage memory at two levels: operating system level and application level. Application-level memory management is generally categorized as either automatic memory management, usually involving garbage collection, or manual memory management. The task of fulfilling an allocation request consists of locating a block of unused memory of sufficient size. Memory requests are satisfied by allocating portions from a large pool of memory called the heap or free store. At any given time, some parts of the heap are in use, while some are free. The allocator's metadata can also inflate the size of small allocations; this is often managed by chunking. The memory management system must track outstanding allocations to ensure that they do not overlap and that no memory is ever lost as a memory leak. The specific dynamic memory allocation algorithm implemented can impact performance significantly. A study conducted in 1994 by Digital Equipment Corporation illustrates the overheads involved for a variety of allocators; the lowest average instruction path length required to allocate a single memory slot was 52. Since the precise location of the allocation is not known in advance, the memory is accessed indirectly, usually through a pointer reference.
Fixed-size-block allocation works well for simple embedded systems where no large objects need to be allocated; moreover, due to its significantly reduced overhead this method can substantially improve performance for objects that need frequent allocation and de-allocation, and it is often used in video games. In buddy allocation, all blocks of a particular size are kept in a linked list or tree. If a smaller size is requested than is available, the smallest available size is selected and split; one of the resulting parts is selected, and the process repeats until the request is fulfilled. When a block is allocated, the allocator will start with the smallest sufficiently large block to avoid needlessly breaking blocks. When a block is freed, it is compared to its buddy; if both are free, they are combined and placed in the correspondingly larger-sized buddy-block list. Virtual memory is a method of decoupling the memory organization from the physical hardware; the applications operate on memory via virtual addresses. Each time an attempt to access stored data is made, the virtual memory subsystem translates the virtual address to a physical address. In this way, the addition of virtual memory enables granular control over memory systems and methods of access. In virtual memory systems the operating system limits how a process can access the memory. Even though the memory allocated for specific processes is normally isolated, processes can sometimes share memory; shared memory is one of the fastest techniques for inter-process communication.
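A fixed-size-block pool of the kind described above can be sketched in Python. The free list here holds block indices in a plain list rather than being threaded through the free blocks themselves, as a real allocator would do; the class name and sizes are illustrative:

```python
class FixedPool:
    """Fixed-size-block allocator: free blocks tracked on a free list."""
    def __init__(self, block_size, nblocks):
        self.block_size = block_size
        self.memory = bytearray(block_size * nblocks)  # the backing pool
        self.free = list(range(nblocks))               # indices of free blocks

    def alloc(self):
        if not self.free:
            return None                  # pool exhausted: allocation fails
        i = self.free.pop()
        return i * self.block_size       # "address" = offset into the pool

    def free_block(self, offset):
        self.free.append(offset // self.block_size)

pool = FixedPool(block_size=32, nblocks=4)
blocks = [pool.alloc() for _ in range(4)]
print(pool.alloc())                      # None: all blocks in use
pool.free_block(blocks[0])
print(pool.alloc() == blocks[0])         # True: the freed block is reused
```

Because every block has the same size, allocation and deallocation are constant-time and the pool never fragments, which is why the scheme suits frequent allocate/free cycles.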

28.
Watchdog timer
–
A watchdog timer is an electronic timer that is used to detect and recover from computer malfunctions. During normal operation, the computer regularly resets the timer to prevent it from elapsing. If, due to a fault or program error, the computer fails to reset the watchdog, the timer will elapse and generate a timeout signal. The timeout signal is used to initiate corrective action or actions; the corrective actions typically include placing the computer system in a safe state and restoring normal system operation. In systems that are remote or otherwise inaccessible, the computer cannot depend on a human to reboot it if it hangs; a watchdog timer is usually employed in cases like these. Watchdog timers may also be used when running untrusted code in a sandbox, to limit the CPU time available to the code and thus prevent some types of denial-of-service attacks. The act of restarting a watchdog timer is referred to as kicking the dog, among other similar terms. In microcontrollers that have a watchdog timer, the watchdog is sometimes kicked by executing a special machine language instruction; an example of this is the CLRWDT instruction found in the instruction set of some PIC microcontrollers. In computers that are running operating systems, watchdog resets are usually invoked through a device driver. For example, in the Linux operating system, a user-space program kicks the watchdog by interacting with the watchdog device driver, typically by writing a zero character to /dev/watchdog. The device driver, which serves to abstract the watchdog hardware from user programs, is also used to configure the time-out period and to start and stop the timer. Watchdog timers come in many configurations, and many allow their configurations to be altered. Microcontrollers often include an integrated, on-chip watchdog. In other computers the watchdog may reside in a chip that connects directly to the CPU. The watchdog and CPU may share a clock signal, as shown in the block diagram below.
Two or more timers are sometimes cascaded to form a multistage watchdog timer; for example, the block diagram below shows a three-stage watchdog. In a multistage watchdog, only the first stage is kicked by the processor. Upon first-stage timeout, a corrective action is initiated and the next stage in the cascade is started. As each subsequent stage times out, it triggers a corrective action and starts the next stage. Upon final-stage timeout, a corrective action is initiated, but no other stage is started because the end of the cascade has been reached. Watchdog timers may have either fixed or programmable time intervals.
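A single-stage software watchdog of the kind described above can be sketched in Python, with threading.Timer standing in for the hardware counter (the class name, timeout values, and corrective action are illustrative):

```python
import threading
import time

class SoftWatchdog:
    """Minimal single-stage software watchdog: if kick() is not called
    within `timeout` seconds, the corrective action runs once."""
    def __init__(self, timeout, on_timeout):
        self.timeout = timeout
        self.on_timeout = on_timeout
        self.timer = None

    def _arm(self):
        self.timer = threading.Timer(self.timeout, self.on_timeout)
        self.timer.daemon = True
        self.timer.start()

    def start(self):
        self._arm()

    def kick(self):
        self.timer.cancel()    # regular kicks reset the countdown
        self._arm()

    def stop(self):
        self.timer.cancel()

fired = []
wd = SoftWatchdog(0.2, lambda: fired.append("reset!"))
wd.start()
for _ in range(3):
    time.sleep(0.05)
    wd.kick()                  # healthy operation: kicks arrive in time
time.sleep(0.4)                # simulate a hang: no kicks arrive
print(fired)                   # the corrective action has fired once
wd.stop()
```

A multistage watchdog would chain several of these, each stage's corrective action arming the next.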

29.
Computer network
–
A computer network or data network is a telecommunications network which allows nodes to share resources. In computer networks, networked computing devices exchange data with each other using a data link. The connections between nodes are established using either cable media or wireless media; the best-known computer network is the Internet. Network computer devices that originate, route and terminate the data are called network nodes. Nodes can include hosts such as personal computers, phones and servers, as well as networking hardware. Two such devices can be said to be networked together when one device is able to exchange information with the other device. Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, the network's size, topology and organizational intent. In most cases, application-specific communications protocols are layered over other more general communications protocols; this formidable collection of information technology requires skilled network management to keep it all running reliably. The chronology of significant computer-network developments begins in the late 1950s. In 1960, the commercial airline reservation system semi-automatic business research environment (SABRE) went online with two connected mainframes. Licklider developed a working group he called the Intergalactic Computer Network. In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Leonard Kleinrock, Paul Baran, and Donald Davies independently developed network systems that used packets to transfer information between computers over a network. In 1965, Thomas Marill and Lawrence G. Roberts created the first wide area network.
This was a precursor to the ARPANET, of which Roberts became program manager. Also in 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1972, commercial services using X.25 were deployed, and later used as an underlying infrastructure for expanding TCP/IP networks. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks", and in 1979 Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s; by 1998, Ethernet supported transmission speeds of a gigabit. Subsequently, higher speeds of up to 100 Gbit/s were added. The ability of Ethernet to scale easily is a contributing factor to its continued use. Providing access to information on shared storage devices is an important feature of many networks; a network allows sharing of files, data, and other types of information, giving authorized users the ability to access information stored on other computers on the network.

30.
Protocol stack
–
The protocol stack is an implementation of a computer networking protocol suite; the terms are often used interchangeably. Strictly speaking, the suite is the definition of the protocols, and the stack is their software implementation. Individual protocols within a suite are often designed with a single purpose in mind. This modularization makes design and evaluation easier. Because each protocol module usually communicates with two others, they are commonly imagined as layers in a stack of protocols. The lowest protocol always deals with the low-level, physical interaction of the hardware; every higher layer adds more features. User applications usually deal only with the topmost layers. In practical implementation, protocol stacks are often divided into three major sections: media, transport, and applications. A particular operating system or platform will often have two well-defined software interfaces: one between the media and transport layers, and one between the transport layers and applications. The media-to-transport interface defines how transport protocol software makes use of particular media; for example, this interface level would define how TCP/IP transport software would talk to Ethernet hardware. Examples of these interfaces include ODI and NDIS in the Microsoft Windows environment. The application-to-transport interface defines how application programs make use of the transport layers; for example, this level would define how a web browser program would talk to TCP/IP transport software. Examples of these interfaces include Berkeley sockets and System V STREAMS in the Unix world. Imagine three computers: A, B, and C. A and B both have radio equipment and can communicate via the airwaves using a suitable network protocol; B and C are connected via a cable. However, neither of the two protocols will be able to transport information from A to C, because these computers are conceptually on different networks.
One therefore needs an inter-network protocol to connect them, and it is easier to leave the base protocols alone and design a protocol that can work on top of any of them. This will make two stacks of two protocols each. The inter-network protocol will communicate with each of the base protocols in their simpler language. A request on computer A to send a chunk of data to C is taken by the upper protocol, which instructs the wireless protocol to transmit the data packet to B. On that computer, the lower-layer handlers pass the packet up to the inter-network protocol, which, recognizing that B is not the final destination, again invokes lower-level functions; this time, the cable protocol is used to send the data to C. There, the packet is again passed up to the upper protocol.
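The A-to-B-to-C relay described above can be sketched in Python, with two stand-in link-layer functions and a trivial inter-network layer (all names and the packet format are invented for illustration):

```python
# Sketch of the A -> B -> C relay: an inter-network layer rides on top of
# whichever link-layer protocol connects each hop.

def radio_send(frame):          # "wireless" link between A and B
    return ("radio", frame)

def cable_send(frame):          # "cable" link between B and C
    return ("cable", frame)

def inet_send(data, dest, link):
    # The inter-network layer wraps the payload with addressing,
    # then hands the packet to whatever link layer this hop offers.
    packet = {"dest": dest, "payload": data}
    return link(packet)

# A sends to C: the first hop uses radio; B, seeing it is not the
# destination, forwards the same inter-network packet over cable.
medium1, packet1 = inet_send("hello", dest="C", link=radio_send)
medium2, packet2 = inet_send(packet1["payload"], dest=packet1["dest"],
                             link=cable_send)
print(medium1, medium2, packet2["payload"])  # radio cable hello
```

The point of the sketch is that the inter-network layer's packet is unchanged across hops; only the link layer beneath it differs.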

31.
CPU time
–
CPU time is measured in clock ticks or seconds. Often, it is useful to measure CPU time as a percentage of the CPU's capacity, called CPU usage. CPU time and CPU usage have two main uses. The first use is to quantify the overall busyness of the system: when the CPU usage is above 70%, the user may experience lag. Such high CPU usage indicates insufficient processing power; either the CPU needs to be upgraded, or the user experience reduced, for example by switching to lower-resolution graphics or reducing animations. The second use, which arose with the advent of multi-tasking, is to quantify how the processor is shared between computer programs. High CPU usage by a single program may indicate that it is highly demanding of processing power, or that it may be malfunctioning, for example stuck in an infinite loop. CPU time allows one to measure how much processing power a single program requires, eliminating interference such as time spent waiting for input or being suspended to allow other programs to run. In contrast, elapsed real time is the time taken from the start of a computer program until its end, as measured by an ordinary clock; elapsed real time includes I/O time and all other types of waits incurred by the program. CPU time or CPU usage can be reported either for each thread, for each process, or for the entire system. Moreover, depending on what exactly the CPU was doing, the reported values can be subdivided as follows. User time is the amount of time the CPU was busy executing code in user space. System time is the amount of time the CPU was busy executing code in kernel space. Idle time is the amount of time the CPU was not busy or, otherwise, the amount of time it executed the System Idle process; idle time actually measures unused CPU capacity. Steal time, on virtualized hardware, is the amount of time the operating system wanted to execute but was not allowed to by the hypervisor; this can happen if the hardware runs multiple guest operating systems. The Unix command top provides CPU time, priority, elapsed real time, and other information for all processes; the Unix command time prints CPU time and elapsed real time for a Unix process.
Elapsed real time was 1.15 seconds. The following is the source code of the application nextPrimeNumber, which was used in the above example. The POSIX functions clock() and getrusage() can be used to get CPU time consumed by any process in a POSIX environment; if the process is multithreaded, the CPU time is the sum for all threads. In Linux, starting from kernel 2.6.26, there is a parameter RUSAGE_THREAD which leads to resource usage statistics for the calling thread only. On multi-processor machines, a computer program can use two or more CPUs for processing using parallel processing scheduling; in such situations, the notion of total CPU time is used, which is the sum of the CPU time consumed by all of the CPUs utilized by the program. Elapsed real time is always greater than or equal to the CPU time for computer programs which use only one CPU for processing. If no wait is involved for I/O or other resources, elapsed real time and CPU time are very similar. If a program uses parallel processing, total CPU time for that program can be more than its elapsed real time.
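The distinction between CPU time and elapsed real time can be demonstrated with Python's time.process_time() and time.monotonic(); the workload size and sleep duration below are arbitrary:

```python
import time

def spin(n):
    # CPU-bound busy work: consumes both CPU time and real time
    s = 0
    for i in range(n):
        s += i * i
    return s

cpu0, wall0 = time.process_time(), time.monotonic()
spin(200_000)
time.sleep(0.2)        # waiting: consumes real time but almost no CPU time
cpu = time.process_time() - cpu0
wall = time.monotonic() - wall0

print(f"CPU time:  {cpu:.3f} s")
print(f"real time: {wall:.3f} s")   # larger, because it includes the sleep
```

On a single CPU the real time always dominates; under parallel processing the summed CPU time of all threads could instead exceed the elapsed real time.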

32.
Analog-to-digital converter
–
In electronics, an analog-to-digital converter (ADC) is a system that converts an analog signal, such as a sound picked up by a microphone or light entering a digital camera, into a digital signal. Typically the digital output is a two's-complement binary number that is proportional to the input. Due to the complexity and the need for precisely matched components, all but the most specialized ADCs are implemented as integrated circuits. A digital-to-analog converter (DAC) performs the reverse function: it converts a digital signal into an analog signal. The conversion involves quantization of the input, so it necessarily introduces a small amount of error. Furthermore, instead of continuously performing the conversion, an ADC does the conversion periodically, sampling the input. The result is a sequence of digital values that have been converted from a continuous-time and continuous-amplitude analog signal to a discrete-time and discrete-amplitude digital signal. An ADC is defined by its bandwidth and its signal-to-noise ratio; the bandwidth of an ADC is characterized primarily by its sampling rate. The dynamic range of an ADC is influenced by many factors, including the resolution, linearity and accuracy, aliasing and jitter. The dynamic range of an ADC is often summarized in terms of its effective number of bits (ENOB); an ideal ADC has an ENOB equal to its resolution. ADCs are chosen to match the bandwidth and required signal-to-noise ratio of the signal to be quantized. If an ADC operates at a sampling rate greater than twice the bandwidth of the signal, then perfect reconstruction is possible given an ideal ADC. The presence of quantization error limits the dynamic range of even an ideal ADC. However, if the dynamic range of the ADC exceeds that of the input signal, the quantization error may be neglected, resulting in an essentially perfect digital representation of the input. The resolution of the converter indicates the number of discrete values it can produce over the range of analog values. The resolution determines the magnitude of the quantization error and therefore determines the maximum possible average signal-to-noise ratio for an ideal ADC without the use of oversampling.
The values are stored electronically in binary form, so the resolution is usually expressed in bits. In consequence, the number of discrete values available, or levels, is assumed to be a power of two. For example, an ADC with a resolution of 8 bits can encode an analog input to one of 256 different levels, since 2^8 = 256. The values can represent the ranges from 0 to 255 or from −128 to 127, depending on the application. Resolution can also be defined electrically, and expressed in volts. The minimum change in input voltage required to guarantee a change in the output code level is called the least significant bit (LSB) voltage.
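An ideal ADC's quantization, level count, and LSB voltage can be sketched in Python; the 3.3 V reference and 8-bit resolution are example values:

```python
def adc(voltage, vref=3.3, bits=8):
    """Ideal ADC: map the range 0..vref onto 2**bits discrete output codes."""
    levels = 2 ** bits                    # 256 levels for 8 bits
    lsb = vref / levels                   # least-significant-bit voltage step
    code = int(voltage / lsb)             # quantize: which step are we on?
    return min(max(code, 0), levels - 1)  # clamp to the valid code range

print(adc(0.0))    # 0    (bottom of the range)
print(adc(3.3))    # 255  (full scale clamps to the top code)
print(adc(1.65))   # 128  (mid-scale)
```

Each code spans one LSB of input voltage (about 12.9 mV here), which is exactly the quantization error bound discussed above.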

33.
Digital-to-analog converter
–
In electronics, a digital-to-analog converter (DAC) is a device that converts a digital signal into an analog signal; an analog-to-digital converter (ADC) performs the reverse function. There are several DAC architectures; the suitability of a DAC for a particular application is determined by three main parameters: resolution, maximum sampling frequency and accuracy. Due to the complexity and the need for precisely matched components, all but the most specialized DACs are implemented as integrated circuits. Digital-to-analog conversion can degrade a signal, so a DAC should be specified that has insignificant errors in terms of the application. DACs are commonly used in music players to convert digital data streams into analog audio signals. They are also used in televisions and mobile phones to convert digital video data into analog video signals which connect to the screen drivers to display monochrome or color images. These two applications use DACs at opposite ends of the speed/resolution trade-off: the audio DAC is a low-speed, high-resolution type, while the video DAC is a high-speed, low- to medium-resolution type. Discrete DACs would typically be extremely high-speed, low-resolution, power-hungry types; very high-speed test equipment, especially sampling oscilloscopes, may also use discrete DACs. A DAC converts an abstract finite-precision number into a physical quantity; in particular, DACs are often used to convert finite-precision time-series data to a continually varying physical signal. A conventional practical DAC converts the numbers into a piecewise constant function made up of a sequence of rectangular functions that is modeled with the zero-order hold. Other DAC methods produce a pulse-density modulated output that can be filtered to produce a smoothly varying signal. As per the Nyquist–Shannon sampling theorem, a DAC can reconstruct the original signal from the sampled data provided that its bandwidth meets certain requirements.
Digital sampling introduces quantization error that manifests as low-level noise added to the reconstructed signal. Instead of outputting impulses, a conventional practical DAC updates the analog voltage at uniform sampling intervals, which is then interpolated via a reconstruction filter to continuously varying levels. The effect of this is that the voltage is held in time at the current value until the next input number is latched. This is equivalent to a zero-order hold operation and has an effect on the frequency response of the reconstructed signal. The fact that DACs output a sequence of piecewise constant values or rectangular pulses causes multiple harmonics above the Nyquist frequency; usually, these are removed with a low-pass filter acting as a reconstruction filter in applications that require it. Other DAC methods produce a pulse-density modulated signal that can then be filtered in a similar way to produce a smoothly varying signal. DACs and ADCs are part of a technology that has contributed greatly to the digital revolution. To illustrate, consider a typical long-distance telephone call: the caller's voice is converted into an analog electrical signal by a microphone, which is then converted to a digital stream by an ADC.
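The zero-order hold described above can be sketched in Python: each input value is simply held until the next one is latched, producing the staircase waveform that the reconstruction filter then smooths (the sample values and oversampling factor are illustrative):

```python
def zero_order_hold(samples, oversample):
    """Model a conventional DAC output: each input code is held constant
    for one sampling interval (expanded here to `oversample` points)."""
    out = []
    for s in samples:
        out.extend([s] * oversample)   # hold the value until the next sample
    return out

codes = [0, 3, 1]
print(zero_order_hold(codes, 4))
# [0, 0, 0, 0, 3, 3, 3, 3, 1, 1, 1, 1]: a staircase, flat between updates
```

It is this piecewise-constant output, rather than a train of ideal impulses, that produces the harmonics above the Nyquist frequency mentioned above.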

34.
Pulse-width modulation
–
Pulse-width modulation (PWM), or pulse-duration modulation, is a modulation technique used to encode a message into a pulsing signal. Although it can be used to encode information for transmission, its main use is to control the power supplied to electrical devices. In addition, PWM is one of the two principal algorithms used in photovoltaic solar battery chargers, the other being maximum power point tracking. The average value of voltage fed to the load is controlled by turning the switch between supply and load on and off at a fast rate: the longer the switch is on compared to the off periods, the higher the total power supplied to the load. The PWM switching frequency has to be much higher than what would affect the load. The term duty cycle describes the proportion of on time to the regular interval or period of time. Duty cycle is expressed in percent, 100% being fully on. The main advantage of PWM is that power loss in the switching devices is very low: when a switch is off there is practically no current, and power loss, being the product of voltage and current, is thus in both states close to zero. PWM also works well with digital controls, which, because of their on/off nature, can easily set the needed duty cycle. PWM has also been used in certain communication systems, where its duty cycle has been used to convey information over a communications channel. The rheostat was one of the earlier methods of controlling power; it was an inefficient scheme, but tolerable because the power was low. Power control mechanisms also needed to be able to drive motors for fans, pumps and robotic servos, and PWM emerged as a solution for this complex problem. One early application of PWM was in the Sinclair X10, a 10 W audio amplifier available in kit form in the 1960s; at around the same time PWM started to be used in AC motor control. Pulse-width modulation uses a pulse wave whose pulse width is modulated, resulting in the variation of the average value ȳ of the waveform, given by ȳ = (1/T) ∫₀ᵀ f(t) dt. As f(t) is a pulse wave, its value is y_max for 0 < t < D·T and y_min for D·T < t < T.
The average value of the waveform is then
$$\bar{y} = \frac{1}{T}\int_0^T f(t)\,dt = \frac{1}{T}\left(\int_0^{DT} y_\mathrm{max}\,dt + \int_{DT}^{T} y_\mathrm{min}\,dt\right) = D\cdot y_\mathrm{max} + (1-D)\cdot y_\mathrm{min},$$
and this latter expression can be simplified in the common case where $y_\mathrm{min} = 0$ to $\bar{y} = D\cdot y_\mathrm{max}$. From this, it is clear that the average value of the signal depends directly on the duty cycle $D$
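The duty-cycle expression above can be checked numerically. The sketch below averages one ideal PWM period sampled at discrete points; the voltages and step count are arbitrary example values:

```python
def pwm_average(duty, y_max, y_min, steps=10000):
    """Numerically average one PWM period sampled at `steps` points.
    The output is y_max for the first `duty` fraction of the period
    and y_min for the remainder."""
    total = sum(y_max if n / steps < duty else y_min for n in range(steps))
    return total / steps

# With y_min = 0 the average reduces to D * y_max.
assert abs(pwm_average(0.25, 5.0, 0.0) - 0.25 * 5.0) < 1e-9
# General case: D * y_max + (1 - D) * y_min.
assert abs(pwm_average(0.6, 5.0, 1.0) - (0.6 * 5.0 + 0.4 * 1.0)) < 1e-9
```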

35.
Serial port
–
In computing, a serial port is a serial communication interface through which information transfers in or out one bit at a time. Throughout most of the history of computers, data was transferred through serial ports to devices such as modems and terminals. Modern computers without serial ports may require serial-to-USB converters to maintain compatibility with RS-232 serial devices. Serial ports are still used in applications such as industrial automation systems, scientific instruments, point-of-sale systems and some industrial and consumer products. Server computers may use a serial port as a control console for diagnostics, and network equipment often uses a serial console for configuration. Serial ports are still used in these areas because they are simple, cheap, and their console functions are highly standardized and widespread. A serial port requires very little supporting software from the host system. Some computers, such as the IBM PC, use an integrated circuit called a UART. This IC converts characters to and from asynchronous serial form, implementing the timing and framing of data in hardware. Very low-cost systems, such as some early home computers, would instead use the CPU to send the data through an output pin, using the bit-banging technique. Early home computers often had proprietary serial ports with pinouts and voltage levels incompatible with RS-232. Low-cost processors now allow higher-speed, but more complex, serial communication standards such as USB and FireWire to replace RS-232. These make it possible to connect devices that would not have operated feasibly over slower serial connections, such as storage and sound devices. Many personal computer motherboards still have at least one serial port; small-form-factor systems and laptops may omit RS-232 connector ports to conserve space, but the electronics are still there. 
RS-232 has been standard for so long that the circuits needed to control a serial port became very cheap and often exist on a single chip, sometimes also with circuitry for a parallel port. The individual signals on a serial port are unidirectional, so when connecting two devices the outputs of one device must be connected to the inputs of the other. Devices are divided into two categories: data terminal equipment (DTE) and data circuit-terminating equipment (DCE). A line that is an output on a DTE device is an input on a DCE device and vice versa, so a DCE device can be connected to a DTE device with a straight wired cable; conventionally, computers and terminals are DTE while modems and peripherals are DCE. If it is necessary to connect two DTE devices, a cross-over null modem, in the form of either an adapter or a cable, is used. Generally, serial port connectors are gendered, only allowing connectors to mate with a connector of the opposite gender. With D-subminiature connectors, the male connectors have protruding pins. Either type of connector can be mounted on equipment or a panel, or terminate a cable. Connectors mounted on DTE are likely to be male, and those mounted on DCE are likely to be female; however, this is far from universal: for instance, most serial printers have a female DB25 connector, but they are DTEs. The desire to supply serial interface cards with two ports required that IBM reduce the size of the connector to fit onto a single card back panel; a DE-9 connector also fits onto a card alongside a second DB-25 connector. Starting around the time of the introduction of the IBM PC-AT, serial ports were built with a 9-pin connector to save cost
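The asynchronous framing that a UART implements in hardware can be illustrated in a few lines. This is a toy model of the common 8N1 format (one start bit, eight data bits LSB first, one stop bit), not the API of any real UART driver:

```python
def frame_8n1(byte):
    """Frame one byte for asynchronous serial transmission:
    start bit (0), eight data bits LSB first, one stop bit (1)."""
    bits = [0]                                   # start bit
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits += [1]                                  # stop bit
    return bits

def deframe_8n1(bits):
    """Recover the byte from a 10-bit 8N1 frame."""
    assert bits[0] == 0 and bits[9] == 1, "bad framing"
    return sum(b << i for i, b in enumerate(bits[1:9]))

frame = frame_8n1(0x41)            # ASCII 'A'
assert frame == [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
assert deframe_8n1(frame) == 0x41
```

Bit-banging amounts to the CPU driving an output pin through exactly such a bit sequence at the agreed baud rate, with the receiver sampling the line at the same rate.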

36.
Serial Peripheral Interface Bus
–
The Serial Peripheral Interface bus is a synchronous serial communication interface specification used for short-distance communication, primarily in embedded systems. The interface was developed by Motorola in the 1980s and has become a de facto standard. Typical applications include Secure Digital cards and liquid crystal displays. SPI devices communicate in full duplex mode using a master-slave architecture with a single master. The master device originates the frame for reading and writing; multiple slave devices are supported through selection with individual slave select lines. Sometimes SPI is called a four-wire serial bus, contrasting with three-, two- and one-wire serial buses. The SSI protocol, by contrast, employs differential signaling and provides only a single communication channel. The SPI bus specifies the following logic signals:

SCLK: Serial Clock (output from master)
MOSI: Master Output Slave Input, or Master Out Slave In (data output from master)
MISO: Master Input Slave Output, or Master In Slave Out (data output from slave)
SDIO: Serial Data I/O (bidirectional, used in half-duplex variants)
SS: Slave Select (output from master)

Alternative names are also in wide use:

Master Output → Slave Input (MOSI): SIMO, SDI, DI, DIN, SI
Master Input ← Slave Output (MISO): SOMI, SDO, DO, DOUT, SO, MRST
Serial Data I/O (SDIO): SIO
Slave Select (SS): S̅S̅, SSEL, CS, C̅S̅, CE, nSS, /SS, SS#

The MOSI/MISO convention requires that, when using the alternate names, SDI on the master be connected to SDO on the slave. Slave Select has the same functionality as chip select and is used instead of an addressing concept. Pin names are capitalized, as in Slave Select and Serial Clock. The SPI bus can operate with a single master device and with one or more slave devices. If a single slave device is used, the SS pin may be fixed to logic low if the slave permits it. Some slaves require a falling edge of the select signal to initiate an action; an example is the Maxim MAX1242 ADC, which starts conversion on a high→low transition. With multiple slave devices, an independent SS signal is required from the master for each slave device. 
Most slave devices have tri-state outputs, so their MISO signal becomes high impedance when the device is not selected. Devices without tri-state outputs cannot share SPI bus segments with other devices; only one such slave could talk to the master
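The full-duplex exchange described above amounts to two shift registers trading their contents. The sketch below simulates one byte exchange in SPI mode 0, MSB first; it is a simplified model that ignores clock polarity/phase options and real timing:

```python
def spi_exchange(master_byte, slave_byte):
    """Simulate one full-duplex SPI byte exchange (mode 0, MSB first):
    on each clock the master shifts a bit out on MOSI while the slave
    shifts a bit out on MISO, so the two bytes are swapped in 8 clocks."""
    master_in = slave_in = 0
    for i in range(7, -1, -1):
        mosi = (master_byte >> i) & 1     # master drives MOSI
        miso = (slave_byte >> i) & 1      # slave drives MISO
        slave_in = (slave_in << 1) | mosi
        master_in = (master_in << 1) | miso
    return master_in, slave_in

# After eight clocks each side holds the other's byte.
assert spi_exchange(0xA5, 0x3C) == (0x3C, 0xA5)
```

This symmetry is why SPI reads are often performed by transmitting a dummy byte: the master must clock the bus for the slave's data to shift out.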

37.
File system
–
In computing, a file system or filesystem is used to control how data is stored and retrieved. Without a file system, information placed in a storage medium would be one large body of data with no way to tell where one piece of information stops and the next begins. By separating the data into pieces and giving each piece a name, the information is easily isolated and identified. Taking its name from the way paper-based information systems are named, the structure and logic rules used to manage the groups of information and their names is called a file system. There are many different kinds of file systems; each one has different structure and logic, and different properties of speed, flexibility, security, size and more. Some file systems have been designed for specific applications; for example, the ISO 9660 file system is designed specifically for optical discs. File systems can be used on different types of storage devices that use different kinds of media. The most common device in use today is the hard disk drive. Other kinds of media that are used include flash memory and magnetic tape; in some cases, such as with tmpfs, the computer's main memory is used to create a temporary file system for short-term use. Some file systems are used on local storage devices; others provide file access via a network protocol. Some file systems are virtual, meaning that the files are computed on request or are merely a mapping into a different file system used as a backing store. The file system manages access to both the content of files and the metadata about those files. It is responsible for arranging storage space; reliability and efficiency with regard to the physical storage medium are important design considerations. Before the advent of computers the term file system was used to describe a method of storing and retrieving paper documents. By 1961 the term was being applied to computerized filing alongside the original meaning; by 1964 it was in general use. 
A file system consists of two or three layers; sometimes the layers are explicitly separated, and sometimes the functions are combined. The logical file system is responsible for interaction with the user application. It provides the application program interface for file operations (OPEN, CLOSE, READ, etc.) and passes the requested operation to the layer below it for processing. The logical file system manages open file table entries and per-process file descriptors; this layer provides file access, directory operations, security and protection. The second, optional, layer is the virtual file system. This interface allows support for multiple concurrent instances of physical file systems. The third layer is the physical file system, which deals with the actual operation of the storage device
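The layering described above can be sketched as interfaces. This is a hypothetical toy model: the class names and methods are invented for illustration and do not correspond to any operating system's actual VFS API:

```python
from abc import ABC, abstractmethod

class PhysicalFS(ABC):
    """Interface the virtual file system layer dispatches to (assumed API)."""
    @abstractmethod
    def read(self, path): ...
    @abstractmethod
    def write(self, path, data): ...

class RamFS(PhysicalFS):
    """Toy in-memory physical file system, one of possibly many backends."""
    def __init__(self):
        self._files = {}
    def read(self, path):
        return self._files[path]
    def write(self, path, data):
        self._files[path] = data

class LogicalFS:
    """Logical layer: tracks open files, delegates I/O to the mounted backend."""
    def __init__(self, backend):
        self.backend = backend
        self.open_files = set()   # stand-in for an open file table
    def open(self, path):
        self.open_files.add(path)
    def write(self, path, data):
        assert path in self.open_files, "file not open"
        self.backend.write(path, data)
    def read(self, path):
        assert path in self.open_files, "file not open"
        return self.backend.read(path)

fs = LogicalFS(RamFS())
fs.open("/tmp/note")
fs.write("/tmp/note", b"hello")
assert fs.read("/tmp/note") == b"hello"
```

The point of the middle interface is that `LogicalFS` never depends on how `RamFS` stores bytes; any class satisfying `PhysicalFS` could be mounted instead.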

38.
Terminal server
–
A terminal server enables organizations to connect devices with an RS-232, RS-422 or RS-485 serial interface to a local area network. Products marketed as terminal servers can be very simple devices that do not offer any security functionality, such as data encryption and user authentication. The primary application scenario is to enable serial devices to access network server applications, or vice versa. Companies that need a server with these advanced functions usually want to remotely control, monitor and diagnose their equipment. A console server is a device or service that provides access to the console of a computing device via networking technologies. Digital Equipment Corporation's DECserver 100, 200 and 300 are early examples of this technology; these later terminal server products also included much larger flash memory and full support for the Telnet part of the TCP/IP protocol suite. Many other companies entered the market with devices pre-loaded with software fully compatible with LAT. A terminal server is used in many ways, but in the simplest sense it is used whenever a user has a serial device and needs to move its data over the LAN. Raw TCP socket connection: a raw TCP socket connection can be initiated from the server or from the remote host/server. This can be point-to-point or shared, where serial devices can be shared amongst multiple devices; TCP sessions can be initiated from the TCP server application or from the terminal server. Console management (reverse Telnet, reverse SSH): in console management terminology, users run Telnet or SSH on their client, attach to the terminal server, and then connect to the serial device. In this application, terminal servers are also called console servers because they are used to connect to console ports, which are found on products like routers, PBXes and switches; the idea is to gain access to those devices via their console port. 
Connect serial-based applications with a COM/TTY port driver: many software applications have been written to communicate with devices that are connected to a server's serial COM ports. In this application, serial ports can be connected to network servers or workstations running COM port redirector software operating as a virtual COM port; many terminal server vendors include COM port redirector software with their terminal servers. This application need is most common in Windows environments, but also exists in Linux. Serial tunneling between two serial devices: serial tunneling enables users to establish a link across Ethernet to a serial port on another terminal server. Back to back: this application is designed to solve a wiring problem, and is ideal where a device exists with an application written to gather information from that device. It allows users to eliminate the wiring, and it can also be used with industrial devices so that those devices can be run transparently across the network. Virtual modem: a virtual modem is another example of a back-to-back application; it may be used to replace modems while still using an AT command set
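The raw TCP socket mode described above can be sketched with a socket pair standing in for the network. The relay function and the fixed sensor string are invented for illustration, and no real serial hardware is assumed:

```python
import socket

def relay_serial_to_socket(serial_bytes, sock, chunk=4):
    """Push data that arrived from a (simulated) serial device out
    over the socket in small chunks, as a terminal server would."""
    for i in range(0, len(serial_bytes), chunk):
        sock.sendall(serial_bytes[i:i + chunk])

# A socketpair stands in for the TCP connection between the terminal
# server (device_side) and the network application (net_side).
net_side, device_side = socket.socketpair()
relay_serial_to_socket(b"TEMP=21.5\n", device_side)
device_side.close()

received = b""
while True:
    data = net_side.recv(1024)
    if not data:                  # peer closed the connection
        break
    received += data
net_side.close()

assert received == b"TEMP=21.5\n"
```

A real terminal server does the same relaying in both directions between a UART and a listening TCP socket, which is why the application on the network side only ever sees a byte stream.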

Piecewise constant output of a conventional DAC lacking a reconstruction filter. In a practical DAC, a filter or the finite bandwidth of the device smooths out the step response into a continuous curve.


Fig. 2: A simple method to generate the PWM pulse train corresponding to a given signal is the intersective PWM: the signal (here the red sine wave) is compared with a sawtooth waveform (blue). When the latter is less than the former, the PWM signal (magenta) is in high state (1). Otherwise it is in the low state (0).

Fig. 3: Principle of the delta PWM. The output signal (blue) is compared with the limits (green). These limits correspond to the reference signal (red), offset by a given value. Every time the output signal (blue) reaches one of the limits, the PWM signal changes state.

Fig. 4: Principle of the sigma-delta PWM. The top green waveform is the reference signal, from which the output signal (PWM, in the bottom plot) is subtracted to form the error signal (blue, in top plot). This error is integrated (middle plot), and when the integral of the error exceeds the limits (red lines), the output changes state.

Fig. 5: Three types of PWM signals (blue): leading edge modulation (top), trailing edge modulation (middle) and centered pulses (both edges are modulated, bottom). The green lines are the sawtooth waveform (first and second cases) and a triangle waveform (third case) used to generate the PWM waveforms using the intersective method.
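The intersective method of Fig. 2 can be sketched numerically: sample the reference, compare it with a rising sawtooth each carrier period, and output high while the reference exceeds the carrier. The step counts and the constant test references are arbitrary example values:

```python
def intersective_pwm(reference, steps_per_period=100, periods=1):
    """Intersective PWM: output is high (1) while the reference sample
    exceeds a rising sawtooth carrier that sweeps -1..1 each period."""
    out = []
    for p in range(periods):
        for n in range(steps_per_period):
            saw = -1.0 + 2.0 * n / steps_per_period  # sawtooth carrier
            t = p + n / steps_per_period             # time, in periods
            out.append(1 if reference(t) > saw else 0)
    return out

# A constant reference of 0 (mid-range) should give ~50% duty cycle.
pulse = intersective_pwm(lambda t: 0.0, steps_per_period=100)
assert abs(sum(pulse) / len(pulse) - 0.5) < 0.02
```

Raising the reference widens the pulses: a constant reference of 0.5 yields roughly 75% duty cycle, consistent with the average-value formula in the article.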
