.NET Framework began as proprietary software, although Microsoft worked to standardize the software stack almost immediately, even before its first release. Despite the standardization efforts, developers, mainly those in the free and open-source software communities, expressed their unease with the selected terms and the prospects of any free and open-source implementation, especially regarding software patents. Since then, Microsoft has changed .NET development to more closely follow a contemporary model of a community-developed software project, including issuing an update to its patent promise to address the concerns.

Microsoft began developing .NET Framework in the late 1990s, originally under the name of Next Generation Windows Services (NGWS), as part of the .NET strategy. By late 2000, the first beta versions of .NET 1.0 were released.

While Microsoft and their partners hold patents for the CLI and C#, ECMA and ISO require that all patents essential to implementation be made available under "reasonable and non-discriminatory terms". The firms agreed to meet these terms, and to make the patents available royalty-free. However, this did not apply to the parts of .NET Framework not covered by ECMA-ISO standards, which included Windows Forms, ADO.NET, and ASP.NET. Patents that Microsoft holds in these areas may have deterred non-Microsoft implementations of the full framework.[6]

On October 3, 2007, Microsoft announced that the source code for .NET Framework 3.5 libraries was to become available under the Microsoft Reference Source License (Ms-RSL[a]).[7] The source code repository became available online on January 16, 2008 and included BCL, ASP.NET, ADO.NET, Windows Forms, WPF, and XML. Scott Guthrie of Microsoft promised that LINQ, WCF, and WF libraries were being added.[8]


On November 12, 2014, Microsoft announced .NET Core, in an effort to include cross-platform support for .NET: the source release of Microsoft's CoreCLR implementation, source for the "entire […] library stack" for .NET Core, and the adoption of a conventional ("bazaar"-like) open-source development model under the stewardship of the .NET Foundation. Miguel de Icaza described .NET Core as a "redesigned version of .NET that is based on the simplified version of the class libraries",[9] and Microsoft's Immo Landwerth explained that .NET Core would be "the foundation of all future .NET platforms". At the time of the announcement, the initial release of the .NET Core project had been seeded with a subset of the libraries' source code and coincided with the relicensing of Microsoft's existing .NET reference source away from the restrictions of the Ms-RSL. Landwerth acknowledged the disadvantages of the formerly selected shared license, explaining that it made the codename Rotor project "a non-starter" as a community-developed open source project because it did not meet the criteria of an Open Source Initiative (OSI) approved license.[10][11][12]

In November 2014, Microsoft also produced an update to its patent grants, further extending their scope beyond its prior pledges. Prior projects like Mono existed in a legal grey area because Microsoft's earlier grants applied only to the technology in "covered specifications", namely the 4th editions of ECMA-334 (C#) and ECMA-335 (CLI). The new patent promise, however, places no ceiling on the specification version, and even extends to any .NET runtime technologies documented on MSDN that have not been formally specified by the ECMA group, if a project chooses to implement them. This allows Mono and other projects to maintain feature parity with modern .NET features introduced since the 4th editions were published, without risk of patent litigation over the implementation of those features. The new grant does maintain the restriction that any implementation must maintain minimum compliance with the mandatory parts of the CLI specification.[13]

On March 31, 2016, Microsoft announced at Microsoft Build that it would completely relicense Mono under the MIT License, even in scenarios where a commercial license had previously been required.[14] Microsoft also supplemented its prior patent promise for Mono, stating that it will not assert any "applicable patents" against parties that are "using, selling, offering for sale, importing, or distributing Mono."[15][16] It was also announced that the Mono Project had been contributed to the .NET Foundation. These developments followed the acquisition of Xamarin, which was announced in February 2016 and completed on March 18, 2016.[17]

Microsoft's press release highlights that the cross-platform commitment now allows for a fully open-source, modern server-side .NET stack. However, Microsoft does not plan to release the source for WPF or Windows Forms.[18][19]


Common Language Infrastructure (CLI) provides a language-neutral platform for application development and execution. By implementing the core aspects of .NET Framework within the scope of the CLI, these functions are not tied to one language but are available across the many languages supported by the framework.

Compiled CIL code is stored in CLI assemblies. As mandated by the specification, assemblies are stored in the Portable Executable (PE) file format, common on the Windows platform for all dynamic-link library (DLL) and executable (EXE) files. Each assembly consists of one or more files, one of which must contain a manifest bearing the metadata for the assembly. The complete name of an assembly (not to be confused with the file name on disk) contains its simple text name, version number, culture, and public key token. Assemblies are considered equivalent if they share the same complete name.
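The four parts of the complete name can be inspected through reflection; a minimal sketch (the simple name and version printed will vary by runtime, e.g. mscorlib on .NET Framework):

```csharp
using System;
using System.Reflection;

// Inspect the four parts of an assembly's complete (display) name:
// simple name, version, culture, and public key token.
class AssemblyNameDemo
{
    static void Main()
    {
        AssemblyName name = typeof(object).Assembly.GetName();

        Console.WriteLine(name.Name);         // simple text name
        Console.WriteLine(name.Version);      // e.g. 4.0.0.0
        Console.WriteLine(name.CultureInfo);  // neutral for most library assemblies
        Console.WriteLine(BitConverter.ToString(name.GetPublicKeyToken()));

        // FullName combines all four parts into the display name used for binding,
        // e.g. "mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
        Console.WriteLine(name.FullName);
    }
}
```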

A private key can also be used by the creator of the assembly for strong naming. The public key token identifies which private key an assembly is signed with. Only the creator of the key pair (typically the person signing the assembly) can sign assemblies that have the same strong name as a prior version of the assembly, since the creator possesses the private key. Strong naming is required to add assemblies to the Global Assembly Cache.

Starting with Visual Studio 2015, .NET Native compilation technology allows for the compilation of .NET code of Universal Windows Platform apps directly to machine code rather than CIL code, but the app must be written in either C# or Visual Basic.NET.[42]

.NET Framework includes a set of standard class libraries. The class library is organized in a hierarchy of namespaces. Most of the built-in application programming interfaces (APIs) are part of either System.* or Microsoft.* namespaces. These class libraries implement many common functions, such as file reading and writing, graphic rendering, database interaction, and XML document manipulation. The class libraries are available for all CLI-compliant languages. The class library is divided into two parts (with no clear boundary): Base Class Library (BCL) and Framework Class Library (FCL).
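As a brief illustration, the following sketch exercises two of the common functions mentioned above: file reading and writing from System.IO and XML manipulation from System.Xml.Linq (the file name is arbitrary for the example):

```csharp
using System;
using System.IO;
using System.Xml.Linq;

// Common BCL tasks from the System.* namespaces.
class ClassLibraryDemo
{
    static void Main()
    {
        // File writing and reading (System.IO)
        string path = Path.Combine(Path.GetTempPath(), "demo.txt");
        File.WriteAllText(path, "hello");
        string text = File.ReadAllText(path);

        // XML document manipulation (System.Xml.Linq)
        XElement doc = new XElement("root", new XElement("item", text));
        Console.WriteLine(doc.ToString(SaveOptions.DisableFormatting));
        // <root><item>hello</item></root>
    }
}
```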

The BCL includes a small subset of the entire class library and is the core set of classes that serve as the basic API of the CLR.[43] In .NET Framework, most classes considered to be part of the BCL reside in mscorlib.dll, System.dll and System.Core.dll. BCL classes are available in .NET Framework as well as its alternative implementations, including .NET Compact Framework, Microsoft Silverlight, .NET Core and Mono.

With the introduction of alternative implementations (e.g., Silverlight), Microsoft introduced the concept of Portable Class Libraries (PCL), allowing a consuming library to run on more than one platform. With the further proliferation of .NET platforms, the PCL approach failed to scale (PCLs are defined intersections of API surface between two or more platforms).[44] As the next evolutionary step of PCL, the .NET Standard Library was created retroactively, based on the System.Runtime.dll-based APIs found in UWP and Silverlight. New .NET platforms are encouraged to implement a version of the standard library, allowing existing third-party libraries to run on them without requiring new versions. The .NET Standard Library allows an independent evolution of the library and app model layers within the .NET architecture.[45]
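In later SDK-style tooling, a library opts into the standard by declaring it as its target framework; a minimal project-file sketch (the netstandard2.0 version is merely an example):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Target the .NET Standard contract rather than a concrete platform,
         so any platform implementing this standard version can consume the library. -->
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```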

NuGet is the package manager for all .NET platforms. It is used to retrieve third-party libraries into a .NET project with a global library feed at NuGet.org.[46] Private feeds can be maintained separately, e.g., by a build server or a file system directory.
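In SDK-style projects, a dependency retrieved through NuGet is recorded as a PackageReference item; a sketch (the package name and version are illustrative):

```xml
<ItemGroup>
  <!-- Restored at build time from NuGet.org or a configured private feed. -->
  <PackageReference Include="Newtonsoft.Json" Version="13.0.3" />
</ItemGroup>
```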

Atop the class libraries, multiple app models are used to create apps. .NET Framework supports Console, Windows Forms, Windows Presentation Foundation, ASP.NET and ASP.NET Core apps by default. Other app models are offered by alternative implementations of the .NET Framework. Console, UWP and ASP.NET Core are available on .NET Core. Mono is used to power Xamarin app models for Android, iOS, and macOS. The retroactive architectural definition of app models showed up in early 2015 and was also applied to prior technologies like Windows Forms or WPF.

Microsoft introduced C++/CLI in Visual Studio 2005, which is a language and means of compiling Visual C++ programs to run within the .NET Framework. Some parts of the C++ program still run within an unmanaged Visual C++ Runtime, while specially modified parts are translated into CIL code and run with the .NET Framework's CLR.

Assemblies compiled using the C++/CLI compiler are termed mixed-mode assemblies, since they contain native and managed code in the same DLL.[47] Such assemblies are more complex to reverse engineer, since .NET decompilers such as .NET Reflector reveal only the managed code.

.NET Framework introduces a Common Type System (CTS) that defines all possible data types and programming constructs supported by the CLR and how they may or may not interact with each other, conforming to the CLI specification. Because of this feature, .NET Framework supports the exchange of types and object instances between libraries and applications written using any conforming .NET language.

CTS and the CLR used in .NET Framework also enforce type safety. This prevents ill-defined casts, wrong method invocations, and memory size issues when accessing an object. This also makes most CLI languages statically typed (with or without type inference). However, starting with .NET Framework 4.0, the Dynamic Language Runtime extended the CLR, allowing dynamically typed languages to be implemented atop the CLI.
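Both behaviors can be seen in a short sketch: an ill-defined cast is rejected by the runtime with an exception rather than corrupting memory, while the DLR's `dynamic` binding defers member resolution until execution:

```csharp
using System;

class TypeSafetyDemo
{
    static void Main()
    {
        // Type safety: the compiler allows an object-to-int cast syntactically,
        // but the CLR verifies it at run time and rejects it.
        object boxed = "not a number";
        try
        {
            int n = (int)boxed;  // invalid: the object is a String, not an Int32
        }
        catch (InvalidCastException)
        {
            Console.WriteLine("cast rejected by the CLR");
        }

        // Since .NET Framework 4.0, the DLR permits dynamic typing atop the CLI:
        // member lookup on 'd' happens at run time, not compile time.
        dynamic d = "hello";
        Console.WriteLine(d.Length);  // resolved dynamically -> 5
    }
}
```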

While Microsoft has never implemented the full framework on any system except Microsoft Windows, it has engineered the framework to be cross-platform,[48] and implementations are available for other operating systems (see Silverlight and § Alternative implementations). Microsoft submitted the specifications for the CLI (which includes the core class libraries, CTS, and CIL),[49][50][51] C#,[52] and C++/CLI[53] to both Ecma International (ECMA) and the International Organization for Standardization (ISO), making them available as official standards. This makes it possible for third parties to create compatible implementations of the framework and its languages on other platforms.

.NET Framework has its own security mechanism with two general features: Code Access Security (CAS), and validation and verification. CAS is based on evidence that is associated with a specific assembly. Typically the evidence is the source of the assembly (whether it is installed on the local machine or has been downloaded from the Internet). CAS uses evidence to determine the permissions granted to the code. Other code can demand that calling code be granted a specified permission. The demand causes CLR to perform a call stack walk: every assembly of each method in the call stack is checked for the required permission; if any assembly is not granted the permission a security exception is thrown.
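A sketch of how such a demand looked in legacy .NET Framework code (CAS policy was deprecated in .NET Framework 4 and these APIs are not functional on modern runtimes, so this is historical usage; the path is arbitrary):

```csharp
using System;
using System.Security;
using System.Security.Permissions;

class CasDemo
{
    static void Main()
    {
        // Demand read access to a directory. The Demand call triggers a stack
        // walk: every assembly of each method in the call stack is checked for
        // this permission, and SecurityException is thrown if any caller lacks it.
        var permission = new FileIOPermission(
            FileIOPermissionAccess.Read, @"C:\temp");
        try
        {
            permission.Demand();
            Console.WriteLine("all callers hold read permission");
        }
        catch (SecurityException)
        {
            Console.WriteLine("a caller in the stack was denied read permission");
        }
    }
}
```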

Managed CIL bytecode is easier to reverse-engineer than native code, unless obfuscated.[54][55] .NET decompiler programs enable developers with no reverse-engineering skills to view the source code behind unobfuscated .NET assemblies. In contrast, apps compiled to native machine code are much harder to reverse-engineer, and source code is almost never produced successfully, mainly because of compiler optimizations and lack of reflection.[56] This creates concerns in the business community over the possible loss of trade secrets and the bypassing of license control mechanisms. To mitigate this, Microsoft has included Dotfuscator Community Edition with Visual Studio .NET since 2002.[b] Third-party obfuscation tools are also available from vendors such as VMware, V.i. Labs, Turbo, and Red Gate Software. Method-level encryption tools for .NET code are available from vendors such as SafeNet.

The CLR frees the developer from the burden of managing memory (allocating and freeing when done); it handles memory management itself by detecting when memory can be safely freed. Instantiations of .NET types (objects) are allocated from the managed heap, a pool of memory managed by the CLR. As long as a reference to an object exists, which may be either direct or via a graph of objects, the object is considered to be in use. When no reference to an object exists and it cannot be reached or used, it becomes garbage, eligible for collection.

.NET Framework includes a garbage collector (GC) which runs periodically, on a separate thread from the application's thread, that enumerates all the unusable objects and reclaims the memory allocated to them. It is a non-deterministic, compacting, mark-and-sweep garbage collector. The GC runs only when a set amount of memory has been used or there is enough memory pressure on the system. Since it is not guaranteed when the conditions to reclaim memory are reached, GC runs are non-deterministic. Each .NET application has a set of roots, which are pointers to objects on the managed heap (managed objects). These include references to static objects, objects defined as local variables or method parameters currently in scope, and objects referred to by CPU registers.[57] When the GC runs, it pauses the application and then, for each object referred to in the roots, it recursively enumerates all the objects reachable from the root objects and marks them as reachable. It uses CLI metadata and reflection to discover the objects encapsulated by an object, and then recursively walks them. It then enumerates all the objects on the heap (which were initially allocated contiguously) using reflection. All objects not marked as reachable are garbage.[57] This is the mark phase.[58] Since the memory held by garbage is of no consequence, it is considered free space. However, this leaves chunks of free space between objects which were initially contiguous. The objects are then compacted together to make free space on the managed heap contiguous again.[57][58] Any reference to an object invalidated by moving the object is updated by the GC to reflect the new location.[58] The application is resumed after garbage collection ends. Recent versions of .NET Framework use concurrent garbage collection, which runs in the background alongside user code, making pauses largely unnoticeable.[59]

The garbage collector used by .NET Framework is also generational.[60] Objects are assigned a generation. Newly created objects are tagged Generation 0. Objects that survive one garbage collection are tagged Generation 1. Generation 1 objects that survive another collection are Generation 2. The framework uses up to Generation 2 objects.[60] Higher generation objects are garbage collected less often than lower generation objects. This raises the efficiency of garbage collection, as older objects tend to have longer lifetimes than newer objects.[60] By ignoring older objects in most collection runs, fewer checks and compaction operations are needed in total.[60]
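The promotion behavior can be observed with GC.GetGeneration; a small sketch (exact promotion timing is an implementation detail of the collector, but an object that survives a forced full collection normally moves up one generation):

```csharp
using System;

class GenerationDemo
{
    static void Main()
    {
        object survivor = new object();
        Console.WriteLine(GC.GetGeneration(survivor));  // typically 0: freshly allocated

        GC.Collect();  // survivor is still referenced, so it survives and is promoted
        Console.WriteLine(GC.GetGeneration(survivor));  // typically 1

        GC.Collect();  // surviving another collection promotes it again
        Console.WriteLine(GC.GetGeneration(survivor));  // typically 2, the highest generation

        GC.KeepAlive(survivor);  // keep the reference live past the last collection
    }
}
```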

When an application is launched, the .NET Framework compiles the CIL code into executable machine code using its just-in-time compiler, which makes start-up slower than for a precompiled application.[61] To speed up launches, developers may use the Native Image Generator (NGEN) utility to compile a .NET application ahead of time and cache the resulting native image in the .NET Native Image Cache; subsequent launches then load the cached image instead of invoking the JIT compiler.[62]

The garbage collector, which is integrated into the environment, can introduce unanticipated delays of execution over which the developer has little direct control. "In large applications, the number of objects that the garbage collector needs to work with can become very large, which means it can take a very long time to visit and rearrange all of them."[63]

.NET Framework has provided support for calling Streaming SIMD Extensions (SSE) via managed code since April 2014, with Visual Studio 2013 Update 2. Mono, however, had provided support for SIMD extensions since version 2.2, released in 2009, within the Mono.Simd namespace.[64] Mono's lead developer Miguel de Icaza has expressed hope that this SIMD support will be adopted by the CLR's ECMA standard.[65] Streaming SIMD Extensions have been available in x86 CPUs since the introduction of the Pentium III. Some other architectures, such as ARM and MIPS, also have SIMD extensions. If the CPU lacks support for those extensions, the instructions are simulated in software.[66]
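The managed SIMD support surfaces as Vector&lt;T&gt; in the System.Numerics namespace; a sketch of vectorized addition (the JIT emits SSE/AVX instructions where the hardware supports them and falls back to scalar code otherwise):

```csharp
using System;
using System.Numerics;

class SimdDemo
{
    static void Main()
    {
        float[] a = { 1f, 2f, 3f, 4f, 5f, 6f, 7f, 8f };
        float[] b = { 8f, 7f, 6f, 5f, 4f, 3f, 2f, 1f };
        float[] sum = new float[a.Length];

        // Lanes per vector: e.g. 4 with SSE, 8 with AVX.
        int width = Vector<float>.Count;

        // Add 'width' elements per iteration with a single vector operation.
        int i = 0;
        for (; i <= a.Length - width; i += width)
        {
            var va = new Vector<float>(a, i);
            var vb = new Vector<float>(b, i);
            (va + vb).CopyTo(sum, i);
        }

        // Handle any remaining scalar tail.
        for (; i < a.Length; i++)
            sum[i] = a[i] + b[i];

        Console.WriteLine(string.Join(",", sum));  // 9,9,9,9,9,9,9,9
    }
}
```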

.NET Framework is the predominant implementation of .NET technologies. Other implementations for parts of the framework exist. Although the runtime engine is described by an ECMA-ISO specification, other implementations of it may be encumbered by patent issues; ISO standards may include the disclaimer, "Attention is drawn to the possibility that some of the elements of this document may be the subject of patent rights. ISO shall not be held responsible for identifying any or all such patent rights."[67] It is harder to develop alternatives to FCL, which is not described by an open standard and may be subject to copyright restrictions. Also, parts of FCL have Windows-specific functions and behavior, so implementation on non-Windows platforms can be problematic.

Some alternative implementations of parts of the framework are listed here.

.NET Micro Framework is a .NET platform for extremely resource-constrained devices. It includes a small version of the CLR and supports development in C# (though some developers were able to use VB.NET,[68] albeit with some hacking and limited functionality) and debugging (in an emulator or on hardware), both using Microsoft Visual Studio. It also features a subset of the .NET Framework Class Library (about 70 classes with about 420 methods), a GUI framework loosely based on WPF, and additional libraries specific to embedded applications.

.NET Core is an alternative Microsoft implementation of the managed-code framework; it has similarities with .NET Framework and even shares some APIs, but is designed on a different set of principles: it is cross-platform and free and open-source.

Mono is an implementation of CLI and FCL, and provides added functions. It is dual-licensed as free and proprietary software. It includes support for ASP.NET, ADO.NET, and Windows Forms libraries for a wide range of architectures and operating systems. It also includes C# and VB.NET compilers.

Portable.NET (part of DotGNU) provides an implementation of CLI, parts of FCL, and a C# compiler. It supports a variety of CPUs and operating systems. The project was discontinued, with the last stable release in 2009.

^ "Microsoft's Empty Promise". Free Software Foundation. 16 July 2009. Archived from the original on August 5, 2009. Retrieved August 3, 2009. "However, there are several libraries that are included with Mono, and commonly used by applications like Tomboy, that are not required by the standard. And just to be clear, we're not talking about Windows-specific libraries like ASP.NET and Windows Forms. Instead, we're talking about libraries under the System namespace that provide common functionality programmers expect in modern programming languages."

1.
Microsoft
–
Its best known software products are the Microsoft Windows line of operating systems, Microsoft Office office suite, and Internet Explorer and Edge web browsers. Its flagship hardware products are the Xbox video game consoles and the Microsoft Surface tablet lineup, as of 2016, it was the worlds largest software maker by revenue, and one of the worlds most valuable companies. Microsoft was founded by Paul Allen and Bill Gates on April 4,1975, to develop and it rose to dominate the personal computer operating system market with MS-DOS in the mid-1980s, followed by Microsoft Windows. The companys 1986 initial public offering, and subsequent rise in its share price, since the 1990s, it has increasingly diversified from the operating system market and has made a number of corporate acquisitions. In May 2011, Microsoft acquired Skype Technologies for $8.5 billion, in June 2012, Microsoft entered the personal computer production market for the first time, with the launch of the Microsoft Surface, a line of tablet computers. The word Microsoft is a portmanteau of microcomputer and software, Paul Allen and Bill Gates, childhood friends with a passion for computer programming, sought to make a successful business utilizing their shared skills. In 1972 they founded their first company, named Traf-O-Data, which offered a computer that tracked and analyzed automobile traffic data. Allen went on to pursue a degree in science at Washington State University. The January 1975 issue of Popular Electronics featured Micro Instrumentation and Telemetry Systemss Altair 8800 microcomputer, Allen suggested that they could program a BASIC interpreter for the device, after a call from Gates claiming to have a working interpreter, MITS requested a demonstration. Since they didnt actually have one, Allen worked on a simulator for the Altair while Gates developed the interpreter and they officially established Microsoft on April 4,1975, with Gates as the CEO. 
Allen came up with the name of Micro-Soft, as recounted in a 1995 Fortune magazine article. In August 1977 the company formed an agreement with ASCII Magazine in Japan, resulting in its first international office, the company moved to a new home in Bellevue, Washington in January 1979. Microsoft entered the OS business in 1980 with its own version of Unix, however, it was MS-DOS that solidified the companys dominance. For this deal, Microsoft purchased a CP/M clone called 86-DOS from Seattle Computer Products, branding it as MS-DOS, following the release of the IBM PC in August 1981, Microsoft retained ownership of MS-DOS. Since IBM copyrighted the IBM PC BIOS, other companies had to engineer it in order for non-IBM hardware to run as IBM PC compatibles. Due to various factors, such as MS-DOSs available software selection, the company expanded into new markets with the release of the Microsoft Mouse in 1983, as well as with a publishing division named Microsoft Press. Paul Allen resigned from Microsoft in 1983 after developing Hodgkins disease, while jointly developing a new OS with IBM in 1984, OS/2, Microsoft released Microsoft Windows, a graphical extension for MS-DOS, on November 20,1985. Once Microsoft informed IBM of NT, the OS/2 partnership deteriorated, in 1990, Microsoft introduced its office suite, Microsoft Office

2.
User interface
–
The user interface, in the industrial design field of human–computer interaction, is the space where interactions between humans and machines occur. Examples of this concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls. The design considerations applicable when creating user interfaces are related to or involve such disciplines as ergonomics and psychology. Generally, the goal of user interface design is to produce a user interface makes it easy, efficient. This generally means that the needs to provide minimal input to achieve the desired output. Other terms for user interface are man–machine interface and when the machine in question is a computer human–computer interface, the user interface or human–machine interface is the part of the machine that handles the human–machine interaction. Membrane switches, rubber keypads and touchscreens are examples of the part of the Human Machine Interface which we can see. In complex systems, the interface is typically computerized. The term human–computer interface refers to this kind of system, in the context of computing the term typically extends as well to the software dedicated to control the physical elements used for human-computer interaction. The engineering of the interfaces is enhanced by considering ergonomics. The corresponding disciplines are human factors engineering and usability engineering, which is part of systems engineering, tools used for incorporating human factors in the interface design are developed based on knowledge of computer science, such as computer graphics, operating systems, programming languages. Nowadays, we use the graphical user interface for human–machine interface on computers. There is a difference between a user interface and an interface or a human–machine interface. 
A human-machine interface is typically local to one machine or piece of equipment, an operator interface is the interface method by which multiple equipment that are linked by a host control system is accessed or controlled. The system may expose several user interfaces to serve different kinds of users, for example, a computerized library database might provide two user interfaces, one for library patrons and the other for library personnel. The user interface of a system, a vehicle or an industrial installation is sometimes referred to as the human–machine interface. HMI is a modification of the original term MMI, in practice, the abbreviation MMI is still frequently used although some may claim that MMI stands for something different now. Another abbreviation is HCI, but is commonly used for human–computer interaction

3.
Cryptography
–
Cryptography or cryptology is the practice and study of techniques for secure communication in the presence of third parties called adversaries. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, Applications of cryptography include ATM cards, computer passwords, and electronic commerce. Cryptography prior to the age was effectively synonymous with encryption. The originator of an encrypted message shared the decoding technique needed to recover the information only with intended recipients. The cryptography literature often uses Alice for the sender, Bob for the intended recipient and it is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. The growth of technology has raised a number of legal issues in the information age. Cryptographys potential for use as a tool for espionage and sedition has led governments to classify it as a weapon and to limit or even prohibit its use. In some jurisdictions where the use of cryptography is legal, laws permit investigators to compel the disclosure of encryption keys for documents relevant to an investigation, Cryptography also plays a major role in digital rights management and copyright infringement of digital media. Until modern times, cryptography referred almost exclusively to encryption, which is the process of converting ordinary information into unintelligible text, decryption is the reverse, in other words, moving from the unintelligible ciphertext back to plaintext. A cipher is a pair of algorithms that create the encryption, the detailed operation of a cipher is controlled both by the algorithm and in each instance by a key. The key is a secret, usually a short string of characters, historically, ciphers were often used directly for encryption or decryption without additional procedures such as authentication or integrity checks. 
There are two kinds of cryptosystems, symmetric and asymmetric, in symmetric systems the same key is used to encrypt and decrypt a message. Data manipulation in symmetric systems is faster than asymmetric systems as they generally use shorter key lengths, asymmetric systems use a public key to encrypt a message and a private key to decrypt it. Use of asymmetric systems enhances the security of communication, examples of asymmetric systems include RSA, and ECC. Symmetric models include the commonly used AES which replaced the older DES, in colloquial use, the term code is often used to mean any method of encryption or concealment of meaning. However, in cryptography, code has a specific meaning. It means the replacement of a unit of plaintext with a code word, English is more flexible than several other languages in which cryptology is always used in the second sense above. RFC2828 advises that steganography is sometimes included in cryptology, the study of characteristics of languages that have some application in cryptography or cryptology is called cryptolinguistics

4.
Computer network
–
A computer network or data network is a telecommunications network which allows nodes to share resources. In computer networks, networked computing devices exchange data with other using a data link. The connections between nodes are established using either cable media or wireless media, the best-known computer network is the Internet. Network computer devices that originate, route and terminate the data are called network nodes, nodes can include hosts such as personal computers, phones, servers as well as networking hardware. Two such devices can be said to be networked together when one device is able to exchange information with the other device, Computer networks differ in the transmission medium used to carry their signals, communications protocols to organize network traffic, the networks size, topology and organizational intent. In most cases, application-specific communications protocols are layered over other more general communications protocols and this formidable collection of information technology requires skilled network management to keep it all running reliably. The chronology of significant computer-network developments includes, In the late 1950s, in 1960, the commercial airline reservation system semi-automatic business research environment went online with two connected mainframes. Licklider developed a group he called the Intergalactic Computer Network. In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of computer systems. The same year, at Massachusetts Institute of Technology, a group supported by General Electric and Bell Labs used a computer to route. Throughout the 1960s, Leonard Kleinrock, Paul Baran, and Donald Davies independently developed network systems that used packets to transfer information between computers over a network, in 1965, Thomas Marill and Lawrence G. Roberts created the first wide area network. 
This was an precursor to the ARPANET, of which Roberts became program manager. Also in 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control, in 1972, commercial services using X.25 were deployed, and later used as an underlying infrastructure for expanding TCP/IP networks. In July 1976, Robert Metcalfe and David Boggs published their paper Ethernet, Distributed Packet Switching for Local Computer Networks, in 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s, by 1998, Ethernet supported transmission speeds of a Gigabit. Subsequently, higher speeds of up to 100 Gbit/s were added, the ability of Ethernet to scale easily is a contributing factor to its continued use. Providing access to information on shared storage devices is an important feature of many networks, a network allows sharing of files, data, and other types of information giving authorized users the ability to access information stored on other computers on the network

5.
Mobile computing
–
Mobile computing is human–computer interaction by which a computer is expected to be transported during normal usage, which allows for transmission of data, voice, and video. Mobile computing involves mobile communication, mobile hardware, and mobile software. Communication issues include ad hoc networks and infrastructure networks as well as communication properties, protocols, data formats, and concrete technologies. Hardware includes mobile devices or device components; mobile software deals with the characteristics and requirements of mobile applications. The main principles of mobile computing are: portability, meaning devices/nodes connected within the mobile computing system should facilitate movement within the computing environment, even though they may have limited device capabilities and a limited power supply; connectivity, which defines the quality of service of the network connectivity; interactivity, meaning the nodes belonging to a computing system are connected with one another to communicate and collaborate through active transactions of data; and individuality, meaning the technology is adapted to suit individual needs. Mobile phones include a key set primarily, but not exclusively, intended for vocal communications; the class spans smartphones, cell phones, and feature phones. The existence of these classes is expected to be long lasting, and these networks are usually available within range of commercial cell phone towers. High-speed wireless LANs are inexpensive but have limited range. Security standards: when working mobile, one is dependent on public networks, so security is a major concern; a VPN can be attacked through any of the many networks interconnected along the line.
Power consumption: when a power outlet or portable generator is not available, mobile computers must rely entirely on battery power; combined with the compact size of many mobile devices, this often means unusually expensive batteries must be used to obtain the necessary battery life. Transmission interference: weather, terrain, and the range from the nearest signal point can all interfere with signal reception, and reception in tunnels, some buildings, and rural areas is often poor. Potential health hazards: people who use mobile devices while driving are often distracted from driving and are thus assumed more likely to be involved in traffic accidents; cell phones may interfere with sensitive medical devices, and questions concerning mobile phone radiation and health have been raised. Human interface with device: screens and keyboards tend to be small, which may make them hard to use, and alternate input methods such as speech or handwriting recognition require training. Many commercial and government field forces deploy rugged portable computers with their fleets of vehicles; this requires the units to be anchored to the vehicle for safety, and raises issues of device security.

6.
Embedded system
–
An embedded system is a computer system with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a complete device, often including hardware and mechanical parts. Embedded systems control many devices in common use today; ninety-eight percent of all microprocessors are manufactured as components of embedded systems. Properties typical of embedded computers, when compared with general-purpose counterparts, are low power consumption, small size, rugged operating ranges, and low per-unit cost. This comes at the price of limited processing resources, which make them more difficult to program; for example, intelligent techniques can be designed to manage the power consumption of embedded systems. Modern embedded systems are often based on microcontrollers, but ordinary microprocessors are also common. In either case, the processor used may range from general purpose to one specialised in a certain class of computations. A common standard class of dedicated processors is the digital signal processor. Since the embedded system is dedicated to specific tasks, design engineers can optimize it to reduce the size and cost of the product and increase its reliability. Some embedded systems are mass-produced, benefiting from economies of scale. Complexity varies from low, with a single microcontroller chip, to very high, with multiple units, peripherals, and networks mounted inside a large chassis or enclosure. One of the very first recognizably modern embedded systems was the Apollo Guidance Computer; an early mass-produced embedded system was the Autonetics D-17 guidance computer for the Minuteman missile, released in 1961. When the Minuteman II went into production in 1966, the D-17 was replaced with a new computer that was the first high-volume use of integrated circuits. Since these early applications in the 1960s, embedded systems have come down in price, and there has been a rise in processing power.
An early microprocessor, the Intel 4004, was designed for calculators and other small systems but still required external memory and support chips. By the early 1980s, memory, input, and output system components had been integrated into the same chip as the processor, forming a microcontroller. Microcontrollers find applications where a general-purpose computer would be too costly. A comparatively low-cost microcontroller may be programmed to fulfill the same role as a large number of separate components.
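The dedicated-function character of an embedded system can be illustrated with a sense-decide-actuate control loop. The sketch below, written in Python for readability (real firmware for such a controller would more typically be C on a microcontroller), implements a hypothetical bang-bang thermostat with hysteresis, the kind of single-purpose logic a small embedded controller runs:

```python
class Thermostat:
    """A dedicated-function controller: read a sensor, decide, drive an actuator."""
    def __init__(self, setpoint, hysteresis=0.5):
        self.setpoint = setpoint
        self.hysteresis = hysteresis
        self.heater_on = False

    def step(self, temperature):
        # Bang-bang control with hysteresis: switch on below the lower
        # threshold, off above the upper one, otherwise keep the last state.
        if temperature < self.setpoint - self.hysteresis:
            self.heater_on = True
        elif temperature > self.setpoint + self.hysteresis:
            self.heater_on = False
        return self.heater_on

# Simulated sensor readings driving the control loop.
t = Thermostat(setpoint=20.0)
readings = [18.0, 19.4, 20.6, 21.0, 19.0]
states = [t.step(r) for r in readings]
print(states)  # [True, True, False, False, True]
```

On real hardware the readings would come from an analog-to-digital converter and the boolean output would drive a relay pin; the control logic itself is this small, which is why such systems fit on low-cost microcontrollers.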

7.
Smartphone
–
A smartphone is a mobile phone with an advanced mobile operating system that combines features of a personal computer operating system with other features useful for mobile or handheld use. Smartphones can access the Internet and can run a variety of third-party software components, and they typically have a color display with a graphical user interface that covers more than 76% of the front surface. In 1999, the Japanese firm NTT DoCoMo released the first smartphones to achieve mass adoption within a country; smartphones became widespread in the late 2000s. Most of those produced from 2012 onward have high-speed mobile broadband 4G LTE and motion sensors. In the third quarter of 2012, one billion smartphones were in use worldwide, and global smartphone sales surpassed the sales figures for regular cell phones in early 2013. Devices that combined telephony and computing were first conceptualized by Nikola Tesla in 1909 and Theodore Paraskevakos in 1971, patented in 1974, and offered for sale beginning in 1993. Paraskevakos was the first to introduce the concepts of intelligence and data processing in telephony; his devices were installed at Peoples' Telephone Company in Leesburg, Alabama, and were demonstrated to several telephone companies. The original and historic working models are still in the possession of Paraskevakos. The first mobile phone to incorporate PDA features was a prototype developed by Frank Canova in 1992 while at IBM and demonstrated that year at the COMDEX computer industry trade show. It included PDA features and other mobile applications such as maps and stock reports. A refined version was marketed to consumers in 1994 by BellSouth under the name Simon Personal Communicator; the Simon was the first commercially available device that could be properly referred to as a smartphone, although it was not called that in 1994.
The term "smart phone" appeared in print as early as 1995. In the mid-to-late 1990s, many mobile phone users carried a separate dedicated PDA device, running early versions of operating systems such as Palm OS, BlackBerry OS, or Windows CE/Pocket PC; these operating systems would later evolve into mobile operating systems. In March 1996, Hewlett-Packard released the OmniGo 700LX, a modified HP 200LX palmtop PC that supported a Nokia 2110 phone, with ROM-based software to support it. It had a 640×200 resolution, CGA-compatible, four-shade gray-scale LCD screen and could be used to place and receive calls. It was also 100% DOS 5.0 compatible, allowing it to run thousands of existing software titles, including early versions of Windows. In August 1996, Nokia released the Nokia 9000 Communicator, a cellular phone based on the Nokia 2110 with an integrated PDA based on the PEN/GEOS 3.0 operating system from Geoworks. The two components were attached by a hinge in what became known as a clamshell design, with the display above. The PDA provided e-mail, calendar, address book, calculator, and notebook applications, as well as text-based Web browsing; when closed, the device could be used as a digital cellular phone. In June 1999, Qualcomm released the pdQ Smartphone, a CDMA digital PCS smartphone with an integrated Palm PDA. Subsequent landmark devices included the Ericsson R380 by Ericsson Mobile Communications, the first device marketed as a smartphone, which combined the functions of a phone and PDA, and the Kyocera 6035, introduced by Palm, Inc., which combined a PDA with a mobile phone, operated on the Verizon network, and supported limited Web browsing.

8.
Android (operating system)
–
Android is a mobile operating system developed by Google, based on the Linux kernel and designed primarily for touchscreen mobile devices such as smartphones and tablets. In addition to these devices, Google has further developed Android TV for televisions and Android Auto for cars, and variants of Android are also used on notebooks, game consoles, and digital cameras. Beginning with the first commercial Android device in September 2008, the operating system has gone through multiple major releases, with the current version being 7.0 Nougat, released in August 2016. Android applications can be downloaded from the Google Play store, which features over 2.7 million apps as of February 2017. Android has been the best-selling OS on tablets since 2013, and runs on the vast majority of smartphones; in September 2015, Android had 1.4 billion monthly active users. Android is popular with technology companies that require a ready-made, low-cost, and customizable operating system for high-tech devices, and its success has made it a target for patent litigation. Android Inc. was founded in Palo Alto, California, in October 2003 by Andy Rubin, Rich Miner, Nick Sears, and Chris White. Rubin described the Android project as having tremendous potential in developing smarter mobile devices that are aware of their owner's location. The early intentions of the company were to develop an operating system for digital cameras. Despite the past accomplishments of the founders and early employees, Android Inc. operated secretly, and that same year Rubin ran out of money; Steve Perlman, a friend of Rubin, brought him $10,000 in cash in an envelope. In July 2005, Google acquired Android Inc. for at least $50 million, and its key employees, including Rubin, Miner, and White, joined Google as part of the acquisition.
Not much was known about Android at the time, with Rubin having only stated that they were making software for mobile phones. At Google, the team led by Rubin developed a mobile device platform powered by the Linux kernel. Google marketed the platform to handset makers and carriers on the promise of providing a flexible, upgradeable system; it had lined up a series of hardware component and software partners and signaled to carriers that it was open to various degrees of cooperation. Speculation about Google's intention to enter the communications market continued to build through December 2006, and in September 2007, InformationWeek covered an Evalueserve study reporting that Google had filed several patent applications in the area of mobile telephony. The first commercially available smartphone running Android was the HTC Dream, also known as the T-Mobile G1, announced on September 23, 2008. Since 2008, Android has seen numerous updates which have improved the operating system, adding new features. Each major release is named in order after a dessert or sugary treat, with the first few Android versions being called Cupcake, Donut, and Eclair. In 2010, Google launched its Nexus series of devices, a lineup in which Google partnered with different device manufacturers to produce new devices and introduce new Android versions.

9.
Hewlett-Packard
–
The Hewlett-Packard Company, often shortened to Hewlett-Packard or HP, was an American multinational information technology company headquartered in Palo Alto, California. The company was founded in a garage in Palo Alto by William "Bill" Redington Hewlett and David "Dave" Packard. HP was the world's leading PC manufacturer from 2007 to Q2 2013. It specialized in developing and manufacturing computing, data storage, and networking hardware, designing software, and delivering services, and it also had services and consulting businesses around its products and partner products. In November 2009, HP announced the acquisition of 3Com, with the deal closing on April 12, 2010. On April 28, 2010, HP announced the buyout of Palm, and on September 2, 2010, HP won its bidding war for 3PAR with a $33-a-share offer, which Dell declined to match. On October 6, 2014, Hewlett-Packard announced plans to split the PC and printers business from its enterprise products; the split closed on November 1, 2015, and resulted in two publicly traded companies, HP Inc. and Hewlett Packard Enterprise. William Redington Hewlett and David Packard graduated with degrees in engineering from Stanford University in 1935. The company originated in a garage in nearby Palo Alto during a fellowship they had with a past professor, Frederick Terman, who was considered a mentor to them in forming Hewlett-Packard. In 1939, Packard and Hewlett established Hewlett-Packard in Packard's garage with a capital investment of US$538. Hewlett and Packard tossed a coin to decide whether the company they founded would be called Hewlett-Packard or Packard-Hewlett. HP incorporated on August 18, 1947, and went public on November 6, 1957. Of the many projects they worked on, their very first financially successful product was an audio oscillator.
This allowed them to sell the Model 200A for $54.40 when competitors were selling less stable oscillators for over $200. The Model 200 series of generators continued until at least 1972 as the 200AB, still tube-based but improved in design through the years. During World War II, they worked on counter-radar technology and artillery shell fuses. Hewlett-Packard's HP Associates division, established around 1960, developed semiconductor devices primarily for internal use; instruments and calculators were some of the products using these devices. HP partnered in the 1960s with Sony and the Yokogawa Electric companies in Japan to develop several high-quality products, but the products were not a huge success, as there were high costs in building HP-looking products in Japan. HP and Yokogawa formed a joint venture in 1963 to market HP products in Japan; HP bought Yokogawa Electric's share of Hewlett-Packard Japan in 1999. HP spun off a company, Dynac, to specialize in digital equipment. The name was picked so that the HP logo "hp" could be turned upside down to be a reverse image of the logo "dy" of the new company.

10.
Intel
–
Intel Corporation is an American multinational corporation and technology company headquartered in Santa Clara, California, that was founded by Gordon Moore and Robert Noyce. It is one of the world's largest and highest-valued semiconductor chip makers based on revenue, and is the inventor of the x86 series of microprocessors. Intel supplies processors for computer system manufacturers such as Apple, Lenovo, HP, and Dell. Intel Corporation was founded on July 18, 1968, by semiconductor pioneers Robert Noyce and Gordon Moore. The company's name was conceived as a portmanteau of the words "integrated" and "electronics"; the fact that "intel" is the term for intelligence information also made the name appropriate. Intel was an early developer of SRAM and DRAM memory chips, and it created the world's first commercial microprocessor chip in 1971. During the 1990s, Intel invested heavily in new microprocessor designs, fostering the rapid growth of the computer industry. The Open Source Technology Center at Intel hosts PowerTOP and LatencyTOP, and supports other projects such as Wayland, Intel Array Building Blocks, and Threading Building Blocks. Intel's operating segments in 2016 were: the Client Computing Group (55% of 2016 revenues), which produces hardware components used in desktop and notebook platforms; the Data Center Group (29%), which produces hardware components used in server, network, and storage platforms; the Internet of Things Group (5%), which offers platforms designed for retail, transportation, industrial, and building applications; the Non-Volatile Memory Solutions Group (4%), which manufactures NAND flash memory products primarily used in solid-state drives; the Intel Security Group (4%), which produces software, particularly security software; and the Programmable Solutions Group (3%), which manufactures programmable semiconductors. In 2016, Dell accounted for 15% of Intel's total revenues and Lenovo accounted for 13%. In the 1980s, Intel was among the top ten sellers of semiconductors in the world.
In 1991, Intel became the biggest chip maker by revenue and has held the position ever since. Other top semiconductor companies include TSMC, Advanced Micro Devices, Samsung, Texas Instruments, Toshiba, and STMicroelectronics. Competitors in PC chipsets include Advanced Micro Devices, VIA Technologies, and Silicon Integrated Systems; however, the cross-licensing agreement with AMD is canceled in the event of an AMD bankruptcy or takeover. Some smaller competitors such as VIA Technologies produce low-power x86 processors for small-form-factor computers. However, the advent of mobile computing devices, in particular smartphones, has in recent years led to a decline in PC sales, and since over 95% of the world's smartphones currently use processors designed by ARM Holdings, ARM is also planning to make inroads into the PC and server market. Intel has been involved in disputes regarding violation of antitrust laws. Intel was founded in Mountain View, California, in 1968 by Gordon E. Moore, a chemist, and Robert Noyce, a physicist; Arthur Rock helped them find investors, while Max Palevsky was on the board from an early stage. Moore and Noyce had left Fairchild Semiconductor to found Intel. Rock was not an employee, but he was an investor and was chairman of the board.

11.
Computer hardware
–
Computer hardware is the collection of physical components that constitute a computer system. By contrast, software consists of instructions that can be stored and run by hardware; hardware is directed by the software to execute any command or instruction, and a combination of hardware and software forms a usable computing system. The template for all modern computers is the Von Neumann architecture, detailed in a 1945 paper by Hungarian mathematician John von Neumann. The meaning of the term has evolved to mean a stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus; this is referred to as the Von Neumann bottleneck and often limits the performance of the system. For the third year in a row, U.S. business-to-business channel sales of hardware increased. The impressive growth was the fastest sales increase since the end of the recession; sales growth accelerated in the second half of the year, peaking in the fourth quarter with a 6.9 percent increase over the fourth quarter of 2012. There are a number of different types of computer system in use today. The personal computer, also known as the PC, is one of the most common types of computer due to its versatility; laptops are generally very similar, although they may use lower-power or reduced-size components, and thus offer lower performance. The computer case is a plastic or metal enclosure that houses most of the components; a case can be either big or small, but the form factor of the motherboard for which it is designed matters more. A power supply unit converts alternating current electric power to low-voltage DC power for the components of the computer; laptops are capable of running from a battery, normally for a period of hours. The motherboard is the main component of a computer; it connects the CPU, memory, and peripherals. The CPU is usually cooled by a heatsink and fan, or by a water-cooling system, and most newer CPUs include an on-die graphics processing unit.
The clock speed of a CPU governs how fast it executes instructions; many modern computers have the option to overclock the CPU, which enhances performance at the expense of greater thermal output and thus a need for improved cooling. The chipset, which includes the northbridge, mediates communication between the CPU and the other components of the system, including main memory. Random-access memory (RAM) stores the code and data that are being actively accessed by the CPU; for example, when a web browser is opened on the computer, it takes up memory. RAM usually comes on DIMMs in sizes such as 2 GB, 4 GB, and 8 GB, but can be much larger. Read-only memory (ROM) stores the BIOS that runs when the computer is powered on or otherwise begins execution; the BIOS includes boot firmware and power management firmware.
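The stored-program idea behind the Von Neumann architecture, and its bottleneck, can be made concrete with a toy machine in which instructions and data occupy the same memory and share a single fetch path. This is an illustrative sketch with an invented four-instruction set, not a model of any real CPU:

```python
# Minimal stored-program machine: instructions and data share one memory,
# and every access (instruction fetch or data read/write) goes through the
# same lookup -- the shared path behind the Von Neumann bottleneck.
def run(memory):
    acc, pc = 0, 0                     # accumulator and program counter
    while True:
        op, arg = memory[pc]           # fetch an instruction from shared memory
        pc += 1
        if op == "LOAD":               # read a data cell from the same memory
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":            # write a data cell in the same memory
            memory[arg] = acc
        elif op == "HALT":
            return memory

mem = {
    0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", None),
    10: 2, 11: 3, 12: 0,               # data cells: 2 + 3 -> cell 12
}
result = run(mem)
print(result[12])  # 5
```

Because the program itself lives in memory, a stored-program machine can load or even modify its own code, which is exactly what distinguishes the Von Neumann design from fixed-function hardware.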

12.
Web browser
–
A web browser is a software application for retrieving, presenting, and traversing information resources on the World Wide Web. An information resource is identified by a Uniform Resource Identifier (URI) and may be a web page, image, or other piece of content; hyperlinks present in resources enable users easily to navigate their browsers to related resources. Although browsers are primarily intended to access the World Wide Web, the most popular web browsers are Google Chrome, Microsoft Edge, Safari, Opera, and Firefox. The first web browser was invented in 1990 by Sir Tim Berners-Lee. Berners-Lee is the director of the World Wide Web Consortium, which oversees the Web's continued development, and is also the founder of the World Wide Web Foundation. His browser was called WorldWideWeb and later renamed Nexus. The first commonly available web browser with a graphical user interface was Erwise; its development was initiated by Robert Cailliau. Marc Andreessen's browser sparked the internet boom of the 1990s: the introduction of his Mosaic in 1993, one of the first graphical web browsers, led to an explosion in web use. Microsoft responded with its Internet Explorer in 1995, also heavily influenced by Mosaic, initiating the industry's first browser war. Bundled with Windows, Internet Explorer gained dominance in the web browser market; its usage share peaked at over 95% by 2002. Opera debuted in 1996; it has never achieved widespread use, but it is available on several embedded systems, including Nintendo's Wii video game console. In 1998, Netscape launched what was to become the Mozilla Foundation in an attempt to produce a competitive browser using the open-source software model; as of August 2011, Firefox has a 28% usage share. Apple's Safari had its first beta release in January 2003. The most recent major entrant to the browser market is Chrome, first released in September 2008.
Chrome's take-up has increased year by year, doubling its usage share from 8% to 16% by August 2011. This increase seems largely to be at the expense of Internet Explorer: in December 2011, Chrome overtook Internet Explorer 8 as the most widely used web browser, but still had lower usage than all versions of Internet Explorer combined. Chrome's user base continued to grow, and in May 2012, Chrome's usage passed the usage of all versions of Internet Explorer combined; by April 2014, Chrome's usage had hit 45%. Internet Explorer was deprecated in Windows 10, with Microsoft Edge replacing it as the default web browser. The ways that web browser makers fund their development costs have changed over time. The first web browser, WorldWideWeb, was a research project. Netscape Navigator and Opera, in addition to being available as freeware, were also sold commercially. Internet Explorer, on the other hand, was bundled free with the Windows operating system, and therefore it was funded partly by the sales of Windows to computer manufacturers and directly to users. Internet Explorer also used to be available for the Mac; in this respect, IE may have contributed to Windows and Microsoft applications sales in another way, through lock-in to Microsoft's browser.
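Two of the core browser tasks named above, identifying resources by URI and traversing hyperlinks, can be sketched with Python's standard library. The snippet below parses a small hypothetical HTML page and resolves each link against the page's base URI, which is how a browser turns a relative `href` into the address of the next resource (an illustration of link traversal, not a full browser):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect hyperlink targets, resolved against the page's base URI."""
    def __init__(self, base_uri):
        super().__init__()
        self.base_uri = base_uri
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Every <a href="..."> is a traversable edge to another resource.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(urljoin(self.base_uri, value))

# A hypothetical page: one relative link, one absolute link.
page = '<html><body><a href="/about">About</a> <a href="https://example.org/x">X</a></body></html>'
parser = LinkExtractor("https://example.com/index.html")
parser.feed(page)
print(parser.links)  # ['https://example.com/about', 'https://example.org/x']
```

A real browser layers retrieval (HTTP), rendering, and scripting on top of this, but resource identification and link resolution follow the same URI rules shown here.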


The Galaxy Nexus, capable of web browsing, e-mail access, video playback, document editing, file transfer, image editing, among many other tasks common on smartphones. A smartphone is a tool of mobile computing.