Thursday, April 30, 2009

11) HISTORY OF THE GUI

by engr. AFAN BAHADUR KHAN



The graphical user interface (GUI), understood as the use of graphic icons and a pointing device to control a computer, has seen over the last four decades a steady history of incremental refinements built on some constant core principles. Several vendors have created their own windowing systems based on independent code, but sharing the same basic elements that define the WIMP paradigm. There have been important technological achievements, and enhancements to the general interaction have come in small steps over previous systems; there have also been a few significant breakthroughs in terms of use, but the same organizational metaphors and interaction idioms are still in use.


Initial developments


Early dynamic information devices such as radar displays, where input devices were used for direct control of computer-created data, set the basis for later improvements of graphical interfaces.

The concept of a windowing system was introduced by the first real-time graphic display systems for computers: the SAGE Project and Ivan Sutherland's Sketchpad.


Augmentation of Human Intellect (NLS)




Doug Engelbart's Augmentation of Human Intellect project at SRI in the 1960s developed the On-Line System (NLS), which incorporated a mouse-driven cursor and multiple windows used to work on hypertext. Engelbart had been inspired, in part, by the memex, a desk-based information machine suggested by Vannevar Bush in 1945. Much of the early research was based on how young humans learn.


Xerox PARC




Engelbart's work directly led to the advances at Xerox PARC. Several people went from SRI to Xerox PARC in the early 1970s. In 1973 Xerox PARC developed the Xerox Alto personal computer. It was the first computer to use the desktop metaphor and graphical user interface (GUI). It was not a commercial product, but several thousand units were built and were heavily used at PARC and at several universities for many years. The Alto greatly influenced the design of personal computers in the following decades, notably the Macintosh and the first Sun workstations.

In 1974, work began on Gypsy, the first bitmap What-You-See-Is-What-You-Get (WYSIWYG) cut & paste editor. In 1975, Xerox engineers demonstrated a Graphical User Interface "including icons and the first use of pop-up menus".



The 80s: Early commercial developments



Apple Lisa and Macintosh (and later, the Apple IIgs)






The graphical user interface was first developed at Xerox PARC by Alan Kay and a group of other researchers, building on earlier work by Douglas Engelbart at SRI. A GUI uses windows, icons, and menus to carry out commands such as opening, deleting, and moving files, and although many GUI operating systems are operated mainly with a mouse, the keyboard can also be used via keyboard shortcuts or arrow keys.

Beginning in 1979, started by Steve Jobs and led by Jef Raskin, the Lisa and Macintosh teams at Apple Computer (which included former members of the Xerox PARC group) continued to develop such ideas. The Macintosh, released in 1984, was the first commercially successful product to use a GUI. It employed a desktop metaphor in which files looked like pieces of paper and directories looked like file folders; there was a set of desk accessories, such as a calculator, notepad, and alarm clock, that the user could place around the screen as desired; and the user could delete files and folders by dragging them to a trash can on the screen. Drop-down menus were also introduced.

There is still some controversy over the amount of influence that Xerox's PARC work, as opposed to previous academic research, had on the GUIs of Apple's Lisa and Macintosh, but it is clear that the influence was extensive: the first versions of the Lisa GUI even lacked icons, and these early mouse-driven prototypes did not yet follow the WIMP concept. Note also that Apple was invited by PARC to view their research, and a number of PARC employees subsequently moved to Apple to work on the Lisa and Macintosh GUI. However, the Apple work extended PARC's considerably, adding manipulable icons, a fixed menu bar, and direct manipulation of objects in the file system.





In 1986 the Apple IIgs was launched, a very advanced model in the successful Apple II series, based on 16-bit technology (in fact, virtually two machines in one). It came with a new operating system, Apple GS/OS, which featured a Finder-like GUI, very similar to that of the Macintosh series, able to exploit the advanced graphics capabilities of its Video Graphics Chip (VGC).


Graphical Environment Manager (GEM)





Digital Research (DRI) created the Graphical Environment Manager as an add-on program for personal computers. GEM was developed to work with existing CP/M and MS-DOS operating systems on business computers such as IBM-compatibles. It was developed from DRI software, known as GSX, designed by a former PARC employee. The similarity to the Macintosh desktop led to a copyright lawsuit from Apple Computer, and a settlement which involved some changes to GEM. This was to be the first of a series of 'look and feel' lawsuits related to GUI design in the 1980s.

GEM received widespread use in the consumer market from 1985, when it was made the default user interface built into the TOS operating system of the Atari ST line of personal computers. It was also bundled by other computer manufacturers and distributors, such as Amstrad. Later, it was distributed with the best-selling Digital Research version of DOS for IBM PC compatibles, DR-DOS 6.0. The GEM desktop faded from the market with the withdrawal of the Atari ST line in 1992 and with the rising popularity of Microsoft Windows 3.0 on the PC front around the same time.



DeskMate






Tandy's DeskMate appeared in the early 1980s on its TRS-80 machines and was ported to its Tandy 1000 range in 1984. Like most PC GUIs of the time it depended on MS-DOS. The application was popular at the time and included a number of programs like Draw, Text and Calendar, as well as attracting outside software such as Lotus 1-2-3 for DeskMate.




Amiga Intuition and the Workbench





The Amiga computer was launched by Commodore in 1985 with a GUI called Workbench, based on an internal engine called Intuition which drives all input events and was developed almost entirely by RJ Mical. The first versions used a blue/orange/white/black default palette, which was selected for high contrast on televisions and composite monitors. Workbench presented directories as drawers to fit in with the "workbench" theme. Intuition was the widget and graphics library that made the GUI work. It was driven by user events through the mouse, keyboard, and other input devices.

Due to a mistake made by the Commodore sales department, the first floppies of AmigaOS released with the Amiga 1000 named the whole OS "Workbench". Since then, users and CBM itself have used "Workbench" as a nickname for the whole AmigaOS (including AmigaDOS, Extras, etc.). This usage ended with the release of version 2.0 of AmigaOS, which re-introduced proper names on the installation floppies for AmigaDOS, Workbench, Extras, etc.

Early versions of AmigaOS treated the Workbench as just another window on top of a blank screen; this was possible because AmigaOS could handle invisible screens combined via chroma key or genlock (one of the most advanced features of the Amiga platform) without losing visibility of the Workbench itself. In later AmigaOS versions the Workbench could be set as a borderless desktop.

Amiga users were able to boot their computer into a command-line interface (CLI/shell), a keyboard-based environment without the Workbench GUI. From there, the Workbench GUI could later be loaded by invoking the LoadWB command.

Like most GUIs of the day, Amiga's Intuition followed Xerox's, and sometimes Apple's, lead, but a CLI was included which dramatically extended the functionality of the platform. The Amiga CLI/shell is not just a simple text-based interface as in MS-DOS: it is another graphical process driven by the Intuition engine, using the same gadgets provided by the Amiga graphics.library, and it integrates with the Workbench, sharing the same privileges as the GUI.



MS-DOS file managers and utility suites





Because most of the very early IBM PCs and compatibles lacked any common true graphical capability (they shared only the 80-column basic text mode compatible with the original MDA display adapter), a series of file managers arose, including Microsoft's DOS Shell, which featured typical GUI elements such as menus, push buttons, lists with scrollbars, and a mouse pointer. The term text user interface (TUI) was later coined for this kind of interface. Many MS-DOS text-mode applications, like the default text editor for MS-DOS 5.0 (and related tools, like QBasic), shared the same approach. The IBM DOS Shell included with IBM DOS 5.0 (circa 1992) supported both text display modes and actual graphics display modes, making it both a TUI and a GUI, depending on the chosen mode.

Advanced file managers for MS-DOS were able to redefine character shapes with EGA and better display adapters, giving some basic low-resolution icons and graphical interface elements, including an arrow (instead of a coloured cell block) for the mouse pointer. When the display adapter lacked the ability to change character shapes, these programs defaulted to the CP437 character set found in the adapter's ROM. Some popular utility suites for MS-DOS, such as Norton Utilities and PC Tools, used these techniques as well.


DESQview was a text mode multitasking program introduced in July 1985. Running on top of MS-DOS, it allowed users to run multiple DOS programs concurrently in windows. It was the first program to bring multitasking and windowing capabilities to a DOS environment in which existing DOS programs could be used. DESQview was not a true GUI but offered certain components of one, such as resizable, overlapping windows and mouse pointing.




Applications under MS-DOS with proprietary true GUIs




In the absence of a true common GUI under MS-DOS, most graphical applications that worked with EGA, VGA and better graphics cards had proprietary built-in GUIs before the Microsoft Windows age. One of the best known was Deluxe Paint, a popular painting program with a typical WIMP interface.

The original Adobe Acrobat Reader executable file for MS-DOS was able to run on both the standard Windows 3.x GUI and the standard DOS command prompt. When launched from the command prompt, it provided its own true GUI (on VGA), offering its full functionality for reading PDF files.


Microsoft Windows (16-bit versions)





Windows 1.0 was a GUI for the MS-DOS operating system that had been the OS of choice for IBM PC and compatible computers since 1981. Windows 2.0 followed, but it wasn't until the 1990 launch of Windows 3.0, based on Common User Access, that its popularity truly exploded. The GUI saw minor redesigns after that, mainly the networking-enabled Windows 3.11 and its Win32s 32-bit patch. The 16-bit line of MS Windows was discontinued with the introduction of Windows 95 and the 32-bit Windows NT architecture in the 1990s.





The main window of a given application can occupy the full screen when maximized. The user must then switch between maximized applications using the Alt+Tab keyboard shortcut; there is no mouse alternative short of un-maximizing a window. When none of the running application windows is maximized, switching can be done by clicking on a partially visible window, as is the common way in other GUIs.

In 1988, Apple sued Microsoft for copyright infringement of the LISA and Apple Macintosh GUI. The court case lasted 4 years before almost all of Apple's claims were denied on a contractual technicality. Subsequent appeals by Apple were also denied. Microsoft and Apple apparently entered a final, private settlement of the matter in 1997.


GEOS





GEOS was launched in 1986. Originally written for the 8-bit Commodore 64 home computer, and shortly afterwards for the Apple II series, it was later ported to IBM PC systems. It came with several application programs like a calendar and word processor, and a cut-down version served as the basis for America Online's DOS client. Compared to the competing Windows 3.0 GUI it could run reasonably well on simpler hardware, but it was targeted at 8-bit machines and the 16-bit computer age was dawning.


The X Window System



The standard windowing system in the Unix world is the X Window System (commonly X11 or X), first released in the mid-1980s. Its precursor was the W Window System (1983); X was developed at MIT as part of Project Athena. Its original purpose was to allow users of the newly emerging graphic terminals to access remote graphics workstations without regard to the workstation's operating system or hardware. Due largely to the availability of its source code, X has become the standard layer for managing graphical and input/output devices and for building both local and remote graphical interfaces on virtually all Unix, Linux and other Unix-like operating systems.



A Unix based X Window System desktop (circa 1990).

X allows a graphical terminal user to make use of remote resources on the network as if they were all located locally to the user by running a single module of software called the X server. The software running on the remote machine is called the client application. X's network transparency protocols allow the display and input portions of any application to be separated from the remainder of the application and 'served up' to any of a large number of remote users. X is available today as free software.
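To make X's client/server split concrete, here is a minimal sketch of an X client written against Xlib, the traditional C binding (the window size, file name and compile command are illustrative). The program is the client; it connects to whichever X server the DISPLAY environment variable names, local or remote, and asks that server to create and show a window.

/* xhello.c - minimal X client sketch; compile with: cc xhello.c -o xhello -lX11 */
#include <stdio.h>
#include <stdlib.h>
#include <X11/Xlib.h>

int main(void)
{
    /* Connect to the X server named by $DISPLAY (":0" locally, "host:0" remotely). */
    Display *dpy = XOpenDisplay(NULL);
    if (dpy == NULL) {
        fprintf(stderr, "cannot open display\n");
        return EXIT_FAILURE;
    }

    int screen = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     10, 10, 200, 100, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));

    /* Ask the server to report key presses, then map (show) the window. */
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);

    XEvent ev;
    for (;;) {
        XNextEvent(dpy, &ev);      /* block until the server sends an event */
        if (ev.type == KeyPress)   /* quit on any key press */
            break;
    }

    XCloseDisplay(dpy);
    return EXIT_SUCCESS;
}

Because all drawing requests and input events travel over the display connection, the same binary can drive a screen on another machine simply by pointing DISPLAY at that machine's X server.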




The 1990s: Mainstream usage of the desktop



The widespread adoption of the PC platform in homes and small businesses popularized computers among people with no formal training. This created a fast-growing market, opening an opportunity for commercial exploitation of easy-to-use interfaces and making the incremental refinement of existing GUIs for home systems economically viable.

Also, the spread of high-color and true-color display adapters providing thousands and millions of colors, along with faster CPUs, accelerated graphics cards, cheaper RAM, storage devices an order of magnitude larger (from megabytes to gigabytes), and greater telecom bandwidth at lower cost, helped to create an environment in which the common user could run complex GUIs that began to favor aesthetics.



Windows 95 and "a computer in every home" (the 32-bit versions)



After Windows 3.11, Microsoft began to develop a new consumer-oriented version of the operating system. Windows 95 was intended to integrate Microsoft's formerly separate MS-DOS and Windows products and included an enhanced version of DOS, often referred to as MS-DOS 7.0. It also featured a significant redesign of the GUI, dubbed "Cairo", which was eventually used in Windows NT 4.0. Both Windows 95 and Windows NT were 32-bit technologies which could exploit the abilities of the Intel 80386 CPU, such as preemptive multitasking and up to 4 GiB of linear memory address space. In the marketplace, Windows 95 was an unqualified success, promoting a general upgrade to 32-bit technology, and within a year or two of its release it had become the most successful operating system ever produced.






Windows 95 saw the beginning of the Browser wars when the World Wide Web began receiving a great deal of attention in the popular culture and mass media. Microsoft at first did not see potential in the Web and Windows 95 was shipped with Microsoft's own online service called The Microsoft Network, which was dial-up only and was used primarily for its own content, not internet access. As versions of Netscape Navigator and Internet Explorer were released at a rapid pace over the following few years, Microsoft used its desktop dominance to push its browser and shape the ecology of the web mainly as a monoculture.




Windows 95 (and its 32-bit professional counterpart, Windows NT) evolved through the years into Windows 98, Windows ME, Windows 2000 and Windows XP, sharing the same basic GUI themes (in XP, the user can even switch back to the classic Windows 95/NT look). Windows 98 introduced the Active Desktop theme, allowing an HTML-based approach to the desktop, but this feature was coldly received by customers, who frequently disabled it. In the end, Windows Vista discontinued it definitively, but added a new Sidebar to the desktop.


Mac OS





The Macintosh's GUI has been revised only infrequently since 1984, with major updates including System 7; it underwent its largest revision with the introduction of the "Aqua" interface in 2001's Mac OS X. That was a new operating system built primarily on technology from NeXTSTEP, with UI elements of the original Mac OS grafted on. Mac OS X uses a technology called Quartz for graphics rendering and on-screen drawing. Some interface features of Mac OS X are inherited from NeXTSTEP (such as the Dock, the automatic wait cursor, and double-buffered windows giving a solid appearance and flicker-free window redraws), while others are inherited from the old Mac OS operating system (the single system-wide menu bar). Mac OS X v10.3 introduced features to improve usability, including Exposé, which is designed to make finding open windows easier.

With Mac OS X v10.4, new features were added, including Dashboard (a virtual alternate desktop for mini specific-purpose applications) and a search tool called Spotlight, which provides users with an option for searching through files instead of browsing through folders.

In 2007, with the release of Mac OS X 10.5 Leopard, the look of the OS was revised again. Brushed metal was removed in favor of the Unified theme of grey gradients, very similar to the style iTunes has had since version 6.0. It also incorporates Cover Flow into the Finder. In addition, the Dock has been changed into a reflective shelf with the application icons sitting on it, the menu bar has the option of partial transparency, and all windows have an increased drop shadow.




GUIs built on the X Window System




In the early days of X Window development, Sun Microsystems and AT&T attempted to push for a GUI standard called OPEN LOOK in competition with Motif. OPEN LOOK was a well-designed standard developed from scratch in conjunction with Xerox, while Motif was a collective effort that fell into place, with a look and feel patterned after Windows 3.11. Many who worked on OPEN LOOK at the time appreciated its design coherence. Motif prevailed in the UNIX GUI battles and became the basis for the Common Desktop Environment (CDE). CDE was based on VUE (Visual User Environment), a proprietary desktop from Hewlett-Packard that in turn was based on the Motif look and feel.





In the late 1990s, there was significant growth in the Unix world, especially among the free software community. New graphical desktop movements grew up around Linux and similar operating systems, based on the X Window System. A new emphasis on providing an integrated and uniform interface to the user brought about new desktop environments, such as KDE, GNOME and XFCE which are supplanting CDE in popularity on both Unix and Unix-like operating systems. The XFCE, KDE and GNOME look and feel each tend to undergo more rapid change and less codification than the earlier OPEN LOOK and Motif environments.

In the latter part of the first decade of the 21st century X Windows GUIs such as Compiz Fusion, Beryl and KDE 4 began to incorporate the translucency and drop shadow effects first seen on Mac OS X.


The Amiga Workbench in the 1990s





Later releases added improvements over the original Workbench, like support for high-color Workbench screens, context menus, and embossed 2D icons with a pseudo-3D appearance. Even so, Amiga users often preferred alternative interfaces to the standard Workbench, such as Directory Opus or the Scalos interface.




The use of improved, third-party GUI engines became common amongst users who preferred more attractive interfaces, such as Magic User Interface (MUI) and ReAction. These object-oriented graphics engines, driven by "classes" of graphic objects and functions, were later standardized into the Amiga environment and turned Amiga Workbench into a complete and modern guided interface, with new standard gadgets, animated buttons, true 24-bit-color icons, increased use of wallpapers for screens and windows, alpha channels, transparency, and shadows, as any modern GUI requires. Modern derivatives of Workbench are Ambient for MorphOS, Scalos, Workbench for AmigaOS 4, and Wanderer/Zune for AROS.

The use of object-oriented graphics engines such as ReAction dramatically changed the look and feel of the GUI to match current style guides.



RISC OS





Early versions of what came to be called RISC OS were known as Arthur, released in 1987 by Acorn Computers. RISC OS was a colour GUI operating system which used three-button mice, a taskbar (called the icon bar), and a file navigator similar to that of Mac OS. Acorn created RISC OS in the 1980s for its ARM-based computers. The GUI of RISC OS has developed over the versions from 1987 to the present day, with version 4.39 offering extensive interface customisation.



OS/2





Originally collaboratively developed by Microsoft and IBM to replace DOS, OS/2 version 1.0 (released in 1987) had no GUI at all. Version 1.1 (released 1988) included Presentation Manager (PM), which looked a lot like the later Windows 3.0 UI. After the split with Microsoft, IBM developed the Workplace Shell (WPS) for version 2.0 (released in 1992), a quite radical, object-oriented approach to GUIs. Microsoft later imitated much of this in Windows 95.



NeXTSTEP





The NeXTSTEP user interface was used in the NeXT line of computers. NeXTSTEP's first major version was released in 1989. It used Display PostScript for its graphical underpinning. The NeXTSTEP interface's most significant feature was the Dock, carried with some modification into Mac OS X, and it had other minor interface details that some found made it easier and more intuitive to use than previous GUIs. NeXTSTEP's GUI was the first to feature opaque dragging of windows, on machines that were comparatively weak by today's standards, aided by high-performance graphics hardware.


BeOS





BeOS was developed on custom AT&T Hobbit-based computers before switching to PowerPC hardware, by a team led by former Apple executive Jean-Louis Gassée, as an alternative to the Macintosh OS and GUI. BeOS was later ported to Intel hardware. It used an object-oriented kernel written by Be and did not use the X Window System, but a different GUI written from scratch. Much effort was spent by the developers to make it an efficient platform for multimedia applications. Be Inc. was acquired by PalmSource, Inc. (Palm Inc. at the time) in 2001. The BeOS GUI still lives on in Haiku, an open source reimplementation of BeOS.



Current trends " 3D User Interface"


As of 2008, a recent trend in desktop technology is the inclusion of 3D effects in window management. It is based on experimental research in user interface design that tries to expand the expressive power of existing toolkits in order to enhance the physical cues that allow for direct manipulation. New effects common to several projects are scale resizing and zooming, various window transformations and animations (wobbling windows, smooth minimization to the system tray, and so on), composition of images (used for window drop shadows and transparency), and enhancement of the global organization of open windows (zooming to virtual desktops, desktop cube, Exposé, etc.). The proof-of-concept BumpTop desktop combines a physical representation of documents with tools for document classification possible only in the simulated environment, like instant reordering and automated grouping of related documents.




Compiz running on Fedora Core 6 with AIGLX


These effects were popularized by the widespread availability of 3D video cards (driven mainly by gaming), which allow complex visual processing with low CPU use: the 3D acceleration in most modern graphics cards is used to render the application clients within a 3D scene. The application window is drawn off-screen in a pixel buffer, and the graphics card renders it into the 3D scene.

This has the advantage of moving some of the window rendering to the GPU on the graphics card, thus reducing the load on the main CPU, but the facilities that allow this must be present on the graphics card for the technique to be usable.

Examples of 3D user interface software include XGL and Compiz from Novell, and AIGLX bundled with Red Hat Fedora. Quartz Extreme for Mac OS X and Windows Vista's Aero interface use 3D rendering for shading and transparency effects, as well as for Exposé and Windows Flip and Flip 3D, respectively. AmigaOS 4.1 uses a Cairo-based 2D vector interface integrated with a 3D hardware-accelerated Porter-Duff image composition engine, while its counterpart clone MorphOS 2.0 features Ambient, a complete 3D GUI based on a subset of OpenGL. Vista uses Direct3D to accomplish this, whereas the other interfaces use OpenGL.

10) HISTORY OF PROGRAMMING LANGUAGES

by engr. AFAN BAHADUR KHAN



This article discusses the major developments in the history of programming languages. For a detailed timeline of events, see the timeline of programming languages.

Before 1940



The first programming languages predate the modern computer. At first, the languages were codes.

During a nine-month period in 1842-1843, Ada Lovelace translated Italian mathematician Luigi Menabrea's memoir on Charles Babbage's newest proposed machine, the Analytical Engine. To the article she appended a set of notes which specified in complete detail a method for calculating Bernoulli numbers with the Engine, recognized by some historians as the world's first computer program, although some biographers debate the extent of her original contributions versus those of Babbage.

The Jacquard loom used holes in punched cards to represent loom arm movements in order to weave decorative patterns automatically.

Herman Hollerith realized that he could encode information on punch cards when he observed that railroad train conductors would encode the appearance of the ticket holders on the train tickets using the position of punched holes on the tickets. Hollerith then proceeded to encode the 1890 census data on punch cards.

The first computer codes were specialized for their applications. In the first decades of the twentieth century, numerical calculations were based on decimal numbers. Eventually it was realized that logic could be represented with numbers, as well as with words. For example, Alonzo Church was able to express the lambda calculus in a formulaic way. The Turing machine was an abstraction of the operation of a tape-marking machine, such as those in use at the telephone companies. However, unlike the lambda calculus, Turing's code does not serve well as a basis for higher-level languages; its principal use is in rigorous analyses of algorithmic complexity.

Like many "firsts" in history, the first modern programming language is hard to identify. From the start, the restrictions of the hardware defined the language. Punch cards allowed 80 columns, but some of the columns had to be used for a sorting number on each card. Fortran included some keywords which were the same as English words, such as "IF", "GOTO" (go to) and "CONTINUE". The use of a magnetic drum for memory meant that computer programs also had to be interleaved with the rotations of the drum. Thus the programs were more hardware dependent than today.

To some people the answer depends on how much power and human-readability is required before the status of "programming language" is granted. Jacquard looms and Charles Babbage's Difference Engine both had simple, extremely limited languages for describing the actions that these machines should perform. One can even regard the punch holes on a player piano scroll as a limited domain-specific programming language, albeit not designed for human consumption.



The 1940s



In the 1940s the first recognizably modern, electrically powered computers were created. The limited speed and memory capacity forced programmers to write hand tuned assembly language programs. It was soon discovered that programming in assembly language required a great deal of intellectual effort and was error-prone.

In 1948, Konrad Zuse published a paper about his programming language Plankalkül. However, it was not implemented in his time and his original contributions were isolated from other developments.

Some important languages that were developed in this period include:

1943 - Plankalkül (Konrad Zuse)
1943 - ENIAC coding system
1949 - C-10



The 1950s and 1960s



In the 1950s the first three modern programming languages whose descendants are still in widespread use today were designed:

FORTRAN (1955), the "FORmula TRANslator, invented by John W. Backus et al.;
LISP, the "LISt Processor", invented by John McCarthy et al.;
COBOL, the COmmon Business Oriented Language, created by the Short Range Committee, heavily influenced by Grace Hopper.
Another milestone in the late 1950s was the publication, by a committee of American and European computer scientists, of "a new language for algorithms": the Algol 60 Report (ALGOL stands for "ALGOrithmic Language"). This report consolidated many ideas circulating at the time and featured two key language innovations:

arbitrarily nested block structure: meaningful chunks of code could be grouped into statement blocks without having to be turned into separate, explicitly named procedures;
lexical scoping: a block could have its own variables that code outside the chunk cannot access, let alone manipulate.
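As a rough, hedged illustration of these two ideas in a later Algol-family language (C inherits both block structure and lexical scoping from the Algol tradition; the variable names here are purely illustrative):

#include <stdio.h>

int main(void)
{
    int total = 0;                     /* visible throughout main */

    {                                  /* an anonymous nested block, not a separately named procedure */
        int step = 5;                  /* lexically scoped: exists only inside this block */
        total += step;
    }

    /* printf("%d\n", step);              would not compile: step is out of scope here */
    printf("total = %d\n", total);     /* prints: total = 5 */
    return 0;
}

Algol 60 itself wrote such blocks with begin ... end rather than braces, but the scoping idea is the same.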
Another innovation, related to this, was in how the language was described:

a mathematically exact notation, Backus-Naur Form (BNF), was used to describe the language's syntax. Nearly all subsequent programming languages have used a variant of BNF to describe the context-free portion of their syntax.
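As a small, hedged sketch of what this looks like in practice (the toy grammar below is not taken from the Algol 60 Report), here is a three-rule arithmetic grammar in BNF and a recursive-descent evaluator in C whose functions mirror the grammar rules, with the left-recursive rules implemented as loops:

/* BNF for a toy expression language:
 *   <expr>   ::= <term>   | <expr> "+" <term>
 *   <term>   ::= <factor> | <term> "*" <factor>
 *   <factor> ::= <digit>  | "(" <expr> ")"
 */
#include <stdio.h>

static const char *p;        /* cursor into the input string */
static int expr(void);       /* forward declaration: the rules are mutually recursive */

static int factor(void)      /* <factor> ::= <digit> | "(" <expr> ")" */
{
    if (*p == '(') {
        p++;                 /* consume '(' */
        int v = expr();
        p++;                 /* consume ')' (error handling omitted) */
        return v;
    }
    return *p++ - '0';       /* a single digit */
}

static int term(void)        /* <term> ::= <factor> | <term> "*" <factor> */
{
    int v = factor();
    while (*p == '*') { p++; v *= factor(); }
    return v;
}

static int expr(void)        /* <expr> ::= <term> | <expr> "+" <term> */
{
    int v = term();
    while (*p == '+') { p++; v += term(); }
    return v;
}

int main(void)
{
    p = "2+3*(4+1)";
    printf("2+3*(4+1) = %d\n", expr());   /* prints 17 */
    return 0;
}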
Algol 60 was particularly influential in the design of later languages, some of which soon became more popular. The Burroughs large systems were designed to be programmed in an extended subset of Algol.

Algol's key ideas were continued, producing Algol 68:

syntax and semantics became even more orthogonal, with anonymous routines, a recursive typing system with higher-order functions, etc.;
not only the context-free part, but the full language syntax and semantics were defined formally, in terms of Van Wijngaarden grammar, a formalism designed specifically for this purpose.
Algol 68's many little-used language features (e.g. concurrent and parallel blocks) and its complex system of syntactic shortcuts and automatic type coercions made it unpopular with implementers and gained it a reputation of being difficult. Niklaus Wirth actually walked out of the design committee to create the simpler Pascal language.

Overview:

1951 - Regional Assembly Language
1952 - Autocode
1954 - FORTRAN
1955 - FLOW-MATIC (forerunner to COBOL)
1957 - COMTRAN (forerunner to COBOL)
1958 - LISP
1958 - ALGOL 58
1959 - FACT (forerunner to COBOL)
1959 - COBOL
1962 - APL
1962 - Simula
1964 - BASIC
1964 - PL/I



1967-1978: establishing fundamental paradigms



The period from the late 1960s to the late 1970s brought a major flowering of programming languages. Most of the major language paradigms now in use were invented in this period:

Simula, invented in the late 1960s by Nygaard and Dahl as a superset of Algol 60, was the first language designed to support object-oriented programming.
C, an early systems programming language, was developed by Dennis Ritchie and Ken Thompson at Bell Labs between 1969 and 1973.
Smalltalk (mid 1970s) provided a complete ground-up design of an object-oriented language.
Prolog, designed in 1972 by Colmerauer, Roussel, and Kowalski, was the first logic programming language.
ML built a polymorphic type system (invented by Robin Milner in 1973) on top of Lisp, pioneering statically typed functional programming languages.
Each of these languages spawned an entire family of descendants, and most modern languages count at least one of them in their ancestry.

The 1960s and 1970s also saw considerable debate over the merits of "structured programming", which essentially meant programming without the use of GOTO. This debate was closely related to language design: some languages did not include GOTO, which forced structured programming on the programmer. Although the debate raged hotly at the time, nearly all programmers now agree that, even in languages that provide GOTO, it is bad style to use it except in rare circumstances. As a result, later generations of language designers have found the structured programming debate tedious and even bewildering.
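As a minimal, hedged illustration of the stylistic difference (the computation is deliberately trivial), the two fragments below compute the same sum, first in the unstructured goto style and then with a structured loop:

#include <stdio.h>

int main(void)
{
    /* Unstructured style: control flow assembled from labels and goto. */
    int i = 0, sum = 0;
loop:
    if (i >= 10) goto done;
    sum += i;
    i++;
    goto loop;
done:
    printf("sum = %d\n", sum);       /* prints: sum = 45 */

    /* Structured style: the same computation expressed as a while loop. */
    int j = 0, sum2 = 0;
    while (j < 10) {
        sum2 += j;
        j++;
    }
    printf("sum = %d\n", sum2);      /* prints: sum = 45 */
    return 0;
}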

Some important languages that were developed in this period include:

1970 - Pascal
1970 - Forth
1972 - C
1972 - Smalltalk
1972 - Prolog
1973 - ML
1978 - SQL (initially only a query language, later extended with programming constructs)


The 1980s: consolidation, modules, performance



The 1980s were years of relative consolidation. C++ combined object-oriented and systems programming. The United States government standardized Ada, a systems programming language intended for use by defense contractors. In Japan and elsewhere, vast sums were spent investigating so-called "fifth generation" languages that incorporated logic programming constructs. The functional languages community moved to standardize ML and Lisp. Rather than inventing new paradigms, all of these movements elaborated upon the ideas invented in the previous decade.

However, one important new trend in language design was an increased focus on programming for large-scale systems through the use of modules, or large-scale organizational units of code. Modula, Ada, and ML all developed notable module systems in the 1980s. Module systems were often wedded to generic programming constructs: generics being, in essence, parameterized modules (see also parametric polymorphism).

Although major new paradigms for programming languages did not appear, many researchers expanded on the ideas of prior languages and adapted them to new contexts. For example, the languages of the Argus and Emerald systems adapted object-oriented programming to distributed systems.

The 1980s also brought advances in programming language implementation. The RISC movement in computer architecture postulated that hardware should be designed for compilers rather than for human assembly programmers. Aided by processor speed improvements that enabled increasingly aggressive compilation techniques, the RISC movement sparked greater interest in compilation technology for high-level languages.

Language technology continued along these lines well into the 1990s.

Some important languages that were developed in this period include:

1983 - Ada
1983 - C++
1985 - Eiffel
1987 - Perl
1989 - FL (Backus)


The 1990s: the Internet age


The 1990s saw no fundamental novelty, but much recombination as well as maturation of old ideas. A big driving philosophy was programmer productivity. Many "rapid application development" languages emerged, which usually came with an IDE, garbage collection, and were descendants of older languages. All such languages were object-oriented. These included Object Pascal, Visual Basic, and C#. Java was a more conservative language that also featured garbage collection and received much attention. More radical and innovative than the RAD languages were the new scripting languages. These did not directly descend from other languages and featured new syntaxes and more liberal incorporation of features. Many consider these scripting languages to be more productive than even the RAD languages, but often because of choices that make small programs simpler but large programs more difficult to write and maintain. Nevertheless, scripting languages came to be the most prominent ones used in connection with the Web.

Some important languages that were developed in this period include:

1990 - Haskell
1991 - Python
1993 - Ruby
1993 - Lua
1994 - ANSI Common Lisp
1995 - Java
1995 - JavaScript
1995 - PHP
2000 - C#
2008 - JavaFX Script



Current trends



Programming language evolution continues, in both industry and research. Some of the current trends include:

Mechanisms for adding security and reliability verification to the language: extended static checking, information flow control, static thread safety.

Alternative mechanisms for modularity: mixins, delegates, aspects.

Component-oriented software development.

Metaprogramming, reflection or access to the abstract syntax tree.

Increased emphasis on distribution and mobility.

Integration with databases, including XML and relational databases.

Support for Unicode so that source code (program text) is not restricted to those characters contained in the ASCII character set; allowing, for example, use of non-Latin-based scripts or extended punctuation.

XML for graphical interface (XUL, XAML).

9) HISTORY OF SOFTWARE ENGINEERING

by engr. AFAN BAHADUR KHAN



Software engineering has evolved steadily from its founding days in the 1940s to the present day. Applications have evolved continuously. The ongoing goal of improving technologies and practices seeks to raise the productivity of practitioners and the quality of the applications delivered to users.


Overview


There are a number of areas where the evolution of software engineering is notable:

Emergence as a profession: By the early 1980s, software engineering had already emerged as a bona fide profession, to stand beside computer science and traditional engineering. See also software engineering professionalism.
Role of women: In the 1940s, 1950s, and 1960s, men often filled the more prestigious and better paying hardware engineering roles, but often delegated the writing of software to women. Grace Hopper, Jamie Fenton and many other unsung women filled many programming jobs during the first several decades of software engineering. Today, many fewer women work in software engineering than in other professions, a situation whose cause is not clearly identified and is often attributed to sexual discrimination, cyberculture or bias in education. Many academic and professional organizations are trying hard to solve this imbalance.
Processes: Processes have become a big part of software engineering and are hailed for their potential to improve software and sharply criticized for their potential to constrict programmers.
Cost of hardware: The relative cost of software versus hardware has changed substantially over the last 50 years. When mainframes were expensive and required large support staffs, the few organizations buying them also had the resources to fund large, expensive custom software engineering projects. Computers are now much more numerous and much more powerful, which has several effects on software. The larger market can support large projects to create commercial off the shelf software, as done by companies such as Microsoft. The cheap machines allow each programmer to have a terminal capable of fairly rapid compilation. The programs in question can use techniques such as garbage collection, which make them easier and faster for the programmer to write. On the other hand, many fewer organizations are interested in employing programmers for large custom software projects, instead using commercial off the shelf software as much as possible.



The Pioneering Era




The most important development was that new computers were coming out almost every year or two, rendering existing ones obsolete. Software people had to rewrite all their programs to run on these new machines. Programmers did not have computers on their desks and had to go to the "machine room". Jobs were run by signing up for machine time or by operational staff. Jobs were run by putting punched cards for input into the machine's card reader and waiting for results to come back on the printer.

The field was so new that the idea of management by schedule was non-existent. Making predictions of a project's completion date was almost impossible. Computer hardware was application-specific. Scientific and business tasks needed different machines. Due to the need to frequently translate old software to meet the needs of new machines, high-order languages like FORTRAN, COBOL, and ALGOL were developed. Hardware vendors gave away systems software for free as hardware could not be sold without software. A few companies sold the service of building custom software but no software companies were selling packaged software.

The notion of reuse flourished. As software was free, user organizations commonly gave it away. Groups like IBM's scientific user group SHARE offered catalogs of reusable components. Academia did not yet teach the principles of computer science. Modular programming and data abstraction were already being used in programming.



1945 to 1965: The origins



The term software engineering first appeared in the late 1950s and early 1960s. Programmers have always known about civil, electrical, and computer engineering and debated what engineering might mean for software.

The NATO Science Committee sponsored two conferences on software engineering in 1968 (Garmisch, Germany — see conference report) and 1969, which gave the field its initial boost. Many believe these conferences marked the official start of the profession of software engineering.


1965 to 1985: The software crisis



Software engineering was spurred by the so-called software crisis of the 1960s, 1970s, and 1980s, which identified many of the problems of software development. Many software projects ran over budget and schedule. Some projects caused property damage. A few projects caused loss of life. The software crisis was originally defined in terms of productivity, but evolved to emphasize quality. Some used the term software crisis to refer to their inability to hire enough qualified programmers.

Cost and Budget Overruns: The OS/360 operating system was a classic example. This decade-long project from the 1960s eventually produced one of the most complex software systems of its time. OS/360 was one of the first large (1000 programmers) software projects. Fred Brooks claims in The Mythical Man-Month that he made a multi-million dollar mistake by not developing a coherent architecture before starting development.
Property Damage: Software defects can cause property damage. Poor software security allows hackers to steal identities, costing time, money, and reputations.
Life and Death: Software defects can kill. Some embedded systems used in radiotherapy machines failed so catastrophically that they administered lethal doses of radiation to patients. The most famous of these failures is the Therac 25 incident.
Peter G. Neumann has kept a contemporary list of software problems and disasters. The software crisis has been slowly fizzling out, because it is unrealistic to remain in crisis mode for more than 20 years. Software engineers are accepting that the problems of software engineering are truly difficult, and that only hard work over many decades can solve them.



1985 to 1989: No silver bullet



For decades, solving the software crisis was paramount to researchers and companies producing software tools. Seemingly, they trumpeted every new technology and practice from the 1970s to the 1990s as a silver bullet to solve the software crisis. Tools, discipline, formal methods, process, and professionalism were touted as silver bullets:

Tools: Especially emphasized were tools: Structured programming, object-oriented programming, CASE tools, Ada, Java, documentation, standards, and Unified Modeling Language were touted as silver bullets.
Discipline: Some pundits argued that the software crisis was due to the lack of discipline of programmers.
Formal methods: Some believed that if formal engineering methodologies would be applied to software development, then production of software would become as predictable an industry as other branches of engineering. They advocated proving all programs correct.
Process: Many advocated the use of defined processes and methodologies like the Capability Maturity Model.
Professionalism: This led to work on a code of ethics, licenses, and professionalism.
In 1986, Fred Brooks published the No Silver Bullet article, arguing that no individual technology or practice would ever make a 10-fold improvement in productivity within 10 years.

Debate about silver bullets raged over the following decade. Advocates for Ada, components, and processes continued arguing for years that their favorite technology would be a silver bullet. Skeptics disagreed. Eventually, almost everyone accepted that no silver bullet would ever be found. Yet, claims about silver bullets pop up now and again, even today.

Some interpret no silver bullet to mean that software engineering failed. The search for a single key to success never worked. All known technologies and practices have only made incremental improvements to productivity and quality. Yet, there are no silver bullets for any other profession, either. Others interpret no silver bullet as proof that software engineering has finally matured and recognized that projects succeed due to hard work.

However, it could also be said that there are, in fact, a range of silver bullets today, including lightweight methodologies (see "Project management"), spreadsheet calculators, customized browsers, in-site search engines, database report generators, integrated design-test coding-editors with memory/differences/undo, and specialty shops that generate niche software, such as information websites, at a fraction of the cost of totally customized website development. Nevertheless, the field of software engineering appears too complex and diverse for a single "silver bullet" to improve most issues, and each issue accounts for only a small portion of all software problems.


1990 to 1999: Prominence of the Internet



The rise of the Internet led to very rapid growth in the demand for international information display/e-mail systems on the world wide web. Programmers were required to handle illustrations, maps, photographs, and other images, plus simple animation, at a rate never before seen, with few well-known methods to optimize image display/storage (such as the use of thumbnail images).

The growth of browser usage, built around the HTML language, changed the way in which information display and retrieval was organized. Widespread network connections led to the growth (and attempted prevention) of international computer viruses on MS Windows computers, and the vast proliferation of spam e-mail became a major design issue in e-mail systems, flooding communication channels and requiring semi-automated pre-screening. Keyword-search systems evolved into web-based search engines, and many software systems had to be re-designed for international searching, depending on Search Engine Optimization (SEO) techniques. Human natural-language translation systems were needed to attempt to translate the information flow in multiple foreign languages, with many software systems being designed for multi-language usage based on design concepts from human translators. Typical user bases went from hundreds or thousands of users to, often, many millions of international users.



2000 to Present: Lightweight Methodologies


With the expanding demand for software in many smaller organizations, the need for inexpensive software solutions led to the growth of simpler, faster methodologies that could take running software from requirements to deployment more quickly and easily. The use of rapid prototyping evolved into entire lightweight methodologies, such as Extreme Programming (XP), which attempted to simplify many areas of software engineering, including requirements gathering and reliability testing, for the growing number of small software systems. Very large software systems still used heavily documented methodologies, with many volumes in the documentation set; however, smaller systems had a simpler, faster alternative approach to managing the development and maintenance of software calculations and algorithms, information storage/retrieval, and display.



Current trends in software engineering


Software engineering is a young discipline, and is still developing. The directions in which software engineering is developing include:


1- Aspects


Aspects help software engineers deal with quality attributes by providing tools to add or remove boilerplate code from many areas in the source code. Aspects describe how all objects or functions should behave in particular circumstances. For example, aspects can add debugging, logging, or locking control into all objects of particular types. Researchers are currently working to understand how to use aspects to design general-purpose code. Related concepts include generative programming and templates.
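C itself has no aspect support, but as a loose, hedged analogy to aspect-style cross-cutting logging, GCC's -finstrument-functions option can weave entry and exit hooks into every compiled function without touching their source; the hook bodies below are only a sketch:

/* trace.c - compile with: gcc -finstrument-functions trace.c -o trace
 * GCC inserts calls to the two hooks below around every instrumented function,
 * a rough compiler-driven analogue of an aspect adding logging to all functions at once. */
#include <stdio.h>

__attribute__((no_instrument_function))
void __cyg_profile_func_enter(void *fn, void *call_site)
{
    fprintf(stderr, "enter %p (called from %p)\n", fn, call_site);
}

__attribute__((no_instrument_function))
void __cyg_profile_func_exit(void *fn, void *call_site)
{
    (void)call_site;
    fprintf(stderr, "exit  %p\n", fn);
}

static int add(int a, int b) { return a + b; }   /* ordinary code: no logging written here */

int main(void)
{
    printf("%d\n", add(2, 3));   /* the enter/exit trace lines appear automatically */
    return 0;
}

A real aspect-oriented tool such as AspectJ expresses this kind of cross-cutting behavior as first-class aspects selected by patterns over the program, rather than as compiler instrumentation hooks.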


2- Agile


Agile software development guides software development projects that evolve rapidly with changing expectations and competitive markets. Proponents of this method believe that heavy, document-driven processes (like TickIT, CMM and ISO 9000) are fading in importance. Some people believe that companies and agencies export many of the jobs that can be guided by heavy-weight processes. Related concepts include Extreme Programming, Scrum, and Lean software development.

3- Experimental



Experimental software engineering is a branch of software engineering interested in devising experiments on software, in collecting data from the experiments, and in devising laws and theories from this data. Proponents of this method advocate that the nature of software is such that we can advance our knowledge of software only through experiments.


4- Model-driven


Model Driven Design develops textual and graphical models as primary design artifacts. Development tools are available that use model transformation and code generation to generate well-organized code fragments that serve as a basis for producing complete applications.
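As a very small, hedged sketch of the code-generation half of this idea, the program below treats a hard-coded table of fields as its "model" (a real model-driven tool would instead read a UML or DSL description) and emits a C struct plus accessor declarations from it:

/* generate.c - toy model-to-code generator; the "model" is the table below. */
#include <stdio.h>

struct field { const char *type; const char *name; };

/* The model: an entity named "Customer" with three attributes. */
static const struct field customer_fields[] = {
    { "int",          "id"      },
    { "const char *", "name"    },
    { "double",       "balance" },
};

int main(void)
{
    const char *entity = "Customer";
    size_t n = sizeof customer_fields / sizeof customer_fields[0];

    /* Emit the struct definition. */
    printf("struct %s {\n", entity);
    for (size_t i = 0; i < n; i++)
        printf("    %s %s;\n", customer_fields[i].type, customer_fields[i].name);
    printf("};\n\n");

    /* Emit one getter declaration per attribute. */
    for (size_t i = 0; i < n; i++)
        printf("%s %s_get_%s(const struct %s *obj);\n",
               customer_fields[i].type, entity, customer_fields[i].name, entity);
    return 0;
}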


5- Software Product Lines


Software Product Lines is a systematic way to produce families of software systems, instead of creating a succession of completely individual products. This method emphasizes extensive, systematic, formal code reuse, to try to industrialize the software development process.

The Future of Software Engineering conference (FOSE), held at ICSE 2000, documented the state of the art of SE in 2000 and listed many problems to be solved over the next decade. The FOSE tracks at the ICSE 2000 and ICSE 2007 conferences also help identify the state of the art in software engineering.


Software engineering today



The profession is trying to define its boundaries and content. The Software Engineering Body of Knowledge (SWEBOK) was put forward as an ISO standard during 2006 (ISO/IEC TR 19759).

In 2006, Money Magazine and Salary.com rated software engineering as the best job in America in terms of growth, pay, stress levels, flexibility in hours and working environment, creativity, and how easy it is to enter and advance in the field.

8) HISTORY OF LAPTOPS

by engr. AFAN BAHADUR KHAN



Before laptop/notebook computers were technically feasible, similar ideas had been proposed, most notably Alan Kay's Dynabook concept, developed at Xerox PARC in the early 1970s. What was probably the first portable computer was the Xerox NoteTaker, again developed at Xerox PARC, in 1976. However, only ten prototypes were built.



Osborne 1




The first commercially available portable computer was the Osborne 1 in 1981, which used the CP/M operating system. Although it was large and heavy compared to today's laptops, with a tiny 5" CRT monitor, it had a near-revolutionary impact on business, as professionals were able to take their computer and data with them for the first time. This and other "luggables" were inspired by what was probably the first portable computer, the Xerox NoteTaker. The Osborne was about the size of a portable sewing machine, and more importantly, could be carried on commercial aircraft. However, it was not possible to run the Osborne on batteries: it had to be plugged into mains.




Kaypro II



In 1982 Kaypro introduced the Kaypro II, a CP/M-based competitor to the Osborne 1. The Kaypro II featured a display nearly twice as big as the Osborne's, at 9", and double-density floppy drives with twice the storage capacity. Following in the standard set by the Osborne 1, the Kaypro II also included a software bundle when purchased new.


Bondwell 2


Although it wasn't released until 1985, well after the decline of CP/M as a major operating system, the Bondwell 2 is one of only a handful of CP/M laptops. It used a Z80 CPU running at 4 MHz, had 64 KB of RAM and, unusually for a CP/M machine, a built-in 3.5" floppy disk drive. It had an 80×25 character-based LCD mounted on a hinge, similar to modern laptops, making it one of the first computers to use this form factor.


Other CP/M laptops



The other CP/M laptops were the Epson PX-4 (or HX-40) and PX-8 (Geneva), The NEC PC-8401A, and the NEC PC-8500. These four units, however, utilized modified CP/M systems in ROM, and did not come standard with any floppy or hard disks.



Compaq Portable



A more enduring success was the Compaq Portable, the first product from Compaq, introduced in 1983, by which time the IBM Personal Computer had become the standard platform. Although scarcely more portable than the Osborne machines, and also requiring AC power to run, it ran MS-DOS and was the first true legal IBM clone (IBM's own later Portable Computer, which arrived in 1984, was notably less IBM PC-compatible than the Compaq).


Epson HX-20



Another significant machine announced in 1981, although first sold widely in 1983, was the Epson HX-20. A simple handheld computer, it featured a full-travel 68-key keyboard, rechargeable nickel-cadmium batteries, a small (120×32-pixel) dot-matrix LCD display with a 4-line, 20-characters-per-line text mode, a 24-column dot-matrix printer, a Microsoft BASIC interpreter, and 16 KB of RAM (expandable to 32 KB).



GRiD Compass



However, arguably the first true laptop was the GRiD Compass 1101, designed by Bill Moggridge in 1979-1980 and released in 1982. Enclosed in a magnesium case, it introduced the now-familiar clamshell design, in which the flat display folded shut against the keyboard. The computer could be run from batteries, and was equipped with a 320×200-pixel electroluminescent display and 384 kilobytes of bubble memory. It was not IBM-compatible, and its high price (US$8,000–10,000) limited it to specialized applications. However, it was used heavily by the U.S. military, and by NASA on the Space Shuttle during the 1980s. The GRiD's manufacturer subsequently earned significant returns on its patent rights as its innovations became commonplace. GRiD Systems Corp. was later bought by the Tandy (now RadioShack) Corporation.



Ampere


The Ampere, a sleek clamshell design by Ryu Oosake, also debuted in 1983. It offered a MC68008 microprocessor dedicated to running an APL interpreter residing in ROM.



Sharp and Gavilan



Two other noteworthy early laptops were the Sharp PC-5000 and the Gavilan SC, announced in 1983 but first sold in 1984. The Gavilan was notably the first computer to be marketed as a "laptop". It was also equipped with a pioneering touchpad-like pointing device, installed on a panel above the keyboard. Like the GRiD Compass, the Gavilan and the Sharp were housed in clamshell cases, but they were partly IBM-compatible, although primarily running their own system software. Both had LCD displays, and could connect to optional external printers. The Dulmont Magnum, launched internationally in 1984, was an Australian portable similar in layout to the Gavilan, which used the Intel 80186 processor.


Kyotronic 85



The year 1983 also saw the launch of what was probably the biggest-selling early laptop, the Kyocera Kyotronic 85. Owing much to the design of the earlier Epson HX-20, it was at first a slow seller in Japan, but it was quickly licensed by Tandy Corporation, Olivetti, and NEC, which recognised its potential and marketed it respectively as the TRS-80 Model 100 (or Tandy 100), Olivetti M-10, and NEC PC-8201. The machines ran on standard AA batteries. The Tandy's built-in programs, including a BASIC interpreter, a text editor, and a terminal program, were supplied by Microsoft and are thought to have been written in part by Bill Gates himself.

The computer was not a clamshell, but provided a tiltable 8-line × 40-character LCD above a full-travel keyboard. With its internal modem, it was a highly portable communications terminal. Due to its portability, good battery life (and ease of battery replacement), reliability (it had no moving parts), and low price (as little as US$300), the model was highly regarded, becoming a favorite among journalists. It weighed less than 2 kg with dimensions of 30×21.5×4.5 centimeters (12×8½×1¾ in). Initial specifications included 8 kilobytes of RAM (expandable to 24 KB) and a 3 MHz processor. The machine was in fact about the size of a paper notebook, but the term had yet to come into use and it was generally described as a "portable" computer.



Kaypro 2000



Possibly the first commercial IBM-compatible laptop was the Kaypro 2000, introduced in 1985. With its brushed-aluminum clamshell case, it was remarkably similar in design to modern laptops. It featured a 25-line by 80-character LCD, a detachable keyboard, and a pop-up 90 mm (3.5 inch) floppy drive.



IBM PC Convertible



Also among the first commercial IBM-compatible laptops was the IBM PC Convertible, introduced in 1986.



Toshiba T1000 and T1200



Two Toshiba models, the T1000 and T1200, were introduced in 1987. Although limited floppy-based DOS machines, with the operating system stored in ROM, the Toshiba models were small and light enough to be carried in a backpack, and could be run off lead-acid batteries. These also introduced the now-standard "resume" feature to DOS-based machines: the computer could be paused between sessions, without having to be restarted each time.


US Air Force


The first laptops to succeed on a large scale came in large part as the result of a Request For Proposal (RFP) issued by the U.S. Air Force in 1987. The contract would eventually lead to the purchase of over 200,000 laptops. It was fiercely contested, and the major PC companies of the time, including IBM, Toshiba, Compaq, NEC, and Zenith Data Systems (ZDS), rushed to develop laptops in an attempt to win the deal. ZDS, which had earlier won a landmark deal with the IRS for its Z-171, was awarded the contract for its SupersPort series. The SupersPort series was originally launched with an Intel 8086 processor, dual floppy disk drives, a backlit blue-and-white STN LCD screen, and a NiCd battery pack. Later models featured an Intel 80286 processor and a 20 MB hard disk drive. On the strength of this deal, ZDS became the world's largest laptop supplier in 1987 and 1988. ZDS partnered with Tottori Sanyo in the design and manufacturing of these laptops, a relationship notable as the first such deal between a major brand and an Asian original equipment manufacturer.


Cambridge Z88



Another notable computer was the Cambridge Z88, designed by Clive Sinclair and introduced in 1988. About the size of an A4 sheet of paper, it ran on standard batteries and contained basic spreadsheet, word processing, and communications programs. It anticipated the future miniaturization of the portable computer and, as a ROM-based machine with a small display, can — like the TRS-80 Model 100 — also be seen as a forerunner of the personal digital assistant.



Compaq SLT286



By the end of the 1980s, laptop computers were becoming popular among business people. The Compaq SLT286 debuted at the end of 1988 as the first battery-powered laptop to sport an internal hard disk drive and a VGA-compatible LCD screen.



NEC UltraLite



The NEC UltraLite, released in mid-1989, was perhaps the first notebook computer, weighing just over 2 kg; in lieu of a floppy or hard drive, it contained a 2 megabyte RAM drive, but this reduced its utility as well as its size.



Compaq LTE



More lightweight notebook computers with hard drives followed in the Compaq LTE series, introduced toward the end of 1989. Truly the size of a paper notebook, they also featured grayscale backlit displays with CGA resolution.


Apple Macintosh Portable




The first Apple Computer machine designed to be used on the go was the 1989 Macintosh Portable (although an LCD screen had been an option for the transportable Apple IIc in 1984). Unlike the Compaq LTE laptop released earlier that year, the Macintosh Portable was actually a "luggable" rather than a laptop. The Mac Portable was praised for its clear active-matrix display and long battery life, but it was a poor seller due to its bulk. In the absence of a true Apple laptop, several compatible machines such as the Outbound Laptop were available for Mac users; however, for copyright reasons, the user had to supply a set of Mac ROMs, which usually meant having to buy a new or used Macintosh as well.


Apple Powerbook



The Apple PowerBook series, introduced in October 1991, pioneered changes that are now de facto standards on laptops, such as room for a palm rest, and the inclusion of a pointing device (a trackball). The following year, IBM released its ThinkPad 700C, featuring a similar design (though with a distinctive red TrackPoint pointing device).

Later PowerBooks introduced the first 256-color display (PowerBook 165c, 1993), and the first true touchpad, first 16-bit sound recording, and first built-in Ethernet network adapter (PowerBook 500, 1994).




IBM RS/6000 N40



In 1994, IBM released the RS/6000 N40 laptop based on a PowerPC microprocessor running the AIX operating system, a variant of UNIX. It was manufactured by Tadpole Technology (now Tadpole Computer), who also manufactured laptops based on SPARC and Alpha microprocessors, the SPARCbook and ALPHAbook lines, respectively.


Windows 95 operating system



The summer of 1995 was a significant turning point in the history of notebook computing. In August of that year Microsoft introduced Windows 95. It was the first time that Microsoft had placed much of the power management control in the operating system. Prior to this point each brand used custom BIOS, drivers and in some cases, ASICs, to optimize the battery life of its machines. This move by Microsoft was controversial in the eyes of notebook designers because it greatly reduced their ability to innovate; however, it did serve its role in simplifying and stabilizing certain aspects of notebook design.


Intel Pentium processor


Windows 95 also ushered in the importance of the CD-ROM drive in mobile computing, and initiated the shift to the Intel Pentium processor as the base platform for notebooks. The Gateway Solo was the first notebook introduced with a Pentium processor and a CD-ROM. Also featuring a removable hard disk drive and floppy drive, the Solo was the first three-spindle (optical, floppy, and hard disk drive) notebook computer, and was extremely successful within the consumer segment of the market. In roughly the same time period the Dell Latitude, Toshiba Satellite, and IBM ThinkPad were reaching great success with Pentium-based two-spindle (hard disk and floppy disk drive) systems directed toward the corporate market.



Improved technology



As technology improved during the 1990s, the usefulness and popularity of laptops increased, and prices correspondingly went down. Several developments specific to laptops were quickly implemented, improving usability and performance. Among them were:


Improved battery technology. The heavy lead-acid batteries were replaced with lighter and more efficient technologies: first nickel-cadmium (NiCd), then nickel-metal hydride (NiMH), and then lithium-ion and lithium-polymer.
Power-saving processors. While laptops in 1991 were limited to the 80286 processor because of the energy demands of the more powerful 80386, the introduction of the Intel 386SL processor, designed for the specific power needs of laptops, marked the point at which laptop requirements were included in CPU design. The 386SL integrated a 386SX core with a memory controller, and this was paired with an I/O chip to create the SL chipset. It was more integrated than any previous solution, although its cost was higher, and it was widely adopted by the major notebook brands of the time. Intel followed this with the 486SL chipset, which used the same architecture. However, Intel had to abandon this design approach when it introduced its Pentium series. Early versions of the mobile Pentium required TAB mounting (also used in LCD manufacturing), which initially limited the number of companies capable of supplying notebooks, but Intel did eventually migrate to more standard chip packaging. One limitation of notebooks has always been the difficulty of upgrading the processor, a common attribute of desktops. Intel tried to solve this problem with the introduction of the MMC for mobile computing. The MMC was a standard module upon which the CPU and external cache memory could sit. It gave notebook buyers the potential to upgrade their CPU at a later date, eased the manufacturing process somewhat, and was also used in some cases to skirt U.S. import duties, as the CPU could be added to the chassis after it arrived in the U.S. Intel stuck with the MMC for a few generations but ultimately could not maintain the appropriate speed and data integrity to the memory subsystem through the MMC connector.
Improved liquid crystal displays, in particular active-matrix TFT (Thin-Film Transistor) LCD technology. Early laptop screens were black-and-white, blue-and-white, or grayscale STN (Super-Twisted Nematic) passive-matrix LCDs prone to heavy shadows, ghosting, and blurry movement (some portable computer screens were sharper monochrome plasma displays, but these drew too much current to be powered by batteries). Color STN screens were used for some time, although their viewing quality was poor. By about 1991, two new color LCD technologies hit the mainstream market in a big way: dual-scan STN (DSTN) and TFT. DSTN screens solved many of the viewing problems of STN at a very affordable price, and TFT screens offered excellent viewing quality, although initially at a steep price. DSTN continued to offer a significant cost advantage over TFT until the mid-1990s, when the cost difference dropped to the point that DSTN was no longer used in notebooks. Improvements in production technology meant displays became larger and sharper, had higher native resolutions and faster response times, and could display color with great accuracy, making them an acceptable substitute for a traditional CRT monitor.


Improved storage technology. Early laptops and portables had only floppy disk drives. As thin, high-capacity hard disk drives with higher reliability, better shock resistance, and lower power consumption became available, users could store their work on laptop computers and take it with them. The 3.5" HDD was created initially in response to the needs of notebook designers, who needed smaller, lower-power products. With continuing pressure to shrink notebook size even further, the 2.5" HDD was introduced. The One Laptop Per Child (OLPC) machine and other newer laptops use flash memory (a non-volatile, non-mechanical storage device) instead of a mechanical hard disk.
Improved connectivity. Internal modems and standard serial, parallel, and PS/2 ports on IBM PC-compatible laptops made it easier to work away from home; the addition of network adapters and, from 1997, USB, as well as, from 1999, Wi-Fi, made laptops as easy to use with peripherals as a desktop computer. Many newer laptops are also available with built-in 3G broadband wireless modems.

7) HISTORY OF PERSONAL COMPUTERS

by engr. AFAN BAHADUR KHAN



This article covers the history of the personal computer. A personal computer is intended for individual use, as opposed to a mainframe, where the end user's requests are filtered through operating staff, or a time-sharing system, in which one large processor is shared by many individuals. After the development of the microprocessor, individual personal computers were low enough in cost that they eventually became affordable consumer goods. Early personal computers - generally called microcomputers - were often sold in electronic kit form and in limited numbers, and were of interest mostly to hobbyists and technicians.


ETYMOLOGY



An early use of the term "personal computer" appeared in a November 3, 1962, New York Times article reporting John W. Mauchly's vision of future computing as detailed at a recent meeting of the American Institute of Industrial Engineers. Mauchly stated, "There is no reason to suppose the average boy or girl cannot be master of a personal computer".

Six years later a manufacturer took the risk of referring to their product this way, when Hewlett Packard advertised their "Powerful Computing Genie" as "The New Hewlett Packard 9100A personal computer". This advertisement was deemed too extreme for the target audience and replaced with a much drier ad for the HP 9100A programmable calculator.

Over the next seven years the phrase gained enough recognition that when Byte magazine published its first edition, it referred to its readers as being "[in] the personal computing field", and Creative Computing defined the personal computer as a "non-(time)shared system containing sufficient processing power and storage capabilities to satisfy the needs of an individual user." Two years later, when what Byte was to call the "1977 Trinity" of preassembled small computers hit the market, the Apple II and the PET 2001 were advertised as personal computers, while the TRS-80 was described as a microcomputer used for household tasks, including "personal financial management". By 1979 over half a million microcomputers had been sold, and the youth of the day had a new concept of the personal computer.




THE BEGINNINGS OF THE PERSONAL COMPUTER INDUSTRY



Kenbak-1


The Kenbak-1 is considered by the Computer History Museum to be the world's first personal computer. It was designed by John Blankenbaker of Kenbak Corporation in 1970 and first sold in early 1971. Unlike a modern personal computer, the Kenbak-1 was built from small-scale integrated circuits and did not use a microprocessor. The system first sold for US$750. Only around 40 machines were ever built and sold. In 1973, production of the Kenbak-1 stopped as Kenbak Corporation folded.

With only 256 bytes of memory, an 8-bit word size, and input and output restricted to lights and switches, the Kenbak-1 was most useful for learning the principles of programming but not capable of running application programs.



Datapoint 2200




A programmable terminal called the Datapoint 2200 is the earliest known device that bears some significant resemblance to the modern personal computer, with a screen, keyboard, and program storage. It was made by CTC (now known as Datapoint) in 1970 and was a complete system in a small case bearing the approximate footprint of an IBM Selectric typewriter. The system's CPU was constructed from a variety of discrete components, although the company had commissioned Intel to develop a single-chip processing unit; there was a falling out between CTC and Intel, and the chip Intel had developed wasn't used. Intel soon released a modified version of that chip as the Intel 8008, the world's first 8-bit microprocessor. The needs and requirements of the Datapoint 2200 therefore determined the nature of the 8008, upon which all successive processors used in IBM-compatible PCs were based. Additionally, the design of the Datapoint 2200's multi-chip CPU and the final design of the Intel 8008 were so similar that the two are largely software-compatible; therefore, the Datapoint 2200, from a practical perspective, can be regarded as if it were indeed powered by an 8008, which makes it a strong candidate for the title of "first microcomputer" as well.



Micral N


The French company R2E was formed by two former engineers of the Intertechnique company to sell their Intel 8008-based microcomputer design, the Micral N. The system was originally developed at the Institut National de la Recherche Agronomique to automate hygrometric measurements. It ran at 500 kHz, included 16 kB of memory, and sold for 8,500 francs, about US$1,300.

A bus, called Pluribus, was introduced that allowed the connection of up to 14 boards. Boards for digital I/O, analog I/O, memory, and floppy disks were available from R2E. The Micral operating system was initially called Sysmic and was later renamed Prologue.

R2E was absorbed by Groupe Bull in 1978. Although Groupe Bull continued the production of Micral computers, it was not interested in the personal computer market, and Micral computers were mostly confined to highway toll gates (where they remained in service until 1992) and similar niche markets.




Xerox Alto and Star




The Xerox Alto, developed at Xerox PARC in 1973, was the first computer to use a mouse, the desktop metaphor, and a graphical user interface (GUI), concepts first introduced by Douglas Engelbart while at SRI International. It was the first example of what would today be recognized as a complete personal computer.

In 1981, Xerox Corporation introduced the Xerox Star workstation, officially known as the "8010 Star Information System". Drawing upon its predecessor, the Xerox Alto, it was the first commercial system to incorporate various technologies that today have become commonplace in personal computers, including a bit-mapped display, a windows-based graphical user interface, icons, folders, mouse, Ethernet networking, file servers, print servers and e-mail. It also included a programming language system called Smalltalk.

While its use was limited to the engineers at Xerox PARC, the Alto had features years ahead of its time. Both the Xerox Alto and the Xerox Star would inspire the Apple Lisa and the Apple Macintosh.


IBM 5100



The IBM 5100 was a desktop computer introduced in September 1975, six years before the IBM PC. It was the evolution of a prototype called SCAMP (Special Computer APL Machine Portable) that IBM demonstrated in 1973. In January 1978 IBM announced the IBM 5110, its larger cousin. The 5100 was withdrawn in March 1982.

When the PC was introduced in 1981, it was originally designated as the IBM 5150, putting it in the "5100" series, though its architecture wasn't directly descended from the IBM 5100.



Altair 8800





Development of the single-chip microprocessor was the gateway to the popularization of cheap, easy to use, and truly personal computers. It was only a matter of time before one such design was able to hit a sweet spot in terms of pricing and performance, and that machine is generally considered to be the Altair 8800, from MITS, a small company that produced electronics kits for hobbyists.

The Altair was introduced in a Popular Electronics magazine article in the January 1975 issue. In keeping with MITS's earlier projects, the Altair was sold in kit form, although a relatively complex one consisting of four circuit boards and many parts. Priced at only $400, the Altair tapped into pent-up demand and surprised its creators when it generated thousands of orders in the first month. Unable to keep up with demand, MITS eventually sold the design after about 10,000 kits had shipped.

The introduction of the Altair spawned an entire industry based on its basic layout and internal design. New companies like Cromemco started up to supply add-on kits, while Microsoft was founded to supply a BASIC interpreter for the systems. Soon after, a number of complete "clone" designs, typified by the IMSAI 8080, appeared on the market. This led to a wide variety of systems based on the S-100 bus introduced with the Altair, machines of generally improved performance, quality, and ease of use.

The Altair and its early clones were relatively difficult to use. The machines contained no operating system in ROM, so starting one up required a machine-language program to be entered by hand via front-panel switches, one memory location at a time. The program was typically a small driver for an attached paper-tape reader, which would then be used to read in another "real" program. Later systems added bootstrapping code to improve this process, and the machines became almost universally associated with the CP/M operating system, loaded from floppy disk.
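To make the front-panel loading process concrete, here is a minimal Python sketch (an illustration only, not the Altair's actual bootstrap code) that simulates depositing a short machine-language routine into memory one address at a time, much as an operator would toggle it in with the panel switches; the memory size, helper names, and byte values below are assumptions made for the example.

    # Toy simulation of loading a program through a front panel,
    # one memory location at a time (illustrative, not real Altair code).
    MEMORY_SIZE = 256                 # toy memory, far smaller than the Altair's address space
    memory = [0x00] * MEMORY_SIZE

    def deposit(address, value):
        """Store one byte at the given address, as a DEPOSIT switch would."""
        memory[address] = value & 0xFF

    def examine(address):
        """Read back a byte, as the panel lights would show after EXAMINE."""
        return memory[address]

    # Hypothetical three-byte routine standing in for the paper-tape loader.
    bootstrap = [0x3E, 0x01, 0x76]

    for offset, byte in enumerate(bootstrap):
        deposit(offset, byte)         # set address, set data, press DEPOSIT

    # Verify the toggled-in bytes before "pressing RUN".
    print([hex(examine(a)) for a in range(len(bootstrap))])

The point of the sketch is only to show how tedious and error-prone the process was: every byte of even a tiny loader had to be entered and verified by hand before the machine could do anything useful.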

The Altair created a new industry of microcomputers and computer kits, with many others following, such as a wave of small business computers in the late 1970s based on the Intel 8080, Zilog Z80 and Intel 8085 microprocessor chips. Most ran the CP/M-80 operating system developed by Gary Kildall at Digital Research. CP/M-80 was the first popular microcomputer operating system to be used by many different hardware vendors, and many software packages were written for it, such as WordStar and dBase II.



Homebrew Computer Club



Although the Altair spawned an entire business, another side effect it had was to demonstrate that the microprocessor had so reduced the cost and complexity of building a microcomputer that anyone with an interest could build their own. Many such hobbyists met and traded notes at the meetings of the Homebrew Computer Club (HCC) in Silicon Valley. Although the HCC was relatively short-lived, its influence on the development of the modern PC was enormous.

Members of the group complained that microcomputers would never become commonplace if they still had to be built up from parts, like the original Altair, or even assembled from various add-ons to turn the machine into a useful system. What they felt was needed was an all-in-one system. Out of this desire came the Sol-20 computer, which placed an entire S-100 system - QWERTY keyboard, CPU, display card, memory, and ports - into an attractive single box. The systems were packaged with a cassette tape interface for storage and a 12" monochrome monitor. Complete with a copy of BASIC, the system sold for US$2,100. About 10,000 Sol-20 systems were sold.

Although the Sol-20 was the first all-in-one system that we would recognize today, the basic concept was already rippling through other members of the group, and interested external companies.



PET





Chuck Peddle designed the Commodore PET (short for Personal Electronic Transactor) around his MOS 6502 processor. It was essentially a single-board computer with a new display chip (the MOS 6545) driving a small built-in monochrome monitor with 40×25 character graphics. The processor card, keyboard, monitor and cassette drive were all mounted in a single metal case. In 1982, Byte referred to the PET design as "the world's first personal computer".

The PET shipped in two models: the 2001-4 with 4 kB of RAM and the 2001-8 with 8 kB. The machine also included a built-in Datassette for data storage, located on the front of the case, which left little room for the keyboard. The 2001 was announced in June 1977 and the first 100 units were shipped in mid-October 1977. However, the machines remained back-ordered for months, and to ease deliveries Commodore eventually canceled the 4 kB version early the next year.

Although the machine was fairly successful, there were frequent complaints about the tiny calculator-like keyboard, often referred to as a "Chiclet keyboard" due to the keys' resemblance to the popular gum candy. This was addressed in upgraded "dash N" and "dash B" versions of the 2001, which put the cassette outside the case and included a much larger keyboard with full-travel, non-click keys. Internally, a newer and simpler motherboard was used, along with an upgrade in memory to 8, 16, or 32 KB, the models being known as the 2001-N-8, 2001-N-16, and 2001-N-32, respectively.

The PET was the least successful of the 1977 Trinity machines, with under 1 million sales.



Apple II






Steve Wozniak (known as "Woz"), a regular visitor to Homebrew Computer Club meetings, designed the single-board Apple I computer and first demonstrated it there. With specifications in hand and an order for 100 machines at US$666.66 each from the Byte Shop, Woz and his friend Steve Jobs founded Apple Computer.

About 200 of the machines sold before the company announced the Apple II as a complete computer. It had color graphics, a full QWERTY keyboard, and internal slots for expansion, all mounted in a streamlined, well-built plastic case. The monitor and I/O devices were sold separately. The original Apple II operating system was only the built-in BASIC interpreter contained in ROM. Apple DOS was added to support the diskette drive; the last version was Apple DOS 3.3.

Its higher price and lack of floating-point BASIC, along with a lack of retail distribution sites, caused it to lag in sales behind the other Trinity machines until 1979, when it surpassed the PET; it was pushed into fourth place again when Atari introduced its popular Atari 8-bit systems.

In spite of slow sales, the Apple II's lifetime was much greater than other machines and it ended up being the best seller among them; more than 4 million Apple IIs were shipped by the end of its production in 1993.



TRS-80





Tandy Corporation introduced the TRS-80, retroactively known as the Model I as improved models were introduced. The Model I combined the motherboard and keyboard into one unit, with a separate monitor and power supply. Although the PET and especially the Apple II offered certain features that were greatly advanced in comparison, Tandy's 3,000-plus Radio Shack storefronts ensured that it would have widespread distribution that neither Apple nor Commodore could touch.

The Model I used a Zilog Z80 processor clocked at 1.77 MHz (later models shipped with a Z80A). The basic model originally came with 4 kB of RAM, later 16 kB. Its other strong features were its full-stroke QWERTY keyboard, small size, well-written floating-point BASIC, and the inclusion of a monitor and tape deck, all for US$599, a savings of US$600 over the Apple II. Its major drawback was the massive RF interference it caused in surrounding electronics, which caused it to run afoul of newer FCC regulations - a problem solved only by the Model I's retirement in favor of the TRS-80 Model III.

About 1.5 million of the TRS-80 line were sold before their cancellation in 1985.



Atari 400/800 home computers


Atari was a well-known brand in the late 1970s, both for hit arcade games like Pong and for the hugely successful Atari VCS game console. Realizing that the VCS would have a limited lifetime in the market before a technically advanced competitor came along, Atari decided that they would be that competitor and started work on a new console design that was much more advanced.

While these designs were being worked on, the Trinity machines hit the market with considerable fanfare. Atari's management decided to re-purpose the work as home computers instead. Their knowledge of the home market through the VCS resulted in machines that were almost indestructible and just as easy to use as a games machine - simply plug in a cartridge and go. The new machines were first introduced as the 400 and 800 in 1978, but production problems meant widespread sales did not start until the next year.

At the time, the machines offered what was then much higher performance than contemporary designs and a number of graphics and sound features that no other microcomputer could match. They became very popular as a result, quickly eclipsing the Trinity machines in sales. In spite of a promising start with about 600,000 sold by 1981, the looming price war left Atari in a bad position. They were unable to compete effectively with Commodore, and only about 2 million machines were produced by the end of their production run.



TI-99


Texas Instruments (TI), at the time the world's largest chip manufacturer, decided to enter the home computer market with the Texas Instruments TI-99/4A. Announced long before its arrival, the machine was expected by most industry observers to wipe out all competition - on paper its performance was untouchable, and TI had enormous cash reserves and development capability.

When it was released in late 1979, TI took a somewhat slow approach to introducing it, initially focusing on schools. Contrary to earlier predictions, the TI-99's limitations meant it was not the giant-killer everyone expected, and a number of its design features were highly controversial. A total of 2.8 million units were shipped before the TI-99/4A was discontinued in March 1984.



VIC-20 and Commodore 64




Realizing that the PET could not easily compete with color machines like the Apple II and Atari, Commodore introduced the VIC-20 to address the home market. Its tiny 4 kB memory and relatively limited display in comparison to those machines were offset by a low and ever-falling price. Millions of VIC-20s were sold.

The best-selling personal computer of all time was released by Commodore International in 1982: the Commodore 64 (C64) sold over 17 million units before the end of its production. The C64 name derived from its 64 KB of RAM, and the machine also came with a side-mounted ROM cartridge slot. It used the MOS Technology 6510 microprocessor; MOS Technology, Inc. was then owned by Commodore.



The IBM PC





IBM responded to the success of the Apple II with the IBM PC, released in August 1981. Like the Apple II and the S-100 systems, it was based on an open, card-based architecture, which allowed third parties to develop for it. It used the Intel 8088 CPU running at 4.77 MHz, containing 29,000 transistors. The first model used an audio cassette for external storage, though there was an expensive floppy disk option. The cassette option was never popular and was removed in the PC XT of 1983. The XT added a 10 MB hard drive in place of one of the two floppy disk drives and increased the number of expansion slots from 5 to 8. While the original PC design could accommodate only up to 64 KB of RAM on the main board, the architecture was able to accommodate up to 640 KB of RAM, with the rest on cards. Later revisions of the design increased the limit to 256 KB on the main board.

The IBM PC typically came with PC-DOS, an operating system based upon Gary Kildall's CP/M-80 operating system. In 1980, IBM approached Digital Research, Kildall's company, for a version of CP/M for its upcoming IBM PC. Kildall's wife and business partner, Dorothy McEwen, met with the IBM representatives, who were unable to negotiate a standard non-disclosure agreement with her. IBM turned instead to Bill Gates, who was already providing the ROM BASIC interpreter for the PC. Gates offered to provide 86-DOS, developed by Tim Paterson of Seattle Computer Products. IBM rebranded it as PC-DOS, while Microsoft sold variations and upgrades as MS-DOS.

The impact of the Apple II and the IBM PC was fully demonstrated when Time magazine named the home computer its "Machine of the Year" in place of a Person of the Year for 1982 (January 3, 1983, "The Computer Moves In"). It was the first time in the history of the magazine that an inanimate object was given this award.



APPLE LISA & MACINTOSH





In 1983 Apple Computer introduced the first mass-marketed microcomputer with a graphical user interface, the Lisa. The Lisa ran on a Motorola 68000 microprocessor and came equipped with 1 megabyte of RAM, a 12-inch (300 mm) black-and-white monitor, dual 5¼-inch floppy disk drives, and a 5-megabyte ProFile hard drive. The Lisa's slow operating speed and high price (US$10,000), however, led to its commercial failure. It also led to Steve Jobs's decision to move to the Apple Macintosh team.

Drawing upon its experience with the Lisa, Apple launched the Macintosh in 1984. Its debut was announced with a single airing, during Super Bowl XVIII, of the now-famous television commercial "1984", directed by Ridley Scott and based on George Orwell's novel 1984. The intention of the ad was to equate Big Brother with the IBM PC, and a nameless female action hero (portrayed by Anya Major) with the Macintosh.

The Mac was the first successful mouse-driven computer with a graphical user interface, or 'WIMP' (Windows, Icons, Menus, and Pointers). Based on the Motorola 68000 microprocessor, the Macintosh included many of the Lisa's features at a price of US$2,495. The Macintosh was initially introduced with 128 KB of RAM, and later that year a 512 KB model became available. To reduce costs compared to the Lisa, the year-younger Macintosh had a simplified motherboard design, no internal hard drive, and a single 3.5" floppy drive. Applications that came with the Macintosh included MacPaint, a bit-mapped graphics program, and MacWrite, which demonstrated WYSIWYG word processing.

While not an immediate success upon its release, the Macintosh was a successful personal computer for years to come. This was due particularly to the introduction of desktop publishing in 1985 through Apple's partnership with Adobe, which brought the LaserWriter printer and Aldus PageMaker (later Adobe PageMaker) to personal computer users. After Steve Jobs resigned from Apple in 1985 to start NeXT, a number of different Macintosh models, including the Macintosh Plus and Macintosh II, were released to a great degree of success. The entire Macintosh line of computers was IBM's major competition up until the early 1990s.


GUIs spread



In the Commodore world, GEOS was available on the Commodore 64 and Commodore 128. Later, a version was available for PCs running DOS. It could be used with a mouse or a joystick as a pointing device, and came with a suite of GUI applications. Commodore's later product line, the Amiga platform, ran a GUI operating system by default. The Amiga laid the blueprint for future development of personal computers with its groundbreaking graphics and sound capabilities. Byte magazine called it "the first multimedia computer... so far ahead of its time that almost nobody could fully articulate what it was all about."







In 1985, the Atari ST, also based on the Motorola 68000 microprocessor, was introduced with the first color GUI in the Atari TOS. It could be modified to emulate the Macintosh using the third-party Spectre GCR device.

In 1987, Acorn launched the Archimedes range of high-performance home computers in Europe and Australasia. Based around their own 32-bit ARM RISC processor, the systems initially shipped with a GUI OS called Arthur. In 1989, Arthur was superseded by a multi-tasking GUI-based operating system called RISC OS. By default, the mice used on these computers had three buttons.