
Tuesday, June 22, 2010

Dance Dance Revolution (Game)

Dance Dance Revolution, abbreviated DDR, and previously known as Dancing Stage in Europe and Australasia, is a music video game series produced by Konami. Introduced in Japan in 1998 as part of the Bemani series, and released in North America and Europe in 1999, Dance Dance Revolution is the pioneering series of the rhythm and dance genre in video games. Players stand on a "dance platform" or stage and hit colored arrows laid out in a cross with their feet to musical and visual cues. Players are judged by how well they time their dance to the patterns presented to them and are allowed to choose more music to play to if they receive a passing score.
Dance Dance Revolution has received much critical acclaim for its originality and staying power in the video game market. There have been dozens of arcade-based releases across several countries and hundreds of home video game console releases. The series has promoted a music library of original songs produced by Konami's in-house artists and an eclectic set of licensed music from many different genres. The series has also inspired many clones of its gameplay and a global fan base of millions, who have created simulators of the game to which they contribute original music and "simfiles", collections of dance patterns for a specific song. DDR is generally considered the first "machine dance" game, and was followed by games such as Pump It Up by Andamiro and In the Groove by Roxor. DDR celebrated its 10th anniversary on November 21, 2008.

Gameplay

The dance stage is divided into nine sections; the four in the cardinal directions contain pressure sensors that detect steps.
The core gameplay involves the player moving his or her feet to a set pattern, stepping in time to the general rhythm or beat of a song. Arrows are subdivided rhythmically into 1/4 notes, 1/8 notes, and so on (with differing color schemes for each), up to about 1/32 notes. During normal gameplay, arrows scroll upwards from the bottom of the screen and pass over a set of stationary arrows near the top (referred to as the "guide arrows" or "receptors", officially known as the Step Zone). When the scrolling arrows overlap the stationary ones, the player must step on the corresponding arrows on the dance platform, and the player is given a judgement for their accuracy (Marvelous, Perfect, Great, Good, Almost (close miss), Boo (complete miss)). Longer green and yellow arrows, referred to as "freeze arrows", must be held down for their entire length, producing either an "O.K." if successful or an "N.G." (no good) if not. Dance Dance Revolution X introduced songs with Shock Arrows, walls of arrows with lightning effects which must be avoided, scored in the same way as freezes (O.K./N.G.). If they are stepped on, an N.G. is awarded, the lifebar decreases, and the steps become hidden for a short period of time.
Successfully hitting the arrows in time with the music fills the "Dance Gauge", or life bar, while failure to do so drains it. If the Dance Gauge is fully depleted during gameplay, the player fails the song, usually resulting in a game over. Otherwise, the player is taken to the Results Screen, which rates the player's performance with a letter grade and a numerical score, among other statistics. The player may then be given a chance to play again, depending on the settings of the particular machine (the limit is usually 3-5 songs per game). Many home versions include an event mode in which an unlimited number of songs can be played. Some DDR games also offer an option to use two pads at once, which makes the game harder to play but increases the number of moves that can be incorporated into songs.
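To make the judgement and life-bar mechanics concrete, here is a minimal Python sketch of a timing-window judge and a dance gauge. The window sizes and gauge increments are illustrative assumptions, not Konami's actual values:

```python
# A minimal sketch of DDR-style timing judgement and a dance gauge.
# The timing windows and gauge deltas below are made up for illustration.

JUDGEMENTS = [  # (max timing error in seconds, name, gauge change)
    (0.017, "Marvelous", +2),
    (0.033, "Perfect",   +2),
    (0.092, "Great",     +1),
    (0.142, "Good",       0),
    (0.225, "Almost",    -4),
]

def judge(step_time, arrow_time):
    """Rate a step by how far it landed from the arrow's target beat."""
    error = abs(step_time - arrow_time)
    for window, name, delta in JUDGEMENTS:
        if error <= window:
            return name, delta
    return "Boo", -8  # complete miss

class DanceGauge:
    def __init__(self, size=100):
        self.size = size
        self.value = size // 2

    def apply(self, delta):
        self.value = max(0, min(self.size, self.value + delta))
        return self.value > 0  # False means the song is failed

gauge = DanceGauge()
name, delta = judge(step_time=10.02, arrow_time=10.00)  # 20 ms off the beat
print(name, gauge.apply(delta))  # -> Perfect True
```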

Difficulty
Depending on the version of the game, dance steps are broken into various levels of difficulty, often distinguished by color. Depending on the era, difficulty is loosely separated into three to five categories:
DDR 1st Mix started out with only the Basic difficulty (though it was not labeled as such) and introduced the foot-plus-name rating system. The highest difficulties were 6-foot (Genuine) on Singles and 7-foot (Paramount) on Doubles. DDR 2nd Mix added the Another difficulty and increased the highest difficulty to 8-foot (Exorbitant). DDR 3rd Mix added the SSR (Step Step Revolution) mode, which could only be accessed via an input code and was played on Flat (all arrows the same color) by default. The SSR mode was eliminated in 3rdMix Plus and USA, and the Maniac routines were folded back into the regular game. The highest difficulty was increased to 9-foot (Catastrophic). DDR 4th Mix removed the difficulty names altogether, simply ordering the charts by difficulty. DDR 4th Mix Plus replaced some stepcharts with newer, harder ones (later known as Challenge steps on console versions).
Beginning in DDRMAX, a "Groove Radar" was introduced, showing how difficult a particular sequence is in various categories, such as the maximum density of steps, how many jumps the chart contains, freeze arrows, and so on. Excluding the U.S. home version, the foot rating was removed in favor of the Groove Radar. DDRMAX2 re-added the foot ratings and introduced an official Oni/Challenge difficulty which could only be accessed in Oni/Challenging Mode (Kakumei is the only Oni chart accessible solely by earning an AA on MaxX Unlimited as an Extra Stage). That mix also increased the maximum difficulty from a 9-footer to a 10-footer, and some songs were re-rated, such as Drop Out and End of the Century, which rose from 8-footers to 9-footers. In DDR Extreme, flashing 10-footers appeared on songs that the producers felt exceeded the 10-foot rating. DDR Extreme also added a new Beginner difficulty, and made the Oni/Challenge charts freely accessible except on the Extra Stage.
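As a rough illustration of the kind of statistics the Groove Radar reports, the following Python sketch computes simplified density, jump, and freeze counts from a chart represented as timed steps. Konami's actual radar formulas are unpublished; these metrics and the chart format are assumptions:

```python
# Simplified Groove Radar-style statistics from a step chart, here a
# list of (time_in_seconds, set_of_panels) entries. These only mirror
# the categories the text describes; they are not Konami's formulas.

def radar_stats(steps, freezes, song_length):
    density = len(steps) / song_length            # average steps per second
    jumps = sum(1 for _, panels in steps if len(panels) >= 2)
    # peak density: most steps falling inside any one-second window
    times = [t for t, _ in steps]
    peak = max(sum(1 for u in times if t <= u < t + 1.0) for t in times)
    return {"density": density, "peak": peak,
            "jumps": jumps, "freezes": len(freezes)}

chart = [(0.0, {"L"}), (0.5, {"R"}), (1.0, {"U", "D"}), (1.5, {"L"})]
print(radar_stats(chart, freezes=[(2.0, "R", 1.0)], song_length=90.0))
```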
DDR SuperNOVA, while retaining the foot ratings, removed the flashing 10-foot rating from the songs that had it, for unknown reasons. DDR SuperNOVA2 later ditched the foot rating and replaced it with bars. However, all songs from the previous games remained rated identically, with very few changes to certain song difficulties, such as Xepher Challenge being changed from a 10-bar to a 9-bar.

On Dance Dance Revolution X, the foot/bar rating system was given its first major overhaul, now ranking songs on a scale of 1-20, with the first 10 levels represented by yellow bars and the second 10 by red blocks stacked on top of them. All songs from previous versions were re-rated on the new scale, including the flashing 10s, revealing for the first time how those songs actually compare to one another in difficulty. A rough rule of thumb for converting old ratings is to multiply the previous rating by about 1.3 to 1.5 and round up. However, there are some dramatic exceptions: Bag (Expert - 10) is listed as Level 12, The Least 100 Seconds (Expert - 8) and Paranoia Hades (Difficult - 8) are listed as Level 14, and Arrabbiata (Expert - 9) is listed as Level 16.
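The rule of thumb above can be expressed as a one-line calculation; the helper below is purely illustrative, and, as the exceptions show, the real re-ratings were done by hand:

```python
# Rough conversion from old foot ratings to the DDR X 1-20 scale, per
# the rule of thumb above. Outliers like Paranoia Hades (8 -> 14) show
# that the actual re-ratings did not follow any fixed formula.
import math

def estimate_x_rating(old_foot_rating, factor=1.4):
    return math.ceil(old_foot_rating * factor)

for old in (6, 8, 10):
    print(old, "->", estimate_x_rating(old))  # 6 -> 9, 8 -> 12, 10 -> 14
```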
The highest known difficulty on the new scale is 19, held by the Challenge charts, both single and double, of Valkyrie Dimension from the arcade version of Dance Dance Revolution X2.

Saturday, June 19, 2010

Street Fighter (Game)

Street Fighter is a 1987 arcade game developed by Capcom. It is the first competitive fighting game produced by the company and the inaugural game in the Street Fighter series. While it did not achieve the same worldwide popularity as its sequel Street Fighter II when it was first released, the original Street Fighter introduced some of the conventions made standard in later games, such as the six-button controls and the use of command-based special techniques.
A port for the TurboGrafx-CD console was released under the title Fighting Street in 1988. This same version was later re-released for the Wii's Virtual Console in North America on November 2, 2009, and in the PAL region on November 6, 2009.
Gameplay
The player competes in a series of one-on-one matches against computer-controlled opponents or in a single match against another player. Each match consists of up to three rounds in which the player must defeat the opponent in less than 30 seconds. If a round ends before a fighter is knocked out, the fighter with the greater amount of energy left is declared the round's winner. The player must win two rounds to defeat the opponent and proceed to the next battle. If the third round ends in a tie, then the computer-controlled opponent wins by default, or both players lose. During the single-player mode, the player can continue after losing and fight against the opponent to whom they lost. Likewise, a second player can interrupt a single-player match and challenge the first player to a new match.
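The round rules described above amount to a small decision procedure. Below is a hedged Python sketch of that logic; the function name, energy scale, and tie-handling details are assumptions:

```python
# A sketch of the round-resolution rules described above: knockout wins,
# otherwise the fighter with more energy when the 30-second timer runs
# out, with third-round draws going against the human player.

def resolve_round(p1_energy, p2_energy, time_left):
    if p1_energy <= 0:
        return "p2"                  # knockout
    if p2_energy <= 0:
        return "p1"
    if time_left > 0:
        return None                  # round still in progress
    if p1_energy > p2_energy:        # time over: compare remaining energy
        return "p1"
    if p2_energy > p1_energy:
        return "p2"
    return "draw"                    # a tie favors the CPU opponent

print(resolve_round(40, 25, 0))  # -> p1
```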
In the deluxe version of the arcade game, the player's controls consist of a standard eight-way joystick and two large mechatronic pads for punches and kicks that returned an analog value depending on how hard the player struck them. An alternate version was released that replaces the two pads with an array of six attack buttons: three punch buttons and three kick buttons of differing speed and strength (Light, Medium and Heavy).
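A sketch of how such an analog pad reading might be bucketed into the three attack strengths follows; the thresholds and the 8-bit reading range are assumptions, since the cabinet's actual electronics are not documented here:

```python
# Mapping a pressure-sensitive pad's analog reading to the three attack
# strengths. Thresholds and the 0..255 range are illustrative only.

def attack_strength(pressure, maximum=255):
    """Map an analog pad reading (0..maximum) to Light/Medium/Heavy."""
    ratio = pressure / maximum
    if ratio < 0.33:
        return "Light"
    if ratio < 0.66:
        return "Medium"
    return "Heavy"

print(attack_strength(200))  # -> Heavy
```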
The player uses the joystick to move towards or away from an opponent, as well as to jump, crouch and defend against an opponent's attacks. By using the attack buttons or pads in combination with the joystick, the player can perform a variety of attacks from a standing, jumping or crouching position. There are also three special techniques which can only be performed by inputting a specific series of joystick and button inputs. These techniques are the "Psycho Fire" (波動拳 Hadōken, "Surge Fist"), the "Dragon Punch" (昇龍拳 Shōryūken, "Rising Dragon Fist") and the "Hurricane Kick" (竜巻旋風脚 Tatsumaki Senpū Kyaku, "Tornado Whirlwind Kick"). Unlike in subsequent Street Fighter sequels and other later fighting games, the specific commands for these special moves were not given in the arcade game's instruction card, which instead encouraged players to discover the techniques on their own.
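Command-based special moves are typically detected by matching recent inputs against a motion pattern. The Python sketch below shows one common approach, a time-windowed input buffer checked for the quarter-circle-forward-plus-punch motion; the notation, window length, and matching rule are illustrative assumptions, not Capcom's original code:

```python
# A minimal sketch of command-input detection: keep a short history of
# (time, input) events and look for the Hadoken motion (down,
# down-forward, forward, then punch) within a small time window.

from collections import deque

HADOKEN = ["D", "DF", "F", "P"]  # quarter-circle forward, then punch

class CommandBuffer:
    def __init__(self, window=0.5):
        self.window = window
        self.events = deque()

    def push(self, t, symbol):
        self.events.append((t, symbol))
        while self.events and t - self.events[0][0] > self.window:
            self.events.popleft()  # drop inputs older than the window
        return self.matches(HADOKEN)

    def matches(self, pattern):
        # check the pattern appears in order (other inputs may interleave)
        it = iter(sym for _, sym in self.events)
        return all(step in it for step in pattern)

buf = CommandBuffer()
for t, sym in [(0.00, "D"), (0.08, "DF"), (0.16, "F"), (0.24, "P")]:
    fired = buf.push(t, sym)
print("Hadoken!" if fired else "no move")  # -> Hadoken!
```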

Characters
The player takes control of a Japanese martial artist named Ryu, who competes in an international martial arts tournament to prove his strength. The second player takes control of Ryu's former training partner and rival Ken, who challenges Ryu in the game's 2-player matches. Normally the player controls Ryu in the single-player mode; however, if the player controlling Ken defeats Ryu in a 2-player match, the winning player plays the remainder of the game as Ken. The difference between the characters is purely aesthetic, as both have the same moves and techniques.
The single-player mode consists of a series of battles against ten opponents from five different nations. At the beginning of the game, the player can choose the country where their first match will take place: the available choices are Japan or the US, as well as China or England (depending on the game's configuration). The player then fights two fighters from the chosen country before proceeding to the next country. In addition to the regular battles, there are also two types of bonus games which the player can play for additional points: a brick-breaking bonus game and a table-breaking bonus game. After defeating the initial eight characters, the player travels to Thailand to fight the final two opponents.
The first eight computer-controlled opponents are: from Japan, Retsu, an expelled Shorinji Kempo instructor, and Geki, a claw-wielding ninja; from the United States, Joe, a kickboxer and underground martial arts champion, and Mike, a former heavyweight boxer who once killed an opponent in the ring; from China, Lee, an expert in Chinese martial arts, and Gen, an elderly professional killer who has developed his own murderous martial art style; and from England, Birdie, a tall bouncer who uses a combination of wrestling and boxing techniques, and Eagle, a well-dressed bodyguard of a wealthy family who uses Kali sticks. After the first eight challengers are defeated, the player is taken to Thailand for the final two adversaries: Adon, a deadly Muay Thai master, and his mentor Sagat, the reputed "Emperor of Muay Thai" and the game's final opponent.

Development
Street Fighter was directed by Takashi Nishiyama (who is credited as "Piston Takashi" in the game) and planned by Hiroshi Matsumoto (credited as "Finish Hiroshi"), who both previously worked on the overhead beat 'em up Avengers. The two men left Capcom after the production of the game and were employed by SNK, developing most of that company's fighting game series (including the Fatal Fury and Art of Fighting series). The duo later moved to Dimps and worked on Street Fighter IV with Capcom. Keiji Inafune, best known for his artwork in Capcom's Mega Man franchise, got his start at the company by designing and illustrating the character portraits in Street Fighter.

Arcade Variants
Two different arcade cabinets were sold for the game: a "Regular" version (sold as a tabletop cabinet in Japan and as an upright overseas) that featured the same six-button configuration later used in Street Fighter II, and a "Deluxe" cabinet that featured two pressure-sensitive rubber pads. The pressure-sensitive pads determined the strength and speed of the player's attacks based on how hard they were pressed.
In the American and Worldwide versions of the game, Ryu's and Ken's voices were dubbed so that they yelled the names of their moves in English (i.e.: Psycho Fire, Dragon Punch, Hurricane Kick), while all subsequent localized releases left the Japanese voices intact. Street Fighter IV contains both English and Japanese voice acting, although characters from Asia still use Japanese names for certain special moves and super/ultra combos amidst otherwise English dialogue.

Home Versions
Street Fighter was ported under the title Fighting Street in 1988 for the TurboGrafx-CD. This version features an arranged soundtrack. Because no six-button controller was available for the TurboGrafx-16 at the time of its release, the strength level of the attacks was determined by how long either of the action buttons was held. This version was published by NEC Avenue in North America and Hudson Soft in Japan and was developed by Alfa System. The cover artwork featured Mount Rushmore, which was one of the locations in the game. This version was released for the Wii's Virtual Console in Japan on October 6, 2009, in North America on November 2, 2009, and in the PAL regions on November 6, 2009.
Versions of Street Fighter for the Commodore 64, ZX Spectrum, Amstrad CPC, MS-DOS, Amiga and Atari ST were published by U.S. Gold in 1988 in Europe. These ports were developed by Tiertex. The Commodore 64 actually got two versions, published on the same tape/disk: the NTSC (US) version developed by Capcom USA, and the PAL (UK) version by Tiertex. Shortly afterward, Tiertex developed its own unofficial sequel titled Human Killing Machine, which was entirely unrelated to the subsequent official sequel or indeed any other game in the series. This edition of Street Fighter was featured in two compilations: Arcade Muscle and Multimixx 3, both of which featured other U.S. Gold-published ports of Capcom games such as Bionic Commando and 1943: The Battle of Midway.
An emulation of the original arcade version is featured in Capcom Arcade Hits Volume 1 (along with Street Fighter II: Champion Edition) for Windows PC, Capcom Classics Collection Remixed for the PlayStation Portable and Capcom Classics Collection Vol. 2 (along with Super Street Fighter II Turbo) for the PlayStation 2 and Xbox.

Tuesday, June 15, 2010

DirectX

Microsoft DirectX is a collection of application programming interfaces (APIs) for handling tasks related to multimedia, especially game programming and video, on Microsoft platforms. Originally, the names of these APIs all began with Direct, such as Direct3D, DirectDraw, DirectMusic, DirectPlay, DirectSound, and so forth. The name DirectX was coined as a shorthand term for all of these APIs (the X standing in for the particular API names) and soon became the name of the collection. When Microsoft later set out to develop a gaming console, the X was used as the basis of the name Xbox to indicate that the console was based on DirectX technology. The X initial has been carried forward in the naming of APIs designed for the Xbox such as XInput and the Cross-platform Audio Creation Tool (XACT), while the DirectX pattern has been continued for Windows APIs such as Direct2D and DirectWrite.
Direct3D (the 3D graphics API within DirectX) is widely used in the development of video games for Microsoft Windows, Microsoft Xbox, and Microsoft Xbox 360. Direct3D is also used by other software applications for visualization and graphics tasks such as CAD/CAM engineering. As Direct3D is the most widely publicized component of DirectX, it is common to see the names "DirectX" and "Direct3D" used interchangeably.

The DirectX software development kit (SDK) consists of runtime libraries in redistributable binary form, along with accompanying documentation and headers for use in coding. Originally, the runtimes were only installed by games or explicitly by the user. Windows 95 did not launch with DirectX, but DirectX was included with Windows 95 OEM Service Release 2. Windows 98 and Windows NT 4.0 both shipped with DirectX, as has every version of Windows released since. The SDK is available as a free download. While the runtimes are proprietary, closed-source software, source code is provided for most of the SDK samples.
Direct3D 9Ex, Direct3D 10 and Direct3D 11 are only available for Windows Vista and Windows 7 because each of these new versions was built to depend upon the new Windows Display Driver Model (WDDM) that was introduced for Windows Vista. The new Vista/WDDM graphics architecture includes a new video memory manager that supports virtualizing graphics hardware for multiple applications and services, such as the Desktop Window Manager.

History
In late 1994 Microsoft was on the verge of releasing its next operating system, Windows 95. The value consumers would place on the new operating system rested largely on the programs that would be able to run on it. Three Microsoft employees – Craig Eisler, Alex St. John, and Eric Engstrom – were concerned because programmers tended to see Microsoft's previous operating system, MS-DOS, as a better platform for game programming, meaning few games would be developed for Windows 95 and the operating system would not be as much of a success.
DOS allowed direct access to video cards, keyboards, mice, sound devices, and all other parts of the system, while Windows 95, with its protected memory model, restricted access to all of these, working on a much more standardized model. Microsoft needed a way that would let programmers get what they wanted, and they needed it quickly; the operating system was only months away from being released. Eisler (development lead), St. John, and Engstrom (program manager) worked together to fix this problem, with a solution that they eventually named DirectX.

The first version of DirectX was released in September 1995 as the Windows Games SDK. It was the Win32 replacement for the DCI and WinG APIs for Windows 3.1. Simply put, DirectX allowed all versions of Microsoft Windows, starting with Windows 95, to incorporate high-performance multimedia. Eisler wrote about the frenzy to build DirectX 1 through 5 in his blog.
DirectX 2.0 became a component of Windows itself with the releases of Windows 95 OSR2 and Windows NT 4.0 in mid-1996. As Windows 95 was itself still new and few games had been released for it, Microsoft engaged in heavy promotion of DirectX to developers who were generally distrustful of Microsoft's ability to build a gaming platform in Windows. Alex St. John, working as an evangelist for DirectX, staged an elaborate event at the 1996 Computer Game Developers Conference which game developer Jay Barnson described as a Roman theme, including real lions, togas, and something resembling an indoor carnival. It was at this event that Microsoft first introduced Direct3D and DirectPlay, and demonstrated multi-player MechWarrior 2 being played over the Internet.
The DirectX team faced the challenging task of testing each DirectX release against an array of hardware and software. A variety of different graphics cards, audio cards, motherboards, CPUs, input devices, games, and other multimedia applications were tested with each beta and final release. The DirectX team also built and distributed tests that allowed the hardware industry to confirm that new hardware designs and driver releases would be compatible with DirectX.
Prior to DirectX, Microsoft had included OpenGL on their Windows NT platform. At the time, OpenGL required "high-end" hardware and was focused on engineering and CAD uses. Direct3D was intended to be a lightweight partner to OpenGL, focused on game use. As 3D gaming grew, OpenGL evolved to include better support for programming techniques for interactive multimedia applications like games, giving developers a choice between using OpenGL or Direct3D as the 3D graphics API for their applications. At that point a "battle" began between supporters of the cross-platform OpenGL and the Windows-only Direct3D. Incidentally, OpenGL was supported at Microsoft by the DirectX team. Even when a developer chose OpenGL as the 3D graphics API, the other DirectX APIs were often combined with it in computer games, because OpenGL does not include all of DirectX's functionality (such as sound or joystick support).
In a console-specific version, DirectX was used as a basis for Microsoft's Xbox and Xbox 360 console API. The API was developed jointly between Microsoft and Nvidia, who developed the custom graphics hardware used by the original Xbox. The Xbox API is similar to DirectX version 8.1, but is non-updateable like other console technologies. The Xbox was code named DirectXbox, but this was shortened to Xbox for its commercial name.
In 2002 Microsoft released DirectX 9 with support for the use of much longer shader programs than before with pixel and vertex shader version 2.0. Microsoft has continued to update the DirectX suite since then, introducing shader model 3.0 in DirectX 9.0c, released in August 2004.
As of April 2005, DirectShow was removed from DirectX and moved to the Microsoft Platform SDK instead. The DirectX SDK is, however, still required to build the DirectShow samples.

Saturday, June 12, 2010

OpenGL

OpenGL (Open Graphics Library) is a standard specification defining a cross-language, cross-platform API for writing applications that produce 2D and 3D computer graphics. The interface consists of over 250 different function calls which can be used to draw complex three-dimensional scenes from simple primitives. OpenGL was developed by Silicon Graphics Inc. (SGI) in 1992 and is widely used in CAD, virtual reality, scientific visualization, information visualization, and flight simulation. It is also used in video games, where it competes with Direct3D on Microsoft Windows platforms (see OpenGL vs. Direct3D). OpenGL is managed by the non-profit technology consortium Khronos Group.

Design
OpenGL serves two main purposes:
  • Hide complexities of interfacing with different 3D accelerators by presenting a single, uniform interface
  • Hide differing capabilities of hardware platforms by requiring support for the full OpenGL feature set in all implementations (using software emulation if necessary).
OpenGL's basic operation is to accept primitives such as points, lines and polygons, and convert them into pixels. This is done by a graphics pipeline known as the OpenGL state machine. Most OpenGL commands either issue primitives to the graphics pipeline, or configure how the pipeline processes these primitives. Prior to the introduction of OpenGL 2.0, each stage of the pipeline performed a fixed function and was configurable only within tight limits. OpenGL 2.0 offers several stages that are fully programmable using GLSL.
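The "issue primitives, configure state" model can be seen in a few lines of fixed-function OpenGL. The sketch below uses the PyOpenGL and GLUT bindings (an assumption; any language binding would do): glColor3f changes pipeline state, while glVertex2f issues vertices that the pipeline converts into pixels.

```python
# A minimal sketch of OpenGL's state-machine model using the
# fixed-function pipeline via PyOpenGL and GLUT (assumed installed).

from OpenGL.GL import (glBegin, glEnd, glVertex2f, glColor3f,
                       glClear, glFlush, GL_TRIANGLES, GL_COLOR_BUFFER_BIT)
from OpenGL.GLUT import (glutInit, glutCreateWindow, glutDisplayFunc,
                         glutMainLoop)

def display():
    glClear(GL_COLOR_BUFFER_BIT)
    glBegin(GL_TRIANGLES)          # start issuing a primitive
    glColor3f(1.0, 0.0, 0.0)       # state change: current color
    glVertex2f(-0.5, -0.5)
    glColor3f(0.0, 1.0, 0.0)
    glVertex2f(0.5, -0.5)
    glColor3f(0.0, 0.0, 1.0)
    glVertex2f(0.0, 0.5)
    glEnd()                        # primitive is rasterised into pixels
    glFlush()

glutInit()
glutCreateWindow(b"state machine demo")
glutDisplayFunc(display)
glutMainLoop()
```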
OpenGL is a low-level, procedural API, requiring the programmer to dictate the exact steps required to render a scene. This contrasts with descriptive (aka scene graph or retained mode) APIs, where a programmer only needs to describe a scene and can let the library manage the details of rendering it. OpenGL's low-level design requires programmers to have a good knowledge of the graphics pipeline, but also gives a certain amount of freedom to implement novel rendering algorithms.
OpenGL has historically been influential on the development of 3D accelerators, promoting a base level of functionality that is now common in consumer-level hardware:

(Figure: a simplified version of the graphics pipeline process, excluding features such as blending, VBOs and logic ops.)
  • Rasterised points, lines and polygons as basic primitives
  • A transform and lighting pipeline
  • Z-buffering
  • Texture mapping
  • Alpha blending
A brief description of the process in the graphics pipeline could be (a small sketch of the depth-tested fragment stage follows the list):
  1. Evaluation, if necessary, of the polynomial functions which define certain inputs, like NURBS surfaces, approximating curves and the surface geometry.
  2. Vertex operations, transforming and lighting vertices depending on their material, and clipping non-visible parts of the scene against the viewing volume.
  3. Rasterisation or conversion of the previous information into pixels. The polygons are represented by the appropriate colour by means of interpolation algorithms.
  4. Per-fragment operations, like updating values depending on incoming and previously stored depth values, or colour combinations, among others.
  5. Lastly, fragments are inserted into the frame buffer.
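As a concrete illustration of step 4, the Python sketch below implements the classic depth-tested fragment write; buffer sizes and the colour representation are arbitrary:

```python
# A tiny sketch of per-fragment operations with a depth (Z) buffer.
# A fragment only replaces what is stored in the framebuffer if it is
# closer to the viewer than the depth already recorded at that pixel.

WIDTH, HEIGHT = 4, 3
depth_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def write_fragment(x, y, z, colour):
    """Depth-test a fragment; keep it only if nearer than the stored z."""
    if z < depth_buffer[y][x]:
        depth_buffer[y][x] = z
        frame_buffer[y][x] = colour
        return True
    return False

print(write_fragment(1, 1, 0.8, (255, 0, 0)))  # True: buffer was empty
print(write_fragment(1, 1, 0.9, (0, 255, 0)))  # False: hidden behind red
```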
Many modern 3D accelerators provide functionality far above this baseline, but these new features are generally enhancements of this basic pipeline rather than radical revisions of it.

History
In the 1980s, developing software that could function with a wide range of graphics hardware was a real challenge. Software developers wrote custom interfaces and drivers for each piece of hardware. This was expensive and resulted in much duplication of effort.
By the early 1990s, Silicon Graphics (SGI) was a leader in 3D graphics for workstations. Their IRIS GL API[8] was considered the state of the art and became the de facto industry standard, overshadowing the open standards-based PHIGS. This was because IRIS GL was considered easier to use, and because it supported immediate mode rendering. By contrast, PHIGS was considered difficult to use and outdated in terms of functionality.
SGI's competitors (including Sun Microsystems, Hewlett-Packard and IBM) were also able to bring 3D hardware to market, supported by extensions made to the PHIGS standard. This in turn caused SGI's market share to weaken as more 3D graphics hardware suppliers entered the market. In an effort to influence the market, SGI decided to turn the IrisGL API into an open standard.
SGI considered that the IrisGL API itself wasn't suitable for opening due to licensing and patent issues. Also, IrisGL had API functions that were not relevant to 3D graphics. For example, it included a windowing, keyboard and mouse API, in part because it was developed before the X Window System and Sun's NeWS.
In addition, SGI had a large number of software customers; by changing to the OpenGL API they planned to keep their customers locked onto SGI (and IBM) hardware for a few years while market support for OpenGL matured. Meanwhile, SGI would try to keep customers tied to SGI hardware by developing the advanced, proprietary Iris Inventor and Iris Performer programming APIs.
As a result, SGI released the OpenGL standard.
OpenGL standardised access to hardware, pushed the development responsibility of hardware interface programs, sometimes called device drivers, to hardware manufacturers, and delegated windowing functions to the underlying operating system. With so many different kinds of graphics hardware, getting them all to speak the same language in this way had a remarkable impact by giving software developers a higher-level platform for 3D-software development.
In 1992, SGI led the creation of the OpenGL architectural review board (OpenGL ARB), the group of companies that would maintain and expand the OpenGL specification for years to come. OpenGL evolved from (and is very similar in style to) SGI's earlier 3D interface, IrisGL. One of the restrictions of IrisGL was that it only provided access to features supported by the underlying hardware. If the graphics hardware did not support a feature, then the application could not use it. OpenGL overcame this problem by providing support in software for features unsupported by hardware, allowing applications to use advanced graphics on relatively low-powered systems.
In 1994, SGI played with the idea of releasing something called "OpenGL++" which included elements such as a scene-graph API (presumably based on their Performer technology). The specification was circulated among a few interested parties – but never turned into a product.
Microsoft released Direct3D in 1995, which would become the main competitor of OpenGL. On December 17, 1997, Microsoft and SGI initiated the Fahrenheit project, which was a joint effort with the goal of unifying the OpenGL and Direct3D interfaces (and adding a scene-graph API too). In 1998, Hewlett-Packard joined the project. It initially showed some promise of bringing order to the world of interactive 3D computer graphics APIs, but on account of financial constraints at SGI, strategic reasons at Microsoft, and general lack of industry support, it was abandoned in 1999.
OpenGL releases are backward compatible. In general, graphics cards released after a given OpenGL version's release date support that version's features and all earlier ones; the GeForce 6800, for example, supports all features up to and including OpenGL 2.0. (Specific cards may conform to an OpenGL spec yet selectively not support certain features; for details, the GPU Caps Viewer software includes a database of cards and their supported specs.)

Thursday, June 10, 2010

Touchscreen

A touchscreen is an electronic visual display that can detect the presence and location of a touch within the display area. The term generally refers to touching the display of the device with a finger or hand. Touchscreens can also sense other passive objects, such as a stylus.
The touchscreen has two main attributes. First, it enables one to interact directly with what is displayed, rather than indirectly with a cursor controlled by a mouse or touchpad. Secondly, it lets one do so without requiring any intermediate device that would need to be held in the hand. Such displays can be attached to computers, or to networks as terminals. They also play a prominent role in the design of digital appliances such as the personal digital assistant (PDA), satellite navigation devices, mobile phones, and video games.

History
In 1971, the first "touch sensor" was developed by Doctor Sam Hurst (founder of Elographics) while he was an instructor at the University of Kentucky. This sensor, called the "Elograph," was patented by The University of Kentucky Research Foundation. The "Elograph" was not transparent like modern touch screens; however, it was a significant milestone in touch screen technology. In 1974, the first true touch screen incorporating a transparent surface was developed by Sam Hurst and Elographics. In 1977, Elographics developed and patented five-wire resistive technology, the most popular touch screen technology in use today. Touchscreens first gained some visibility with the invention of the computer-assisted learning terminal, which came out in 1975 as part of the PLATO project. Touchscreens have subsequently become familiar in everyday life. Companies use touch screens for kiosk systems in retail and tourist settings, point of sale systems, ATMs, and PDAs, where a stylus is sometimes used to manipulate the GUI and to enter data. The popularity of smart phones, PDAs, portable game consoles and many types of information appliances is driving the demand for, and acceptance of, touchscreens.
From 1979–1985, the Fairlight CMI (and Fairlight CMI IIx) was a high-end musical sampling and re-synthesis workstation that utilized light pen technology, with which the user could allocate and manipulate sample and synthesis data, as well as access different menus within its OS by touching the screen with the light pen. The later Fairlight series III models used a graphics tablet in place of the light pen.
The HP-150 from 1983 was one of the world's earliest commercial touchscreen computers. It did not have a touchscreen in the strict sense; instead, it had a 9" Sony cathode ray tube (CRT) surrounded by infrared transmitters and receivers, which detected the position of any non-transparent object on the screen.
Until recently, most consumer touchscreens could only sense one point of contact at a time, and few have had the capability to sense how hard one is touching. This is starting to change with the commercialization of multi-touch technology.
Touchscreens are popular in hospitality, and in heavy industry, as well as kiosks such as museum displays or room automation, where keyboard and mouse systems do not allow a suitably intuitive, rapid, or accurate interaction by the user with the display's content.
Historically, the touchscreen sensor and its accompanying controller-based firmware have been made available by a wide array of after-market system integrators, and not by display, chip, or motherboard manufacturers. Display manufacturers and chip manufacturers worldwide have acknowledged the trend toward acceptance of touchscreens as a highly desirable user interface component and have begun to integrate touchscreen functionality into the fundamental design of their products.

Technologies
There are a variety of touchscreen technologies.

Resistive
A resistive touchscreen panel is composed of several layers, the most important of which are two thin, electrically conductive layers separated by a narrow gap. When an object, such as a finger, presses down on a point on the panel's outer surface the two metallic layers become connected at that point: the panel then behaves as a pair of voltage dividers with connected outputs. This causes a change in the electrical current, which is registered as a touch event and sent to the controller for processing.
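In a common 4-wire resistive design, the controller drives one layer as a voltage divider and samples the other, once per axis. A minimal Python sketch follows; the read_adc helper and the 10-bit ADC range are assumptions:

```python
# A sketch of reading a 4-wire resistive panel: drive one conductive
# layer as a voltage divider and sample the other layer's voltage,
# once per axis. The read_adc() helper and ADC range are assumed.

ADC_MAX = 1023  # a typical 10-bit analog-to-digital converter

def read_touch(read_adc, width, height):
    """read_adc(axis) returns the raw divider voltage for 'x' or 'y'."""
    raw_x = read_adc("x")   # X layer driven, Y layer sensed
    raw_y = read_adc("y")   # then the roles are swapped
    return (raw_x / ADC_MAX * width, raw_y / ADC_MAX * height)

# Simulated readings for a touch near the middle of an 800x480 panel:
print(read_touch(lambda axis: 512, 800, 480))  # -> (~400, ~240)
```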

Surface acoustic wave
Surface acoustic wave (SAW) technology uses ultrasonic waves that pass over the touchscreen panel. When the panel is touched, a portion of the wave is absorbed. This change in the ultrasonic waves registers the position of the touch event and sends this information to the controller for processing. Surface wave touch screen panels can be damaged by outside elements. Contaminants on the surface can also interfere with the functionality of the touchscreen.

Capacitive
A capacitive touchscreen panel is one which consists of an insulator such as glass, coated with a transparent conductor such as indium tin oxide (ITO). As the human body is also a conductor, touching the surface of the screen results in a distortion of the screen's electrostatic field, measurable as a change in capacitance. Different technologies may be used to determine the location of the touch. The location is then sent to the controller for processing.

Surface capacitance
In this basic technology, only one side of the insulator is coated with a conductive layer. A small voltage is applied to the layer, resulting in a uniform electrostatic field. When a conductor, such as a human finger, touches the uncoated surface, a capacitor is dynamically formed. The sensor's controller can determine the location of the touch indirectly from the change in the capacitance as measured from the four corners of the panel. As it has no moving parts, it is moderately durable but has limited resolution, is prone to false signals from parasitic capacitive coupling, and needs calibration during manufacture. It is therefore most often used in simple applications such as industrial controls and kiosks.
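A simple linear model of the four-corner measurement is sketched below: the closer the touch is to a corner, the more current that corner supplies. Real controllers apply calibration and linearisation, so this is only an illustration:

```python
# A sketch of locating a touch from the four corner currents of a
# surface-capacitive panel. The linear ratio model is a simplification.

def locate(i_tl, i_tr, i_bl, i_br, width, height):
    total = i_tl + i_tr + i_bl + i_br
    x = (i_tr + i_br) / total * width    # weight toward the right corners
    y = (i_bl + i_br) / total * height   # weight toward the bottom corners
    return x, y

# A touch nearer the bottom-right corner draws more current there:
print(locate(1.0, 2.0, 2.0, 5.0, 800, 480))  # -> (560.0, 336.0)
```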

Projected capacitance
Projected Capacitive Touch (PCT) technology is a capacitive technology which permits more accurate and flexible operation, by etching the conductive layer. An X-Y grid is formed either by etching a single layer to form a grid pattern of electrodes, or by etching two separate, perpendicular layers of conductive material with parallel lines or tracks to form the grid (comparable to the pixel grid found in many LCD displays).
The greater resolution of PCT allows operation without direct contact, such that the conducting layers can be coated with further protective insulating layers and operate even under screen protectors, or behind weather- and vandal-proof glass. Because the top layer of a PCT is glass, it is a more robust solution than resistive touch technology. Depending on the implementation, an active or passive stylus can be used instead of or in addition to a finger. This is common with point-of-sale devices that require signature capture. Gloved fingers may or may not be sensed, depending on the implementation and gain settings. Conductive smudges and similar interference on the panel surface can degrade performance. Such smudges come mostly from sticky or sweaty fingertips, especially in high-humidity environments. Collected dust, which adheres to the screen due to moisture from fingertips, can also be a problem. There are two types of PCT: self capacitance and mutual capacitance.

Mutual Capacitance
In mutual capacitive sensors, there is a capacitor at every intersection of each row and each column. A 12-by-16 array, for example, would have 192 independent capacitors. A voltage is applied to the rows or columns. Bringing a finger or conductive stylus close to the surface of the sensor changes the local electrostatic field, which reduces the mutual capacitance. The capacitance change at every individual point on the grid can be measured to accurately determine the touch location by measuring the voltage on the other axis. Mutual capacitance allows multi-touch operation, where multiple fingers, palms or styluses can be accurately tracked at the same time.
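Conceptually, the controller scans the grid and compares each intersection against a no-touch baseline. The Python sketch below shows that idea, including how two simultaneous touches remain distinguishable; the threshold and capacitance units are assumptions:

```python
# A sketch of mutual-capacitance scanning: measure the capacitance drop
# at every row/column intersection and report cells past a threshold.

def scan(baseline, measured, threshold=0.2):
    """Return (row, col) cells whose mutual capacitance dropped enough."""
    touches = []
    for r, (base_row, meas_row) in enumerate(zip(baseline, measured)):
        for c, (base, meas) in enumerate(zip(base_row, meas_row)):
            if base - meas > threshold:  # a finger reduces mutual capacitance
                touches.append((r, c))
    return touches

baseline = [[1.0] * 4 for _ in range(3)]      # 3 rows x 4 columns
measured = [[1.0, 1.0, 0.6, 1.0],
            [1.0, 1.0, 1.0, 1.0],
            [0.7, 1.0, 1.0, 1.0]]
print(scan(baseline, measured))  # -> [(0, 2), (2, 0)]: two fingers at once
```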

Self Capacitance
Self capacitance sensors can have the same X-Y grid as mutual capacitance sensors, but the columns and rows operate independently. With self capacitance, the capacitive load of a finger is measured on each column or row electrode by a current meter. This method produces a stronger signal than mutual capacitance, but it is unable to accurately resolve more than one finger, which results in "ghosting", or misplaced location sensing.
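The ghosting problem can be shown in a few lines: because only per-row and per-column loads are measured, two diagonal touches yield four equally plausible intersections:

```python
# Why self capacitance "ghosts": only row and column profiles exist,
# so two diagonal touches produce four consistent candidate points and
# the controller cannot tell the real pair from the ghost pair.

def candidate_points(active_rows, active_cols):
    """All intersections consistent with the measured row/column loads."""
    return [(r, c) for r in active_rows for c in active_cols]

# Fingers at (1, 1) and (3, 4) light up rows {1, 3} and columns {1, 4}:
print(candidate_points([1, 3], [1, 4]))
# -> [(1, 1), (1, 4), (3, 1), (3, 4)]: two of these are ghosts
```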

Infrared
An infrared touchscreen uses an array of X-Y infrared LED and photodetector pairs around the edges of the screen to detect a disruption in the pattern of LED beams. These LED beams cross each other in vertical and horizontal patterns. This helps the sensors pick up the exact location of the touch. A major benefit of such a system is that it can detect essentially any input including a finger, gloved finger, stylus or pen. It is generally used in outdoor applications and point-of-sale systems which can't rely on a conductor (such as a bare finger) to activate the touchscreen. Unlike capacitive touchscreens, infrared touchscreens do not require any patterning on the glass which increases durability and optical clarity of the overall system.

Optical imaging
This is a relatively modern development in touchscreen technology, in which two or more image sensors are placed around the edges (mostly the corners) of the screen. Infrared backlights are placed in the cameras' field of view on the other side of the screen. A touch shows up as a shadow, and each pair of cameras can then triangulate its location, or even measure the size of the touching object (see visual hull). This technology is growing in popularity due to its scalability, versatility, and affordability, especially for larger units.
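The triangulation step reduces to intersecting two rays. The Python sketch below assumes two cameras at the top corners of the screen, each reporting the angle at which it sees the shadow; the coordinate conventions are illustrative:

```python
# A sketch of triangulating a touch from two corner cameras. Each camera
# reports only the angle at which it sees the shadow; the touch lies at
# the intersection of the two rays.

import math

def triangulate(angle_left, angle_right, width):
    """Cameras at (0, 0) and (width, 0); angles measured from the top edge."""
    # Ray from the left camera:  y = x * tan(angle_left)
    # Ray from the right camera: y = (width - x) * tan(angle_right)
    tl, tr = math.tan(angle_left), math.tan(angle_right)
    x = width * tr / (tl + tr)
    return x, x * tl

print(triangulate(math.radians(45), math.radians(45), 800))  # -> (400, 400)
```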

Dispersive signal technology
Introduced in 2002 by 3M, this system uses sensors to detect the mechanical energy in the glass that occurs due to a touch. Complex algorithms then interpret this information and provide the actual location of the touch. The technology claims to be unaffected by dust and other outside elements, including scratches. Since there is no need for additional elements on the screen, it also claims to provide excellent optical clarity. Also, since mechanical vibrations are used to detect a touch event, any object can be used to generate these events, including fingers and styluses. A downside is that after the initial touch the system cannot detect a motionless finger.

Acoustic pulse recognition
This system, introduced by Tyco International's Elo division in 2006, uses piezoelectric transducers located at various positions around the screen to turn the mechanical energy of a touch (vibration) into an electronic signal. The screen hardware then uses an algorithm to determine the location of the touch based on the transducer signals. The touchscreen itself is made of ordinary glass, giving it good durability and optical clarity. It is usually able to function with scratches and dust on the screen with good accuracy. The technology is also well suited to displays that are physically larger. As with the Dispersive Signal Technology system, after the initial touch, a motionless finger cannot be detected. However, for the same reason, the touch recognition is not disrupted by any resting objects.