Tuesday, December 21, 2010

Android

Android is a mobile operating system initially developed by Android Inc., which Google bought in 2005. It is based on a modified version of the Linux kernel. Google and other members of the Open Handset Alliance collaborated to develop and release Android, and the Android Open Source Project (AOSP) is now tasked with its maintenance and further development. Unit sales of Android smartphones ranked first among all smartphone OS handsets sold in the U.S. in the second and third quarters of 2010, with a third-quarter market share of 43.6%.
Android has a large community of developers writing application programs ("apps") that extend the functionality of the devices; there are currently over 200,000 apps available for Android. Android Market is the online app store run by Google, though apps can also be downloaded from third-party sites (except on AT&T, which disallows this). Developers write in the Java language, controlling the device via Google-developed Java libraries.

Android was unveiled on 5 November 2007 alongside the founding of the Open Handset Alliance, a consortium of 79 hardware, software, and telecom companies devoted to advancing open standards for mobile devices. Google released most of the Android code under the Apache License, a free-software and open-source license.
The Android software stack consists of Java applications running on a Java-based, object-oriented application framework on top of Java core libraries, executed by a Dalvik virtual machine featuring JIT compilation. Libraries written in C include the surface manager, the OpenCore media framework, the SQLite relational database management system, the OpenGL ES 2.0 3D graphics API, the WebKit layout engine, the SGL graphics engine, SSL, and Bionic libc. The Android operating system comprises 12 million lines of code, including 3 million lines of XML, 2.8 million lines of C, 2.1 million lines of Java, and 1.75 million lines of C++.

History
Acquisition by Google
In July 2005, Google acquired Android Inc., a small startup company based in Palo Alto, California, USA. Android's co-founders who went to work at Google included Andy Rubin (co-founder of Danger), Rich Miner (co-founder of Wildfire Communications, Inc.), Nick Sears (once VP at T-Mobile), and Chris White (who headed design and interface development at WebTV). At the time, little was known about the functions of Android Inc. other than that it made software for mobile phones, which fueled rumors that Google was planning to enter the mobile phone market.
At Google, the team led by Rubin developed a mobile device platform powered by the Linux kernel, which they marketed to handset makers and carriers on the premise of providing a flexible, upgradable system. It was reported that Google had already lined up a series of hardware component and software partners and had signaled to carriers that it was open to various degrees of cooperation on their part. More speculation that Google's Android would be entering the mobile-phone market came in December 2006: reports from the BBC and The Wall Street Journal noted that Google wanted its search and applications on mobile phones and was working hard to deliver that. Print and online media outlets soon reported rumors that Google was developing a Google-branded handset, and that as Google defined technical specifications, it was showing prototypes to cell phone manufacturers and network operators.
In September 2007, InformationWeek covered an Evalueserve study reporting that Google had filed several patent applications in the area of mobile telephony.

Open Handset Alliance
"Today's announcement is more ambitious than any single 'Google Phone' that the press has been speculating about over the past few weeks. Our vision is that the powerful platform we're unveiling will power thousands of different phone models."
Eric Schmidt, Google Chairman/CEO
On 5 November 2007, the Open Handset Alliance, a consortium of companies including Texas Instruments, Broadcom Corporation, Google, HTC, Intel, LG, Marvell Technology Group, Motorola, Nvidia, Qualcomm, Samsung Electronics, Sprint Nextel and T-Mobile, was unveiled with the goal of developing open standards for mobile devices. Along with the formation of the alliance, the OHA unveiled its first product, Android, a mobile device platform built on Linux kernel version 2.6.
On 9 December 2008, it was announced that 14 new members would be joining the Android Project, including PacketVideo, ARM Holdings, Atheros Communications, Asustek Computer Inc, Garmin Ltd, Softbank, Sony Ericsson, Toshiba Corp, and Vodafone Group Plc.

Licensing
With the exception of brief update periods, Android has been available under a free software / open source license since 21 October 2008. Google published the entire source code (including network and telephony stacks) under an Apache License.

Monday, December 20, 2010

iPhone

The iPhone (pronounced /ˈaɪfoʊn/ EYE-fohn) is a line of Internet and multimedia-enabled smartphones designed and marketed by Apple Inc. The first iPhone was introduced on January 9, 2007.
An iPhone functions as a camera phone (with text messaging and visual voicemail), a portable media player, and an Internet client with e-mail, web browsing, and Wi-Fi connectivity. The user interface is built around the device's multi-touch screen, including a virtual keyboard rather than a physical one. Third-party applications are available from the App Store, which launched in mid-2008 and now has well over 300,000 "apps" approved by Apple. These apps have diverse functionalities, including games, reference, GPS navigation, social networking, security, and advertising for television shows, films, and celebrities.
There are four generations of iPhone models, and they were accompanied by four major releases of iOS (formerly iPhone OS). The original iPhone established design precedents like screen size and button placement that have persisted through all models. The iPhone 3G added 3G cellular network capabilities and A-GPS location. The iPhone 3GS added a compass, faster processor, and higher resolution camera, including video. The iPhone 4 has two cameras for FaceTime video calling and a higher-resolution display. It was released on June 24, 2010.

History and availability
Development of the iPhone began with Apple CEO Steve Jobs' direction that Apple engineers investigate touchscreens. Apple created the device during a secretive and unprecedented collaboration with AT&T Mobility—Cingular Wireless at the time—at an estimated development cost of US$150 million over thirty months. Apple rejected the "design by committee" approach that had yielded the Motorola ROKR E1, a largely unsuccessful collaboration with Motorola. Instead, Cingular gave Apple the liberty to develop the iPhone's hardware and software in-house.
Jobs unveiled the iPhone to the public on January 9, 2007 at Macworld 2007. Apple was required to file for operating permits with the FCC, but since such filings are made available to the public, the announcement came months before the iPhone had received approval. The iPhone went on sale in the United States on June 29, 2007, at 6:00 pm local time, while hundreds of customers lined up outside the stores nationwide. The original iPhone was made available in the UK, France, and Germany in November 2007, and Ireland and Austria in the spring of 2008.
On July 11, 2008, Apple released the iPhone 3G in twenty-two countries, including the original six; it eventually reached upwards of eighty countries and territories. Apple announced the iPhone 3GS on June 8, 2009, along with plans to release it later in June, July, and August, starting with the U.S., Canada and major European countries on June 19. Many would-be users objected to the iPhone's cost, and 40% of users have household incomes over US$100,000. In an attempt to gain a wider market, Apple retained the 8 GB iPhone 3G at a lower price point. When Apple introduced the iPhone 4, the 3GS became the less expensive model. Apple has reduced the price several times since the iPhone's release in 2007, at which time an 8 GB iPhone sold for $599; an iPhone 3GS with the same capacity now costs $99. However, these numbers are misleading, since all iPhone units sold through AT&T require a two-year contract (costing several hundred dollars) and a SIM lock.
Apple sold 6.1 million original iPhone units over five quarters. Sales have grown steadily thereafter; by the end of fiscal year 2010, a total of 73.5 million iPhones had been sold. Sales in Q4 2008 temporarily surpassed RIM's BlackBerry sales of 5.2 million units, which briefly made Apple the third-largest mobile phone manufacturer by revenue, after Nokia and Samsung. Approximately 6.4 million iPhones are active in the U.S. alone. While iPhone sales constitute a significant portion of Apple's revenue, some of this income is deferred.
The back of the original iPhone was made of aluminum with a black plastic accent. The iPhone 3G and 3GS feature a full plastic back to increase the strength of the GSM signal. The iPhone 3G was available in an 8 GB black model, or a black or white option for the 16 GB model; both are now discontinued. The iPhone 3GS was available in both colors, regardless of storage capacity, until the white model was discontinued in favor of a black 8 GB low-end model. The iPhone 4 has an aluminosilicate glass front and back with a stainless steel edge that serves as the antennas. It is available in black; a white version was announced, but as of October 2010 has not been released.
The iPhone has garnered positive reviews from critics like David Pogue and Walter Mossberg. The iPhone attracts users of all ages, and besides consumer use the iPhone has also been adopted for business purposes.

Wednesday, December 15, 2010

Windows Azure

Windows Azure is Microsoft's Cloud Services Platform

Building out an infrastructure that supports your web service or application can be expensive, complicated and time consuming: forecasting the highest possible demand, building out the network to support your peak times, getting the right servers in place at the right time, and managing and maintaining the systems.

Or you could look to the Microsoft cloud. The Windows Azure platform is a flexible cloud–computing platform that lets you focus on solving business problems and addressing customer needs. No need to invest upfront on expensive infrastructure. Pay only for what you use, scale up when you need capacity and pull it back when you don’t. We handle all the patches and maintenance — all in a secure environment with over 99.9% uptime.

At PDC10 last month, we announced a host of enhancements for Windows Azure designed to make it easier for customers to run existing Windows applications on Windows Azure, enable more affordable platform access and improve the Windows Azure developer and IT Professional experience. Today, we’re happy to announce that several of these are either generally available or ready for you to try as a beta or Community Technology Preview (CTP). Below is a list of what’s now available, along with links to more information.

The following functionality is now generally available through the Windows Azure SDK and Windows Azure Tools for Visual Studio and the new Windows Azure Management Portal:

Development of more complete applications using Windows Azure is now possible with the introduction of Elevated Privileges and Full IIS. Developers can now run a portion or all of their code in Web and Worker roles with elevated administrator privileges. The Web role now provides Full IIS functionality, which enables multiple IIS sites per Web role and the ability to install IIS modules.
Remote Desktop functionality enables customers to connect to a running instance of their application or service in order to monitor activity and troubleshoot common problems.
Windows Server 2008 R2 Roles: Windows Azure now supports Windows Server 2008 R2 in its Web, worker and VM roles. This new support enables you to take advantage of the full range of Windows Server 2008 R2 features such as IIS 7.5, AppLocker, and enhanced command-line and automated management using PowerShell Version 2.0.
Multiple Service Administrators: Windows Azure now supports multiple Windows Live IDs to have administrator privileges on the same Windows Azure account. The objective is to make it easy for a team to work on the same Windows Azure account while using their individual Windows Live IDs.
Better Developer and IT Professional Experience: The following enhancements are now available to help developers see and control how their applications are running in the cloud:
A completely redesigned Silverlight-based Windows Azure portal to ensure an improved and intuitive user experience
Access to new diagnostic information including the ability to click on a role to see role type, deployment time and last reboot time
A new sign-up process that dramatically reduces the number of steps needed to sign up for Windows Azure.
New scenario based Windows Azure Platform forums to help answer questions and share knowledge more efficiently.
The following functionality is now available as a beta:

Windows Azure Virtual Machine Role: Support for more types of new and existing Windows applications will soon be available with the introduction of the Virtual Machine (VM) role. Customers can move more existing applications to Windows Azure, reducing the need to make costly code or deployment changes.
Extra Small Windows Azure Instance, which is priced at $0.05 per compute hour, provides developers with a cost-effective training and development environment. Developers can also use the Extra Small instance to prototype cloud solutions at a lower cost.
Developers and IT Professionals can sign up for either of the betas above via the Windows Azure Management Portal.

Windows Azure Marketplace is an online marketplace for you to share, buy and sell building block components, premium data sets, training and services needed to build Windows Azure platform applications. The first section in the Windows Azure Marketplace, DataMarket, became commercially available at PDC 10. Today, we’re launching a beta of the application section of the Windows Azure Marketplace with 40 unique partners and over 50 unique applications and services.

Monday, December 13, 2010

Cloud Computing Deployment Models

Public cloud
Public cloud or external cloud describes cloud computing in the traditional mainstream sense, whereby resources are dynamically provisioned on a fine-grained, self-service basis over the Internet, via web applications/web services, from an off-site third-party provider who bills on a fine-grained utility computing basis.

Community cloud
A community cloud may be established where several organizations have similar requirements and seek to share infrastructure so as to realize some of the benefits of cloud computing. With the costs spread over fewer users than a public cloud (but more than a single tenant), this option is more expensive but may offer a higher level of privacy, security and/or policy compliance. Examples of community clouds include Google's "Gov Cloud".

Hybrid cloud
There is some confusion over the term "Hybrid" when applied to the cloud - a standard definition of the term "Hybrid Cloud" has not yet emerged. The term "Hybrid Cloud" has been used to mean either two separate clouds joined together (public, private, internal or external), or a combination of virtualized cloud server instances used together with real physical hardware. The most correct definition of the term "Hybrid Cloud" is probably the use of physical hardware and virtualized cloud server instances together to provide a single common service. Two clouds that have been joined together are more correctly called a "combined cloud".
A combined cloud environment consisting of multiple internal and/or external providers "will be typical for most enterprises". By integrating multiple cloud services users may be able to ease the transition to public cloud services while avoiding issues such as PCI compliance.
Another perspective on deploying a web application in the cloud is using Hybrid Web Hosting, where the hosting infrastructure is a mix between Cloud Hosting and Managed dedicated servers - this is most commonly achieved as part of a web cluster in which some of the nodes are running on real physical hardware and some are running on cloud server instances.
A hybrid storage cloud uses a combination of public and private storage clouds. Hybrid storage clouds are often useful for archiving and backup functions, allowing local data to be replicated to a public cloud.

Private cloud
Douglas Parkhill first described the concept of a "Private Computer Utility" in his 1966 book The Challenge of the Computer Utility. The idea was based upon direct comparison with other industries (e.g. the electricity industry) and the extensive use of hybrid supply models to balance and mitigate risks.
Private cloud and internal cloud have been described as neologisms; however, the concepts themselves pre-date the term "cloud" by 40 years. Even within modern utility industries, hybrid models still exist despite the formation of reasonably well-functioning markets and the ability to combine multiple providers.
Some vendors have used the terms to describe offerings that emulate cloud computing on private networks. These (typically virtualization automation) products offer the ability to host applications or virtual machines in a company's own set of hosts. They provide the benefits of utility computing - shared hardware costs, the ability to recover from failure, and the ability to scale up or down depending upon demand.
Private clouds have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from lower up-front capital costs and less hands-on management, essentially lacking "the economic model that makes cloud computing such an intriguing concept".

Sunday, December 12, 2010

Cloud Computing

Cloud computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, as with the electricity grid. It is a natural evolution of the widespread adoption of virtualization, service-oriented architecture and utility computing. Details are abstracted from consumers, who no longer need expertise in, or control over, the technology infrastructure "in the cloud" that supports them. Cloud computing describes a new supplement, consumption, and delivery model for IT services based on the Internet, and it typically involves over-the-Internet provision of dynamically scalable and often virtualized resources. It is a byproduct and consequence of the ease of access to remote computing sites provided by the Internet. This frequently takes the form of web-based tools or applications that users can access through a web browser as if they were programs installed locally on their own computers. NIST provides a somewhat more objective and specific definition.
The term "cloud" is used as a metaphor for the Internet, based on the cloud drawing used in the past to represent the telephone network, and later to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents. Typical cloud computing providers deliver common business applications online that are accessed from another web service or software like a web browser, while the software and data are stored on servers.

Most cloud computing infrastructures consist of services delivered through common centers and built on servers. Clouds often appear as single points of access for consumers' computing needs. Commercial offerings are generally expected to meet quality of service (QoS) requirements of customers, and typically include service level agreements (SLAs). The major cloud service providers include Amazon, Rackspace Cloud, Salesforce, Microsoft and Google. Some of the larger IT firms that are actively involved in cloud computing are Fujitsu, Dell, Red Hat, Hewlett Packard, IBM, VMware and NetApp.

Comparisons
Cloud computing derives characteristics from, but should not be confused with:

  1. Autonomic computing — "computer systems capable of self-management"
  2. Client–server model – client–server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requesters (clients)
  3. Grid computing — "a form of distributed computing and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks"
  4. Mainframe computer — powerful computers used mainly by large organizations for critical applications, typically bulk data-processing such as census, industry and consumer statistics, enterprise resource planning, and financial transaction processing.
  5. Utility computing — the "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity";
  6. Peer-to-peer – distributed architecture without the need for central coordination, with participants being at the same time both suppliers and consumers of resources (in contrast to the traditional client–server model)
  7. Service-oriented computing – Cloud computing provides services related to computing while, in a reciprocal manner, service-oriented computing consists of the computing techniques that operate on software-as-a-service.


Characteristics
The fundamental concept of cloud computing is that the computing is "in the cloud", i.e. the processing (and the related data) is not in a specified, known or fixed place. This contrasts with a model in which the processing takes place on one or more specific, known servers. All the other concepts mentioned are supplementary or complementary to this one.
Generally, cloud computing customers do not own the physical infrastructure, instead avoiding capital expenditure by renting usage from a third-party provider. They consume resources as a service and pay only for the resources that they use. Many cloud-computing offerings employ the utility computing model, which is analogous to how traditional utility services (such as electricity) are consumed, whereas others bill on a subscription basis. Sharing "perishable and intangible" computing power among multiple tenants can improve utilization rates, as servers are not unnecessarily left idle, which can reduce costs significantly while increasing the speed of application development. A side-effect of this approach is that overall computer usage rises dramatically, as customers do not have to engineer for peak load limits; in addition, "increased high-speed bandwidth" makes it possible to receive the same response times from centralized infrastructure at other sites. The cloud is becoming increasingly associated with small and medium enterprises (SMEs), as in many cases they cannot justify or afford the large capital expenditure of traditional IT. SMEs also typically have less existing infrastructure, less bureaucracy, more flexibility, and smaller capital budgets for purchasing in-house technology. Similarly, SMEs in emerging markets are typically unburdened by established legacy infrastructures, reducing the complexity of deploying cloud solutions.

Economics
Cloud computing users avoid capital expenditure (CapEx) on hardware, software, and services when they pay a provider only for what they use. Consumption is usually billed on a utility (resources consumed, like electricity) or subscription (time-based, like a newspaper) basis with little or no upfront cost. Other benefits of this approach are low barriers to entry, shared infrastructure and costs, low management overhead, and immediate access to a broad range of applications. In general, users can terminate the contract at any time (thereby avoiding return on investment risk and uncertainty), and the services are often covered by service level agreements (SLAs) with financial penalties.
According to Nicholas Carr, the strategic importance of information technology is diminishing as it becomes standardized and less expensive. He argues that the cloud computing paradigm shift is similar to the displacement of frozen water trade by electricity generators early in the 20th century.
Although companies might be able to save on upfront capital expenditures, they might not save much and might actually pay more for operating expenses. In situations where the capital expense would be relatively small, or where the organization has more flexibility in their capital budget than their operating budget, the cloud model might not make great fiscal sense. Other factors having an impact on the scale of potential cost savings include the efficiency of a company's data center as compared to the cloud vendor's, the company's existing operating costs, the level of adoption of cloud computing, and the type of functionality being hosted in the cloud.
Among the items that some cloud hosts charge for are instances (often with extra charges for high-memory or high-CPU instances), data transfer in and out, storage (measured by the GB-month), I/O requests, PUT requests and GET requests, IP addresses, and load balancing. In some cases, users can bid on instances, with pricing dependent on demand for available instances.
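As a rough illustration of utility-style billing, the TypeScript sketch below totals a hypothetical monthly bill from the kinds of line items listed above; every rate and usage figure is invented for the example and does not come from any real provider's price list:

  interface UsageRates {
    perInstanceHour: number;      // compute instances
    perGbMonthStored: number;     // storage, measured by the GB-month
    perGbTransferredOut: number;  // outbound data transfer
    perMillionRequests: number;   // I/O, PUT and GET requests
  }

  interface MonthlyUsage {
    instanceHours: number;
    gbMonthsStored: number;
    gbTransferredOut: number;
    millionsOfRequests: number;
  }

  // Total a month's bill under the pay-for-what-you-use model.
  function monthlyBill(rates: UsageRates, usage: MonthlyUsage): number {
    return (
      usage.instanceHours * rates.perInstanceHour +
      usage.gbMonthsStored * rates.perGbMonthStored +
      usage.gbTransferredOut * rates.perGbTransferredOut +
      usage.millionsOfRequests * rates.perMillionRequests
    );
  }

  // Example: one instance running all month (720 hours) plus modest storage and traffic.
  const bill = monthlyBill(
    { perInstanceHour: 0.10, perGbMonthStored: 0.15, perGbTransferredOut: 0.12, perMillionRequests: 0.01 },
    { instanceHours: 720, gbMonthsStored: 50, gbTransferredOut: 100, millionsOfRequests: 5 }
  );
  console.log(`Estimated bill: $${bill.toFixed(2)}`); // $91.55

Because the bill scales with consumption, the same sketch also shows why idle capacity is the customer's main cost lever: halving the instance hours halves the largest term.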

Architecture
[Figure: Cloud computing sample architecture]

Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud computing, typically involves multiple cloud components communicating with each other over application programming interfaces, usually web services. This resembles the Unix philosophy of having multiple programs each doing one thing well and working together over universal interfaces. Complexity is controlled and the resulting systems are more manageable than their monolithic counterparts.
The two most significant components of cloud computing architecture are known as the front end and the back end. The front end is the part seen by the client, i.e. the computer user. This includes the client’s network (or computer) and the applications used to access the cloud via a user interface such as a web browser. The back end of the cloud computing architecture is the ‘cloud’ itself, comprising various computers, servers and data storage devices.
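A minimal TypeScript sketch of that split: the front-end code below runs on the client (typically in a browser) and asks the back end for data over a web-service API. The endpoint URL and the response shape are hypothetical, invented only to illustrate the pattern:

  interface StorageStats {
    objects: number;
    bytesUsed: number;
  }

  // Front end: runs on the client's machine, usually inside a web browser.
  async function fetchStorageStats(): Promise<StorageStats> {
    // Back end: the provider's servers and storage, reached over an HTTP web service.
    const response = await fetch("https://api.example-cloud.invalid/v1/storage/stats", {
      headers: { Accept: "application/json" },
    });
    if (!response.ok) {
      throw new Error(`Back end returned HTTP ${response.status}`);
    }
    return (await response.json()) as StorageStats;
  }

  fetchStorageStats()
    .then((stats) => console.log(`${stats.objects} objects, ${stats.bytesUsed} bytes used`))
    .catch((error) => console.error("Cloud request failed:", error));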

History
The underlying concept of cloud computing dates back to the 1960s, when John McCarthy opined that "computation may someday be organized as a public utility." Almost all the modern-day characteristics of cloud computing (elastic provision, provision as a utility, online access, the illusion of infinite supply), the comparison to the electricity industry, and the use of public, private, government and community forms were thoroughly explored in Douglas Parkhill's 1966 book, The Challenge of the Computer Utility.
The actual term "cloud" borrows from telephony in that telecommunications companies, who until the 1990s primarily offered dedicated point-to-point data circuits, began offering Virtual Private Network (VPN) services with comparable quality of service but at a much lower cost. By switching traffic to balance utilization as they saw fit, they were able to utilize their overall network bandwidth more effectively. The cloud symbol was used to denote the demarcation point between that which was the responsibility of the provider from that of the user. Cloud computing extends this boundary to cover servers as well as the network infrastructure. The first scholarly use of the term “cloud computing” was in a 1997 lecture by Ramnath Chellappa.
Amazon played a key role in the development of cloud computing by modernizing their data centers after the dot-com bubble, which, like most computer networks, were using as little as 10% of their capacity at any one time, just to leave room for occasional spikes. Having found that the new cloud architecture resulted in significant internal efficiency improvements whereby small, fast-moving "two-pizza teams" could add new features faster and more easily, Amazon initiated a new product development effort to provide cloud computing to external customers, and launched Amazon Web Services (AWS) on a utility computing basis in 2006.
In 2007, Google, IBM and a number of universities embarked on a large scale cloud computing research project. In early 2008, Eucalyptus became the first open source AWS API compatible platform for deploying private clouds. By mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among consumers of IT services, those who use IT services and those who sell them" and observed that "organisations are switching from company-owned hardware and software assets to per-use service-based models" so that the "projected shift to cloud computing ... will result in dramatic growth in IT products in some areas and significant reductions in other areas."

Sunday, December 5, 2010

Kinect History

Kinect was first announced on June 1, 2009 at E3 2009 under the code name "Project Natal". Following in Microsoft's tradition of using cities as code names, "Project Natal" was named after the Brazilian city of Natal as a tribute to the country by Microsoft director Alex Kipman, who incubated the project, and who is from Brazil. The name Natal was also chosen because the word natal means "of or relating to birth", reflecting Microsoft's view of the project as "the birth of the next generation of home entertainment".
Four demos were shown to showcase Kinect when it was revealed at Microsoft's E3 2009 Media Briefing: Ricochet, Paint Party, Milo & Kate, and "Mega Ring Ring, Mega Ring Ring". A demo based on Burnout Paradise was also shown outside of Microsoft's media briefing. The skeletal mapping technology shown at E3 2009 was capable of simultaneously tracking four people, with a feature extraction of 48 skeletal points on a human body at 30 Hz.
It was rumored that the launch of Project Natal would be accompanied with the release of a new Xbox 360 console (as either a new retail configuration, a significant design revision and/or a modest hardware upgrade). Microsoft dismissed the reports in public, and repeatedly emphasized that Project Natal would be fully compatible with all Xbox 360 consoles. Microsoft indicated that the company considers it to be a significant initiative, as fundamental to the Xbox brand as Xbox Live, and with a launch akin to that of a new Xbox console platform. Kinect was even referred to as a "new Xbox" by Microsoft CEO Steve Ballmer at a speech for the Executives' Club of Chicago. When asked if the introduction will extend the time before the next-generation console platform is launched (historically about 5 years between platforms), Microsoft corporate vice president Shane Kim reaffirmed that the company believes that the life cycle of the Xbox 360 will last through 2015 (10 years).

During Kinect's development, project team members experimentally adapted numerous games to Kinect-based control schemes to help evaluate usability. Among these games were Beautiful Katamari and Space Invaders Extreme, which were demonstrated at the Tokyo Game Show in September 2009. According to creative director Kudo Tsunoda, adding Kinect-based control to pre-existing games would involve significant code alterations, making it unlikely for Kinect features to be added through software updates.
Although the sensor unit was originally planned to contain a microprocessor that would perform operations such as the system's skeletal mapping, it was revealed in January 2010 that the sensor would no longer feature a dedicated processor. Instead, processing would be handled by one of the processor cores of the Xbox 360's Xenon CPU. According to Alex Kipman, the Kinect system consumes about 10-15% of the Xbox 360's computing resources. However, in November, Kipman stated that "the new motion control tech now only uses a single-digit percentage of the Xbox 360's processing power, down from the previously stated ten to 15 percent." A number of observers commented that the computational load required for Kinect makes the addition of Kinect functionality to pre-existing games through software updates even less likely, with Kinect-specific concepts instead likely to be the focus for developers using the platform.
On March 25, Microsoft sent out a save-the-date flier for an event called the "World Premiere 'Project Natal' for the Xbox 360 Experience" at E3 2010. The event took place on the evening of Sunday, June 13, 2010 at the Galen Center and featured a performance by Cirque du Soleil. It was announced that the system would officially be called Kinect, a portmanteau of the words "kinetic" and "connect", which describe key aspects of the initiative. Microsoft also announced that the North American launch date for Kinect would be November 4, 2010. Despite previous statements dismissing speculation of a new Xbox 360 to accompany the launch of the new control system, Microsoft announced at E3 2010 that it was introducing a redesigned Xbox 360, complete with a Kinect-ready connector port. In addition, on July 20, 2010, Microsoft announced a Kinect bundle with the redesigned Xbox 360, to be available with the Kinect launch.

Launch
Microsoft has an advertising budget of US$500 million for the launch of Kinect, a larger sum than the investment at launch of the Xbox console. Plans involve featuring Kinect on the YouTube homepage, advertisements on Disney's and Nickelodeon's digital publications as well as during Dancing with the Stars and Glee. Print ads are to be published in People Magazine and InStyle, while brands such as Pepsi, Kellogg's and Burger King will also carry Kinect advertisements. A major Kinect event was also organized in Times Square, where Kinect was promoted via the many billboards.
On October 19, before the Kinect launch, Microsoft promoted Kinect on The Oprah Winfrey Show by giving free Xbox 360s and Kinect sensors to the people in the audience. It later also gave away Kinect bundles with Xbox 360s to the audiences of The Ellen Show and Late Night With Jimmy Fallon.
On October 23, Microsoft held a pre-launch party for Kinect in Beverly Hills. The party was hosted by Ashley Tisdale and was attended by soccer star David Beckham and his three sons, Cruz, Brooklyn, and Romeo. Guests were treated to sessions with Dance Central and Kinect Adventures, followed by Tisdale having a Kinect voice chat with Nick Cannon. On November 1, 2010, Burger King started giving away a free Kinect bundle "every 15 minutes". The contest ended on November 28, 2010.

Kinect

Kinect for Xbox 360, or simply Kinect (originally known by the code name Project Natal), is a "controller-free gaming and entertainment experience" by Microsoft for the Xbox 360 video game platform, and may later be supported by PCs via Windows 8. Based around a webcam-style add-on peripheral for the Xbox 360 console, it enables users to control and interact with the Xbox 360 without the need to touch a game controller, through a natural user interface using gestures, spoken commands, or presented objects and images. The project is aimed at broadening the Xbox 360's audience beyond its typical gamer base. Kinect competes with the Wii Remote with Wii MotionPlus and PlayStation Move & PlayStation Eye motion control systems for the Wii and PlayStation 3 home consoles, respectively.
Kinect was launched in North America on November 4, 2010, in Europe on November 10, 2010, in Australia, New Zealand and Singapore on November 18, 2010 and in Japan on November 20, 2010. Purchase options for the sensor peripheral include a bundle with the game Kinect Adventures and console bundles with either a 4 GB or 250 GB Xbox 360 console and Kinect Adventures.
As of November 29, 2010, 2.5 million Kinect sensors have been sold.

Technology
Kinect is based on software technology developed internally by Rare, a subsidiary of Microsoft Game Studios, and on range camera technology by Israeli developer PrimeSense, which interprets 3D scene information from continuously projected infrared structured light.
The Kinect sensor is a horizontal bar connected to a small base with a motorized pivot, and is designed to be positioned lengthwise above or below the video display. The device features an "RGB camera, depth sensor and multi-array microphone running proprietary software", which provides full-body 3D motion capture, facial recognition and voice recognition capabilities. Voice recognition capabilities will be available in Japan, the United Kingdom, Canada and the United States at launch, but have been postponed until spring 2011 in mainland Europe. The Kinect sensor's microphone array enables the Xbox 360 to conduct acoustic source localization and ambient noise suppression, allowing for things such as headset-free party chat over Xbox Live.
The depth sensor consists of an infrared laser projector combined with a monochrome CMOS sensor, and allows the Kinect sensor to see in 3D under any ambient light conditions. The sensing range of the depth sensor is adjustable, with the Kinect software capable of automatically calibrating the sensor based on gameplay and the player's physical environment, such as the presence of furniture.
Described by Microsoft personnel as the primary innovation of Kinect, the software technology enables advanced gesture recognition, facial recognition, and voice recognition. According to information supplied to retailers, the Kinect is capable of simultaneously tracking up to six people, including two active players for motion analysis with a feature extraction of 20 joints per player.
Through reverse engineering efforts, it has been determined that the Kinect sensor outputs video at a frame rate of 30 Hz, with the RGB video stream at 8-bit VGA resolution (640 × 480 pixels) with a Bayer color filter, and the monochrome video stream used for depth sensing at 11-bit VGA resolution (640 × 480 pixels with 2,048 levels of sensitivity). The Kinect sensor has a practical ranging limit of 1.2–3.5 metres (3.9–11 ft) when used with the Xbox software. The play space required for Kinect is roughly 6 m², although the sensor can maintain tracking through an extended range of approximately 0.7–6 metres (2.3–20 ft). The sensor has an angular field of view of 57° horizontally and 43° vertically, while the motorized pivot is capable of tilting the sensor as much as 27° either up or down. The microphone array features four microphone capsules, and operates with each channel processing 16-bit audio at a sampling rate of 16 kHz.
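Taking the stream formats above at face value, a back-of-the-envelope TypeScript sketch (ignoring any packing or compression on the wire) gives a feel for the raw volume of data the console receives from the sensor:

  // Figures quoted above: two 640 x 480 streams at 30 Hz,
  // 8 bits per pixel for the RGB (Bayer) stream and 11 bits per pixel for depth.
  const width = 640;
  const height = 480;
  const framesPerSecond = 30;

  const bitsPerSecond = (bitsPerPixel: number): number =>
    width * height * bitsPerPixel * framesPerSecond;

  console.log(`RGB stream:   ~${(bitsPerSecond(8) / 1e6).toFixed(1)} Mbit/s`);  // ~73.7 Mbit/s
  console.log(`Depth stream: ~${(bitsPerSecond(11) / 1e6).toFixed(1)} Mbit/s`); // ~101.4 Mbit/s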
Because the Kinect sensor's motorized tilt mechanism requires more power than can be supplied via the Xbox 360's USB ports, the Kinect sensor features a proprietary connector combining USB communication with additional power. Redesigned "Xbox 360 S" models include a special AUX port for accommodating the connector, while older models require a special power supply cable (included with the sensor) which splits the connection into separate USB and power connections; power is supplied from the mains by way of an AC adapter.

Friday, December 3, 2010

HTML5 video

HTML5 video is an element introduced in the HTML5 draft specification for the purpose of playing videos or movies[1], partially replacing the object element.
Adobe Flash Player is widely used to embed video on web sites such as YouTube, since many web browsers have Adobe's Flash Player pre-installed (with exceptions such as the browsers on the Apple iPhone and iPad and on Android 2.1 or less). HTML5 video is intended by its creators to become the new standard way to show video online, but has been hampered by lack of agreement as to which video formats should be supported in the video tag.

Supported video formats
The current HTML5 draft specification does not specify which video formats browsers should support in the video tag. User agents are free to support any video formats they feel are appropriate.

Default video format debate
Main article: Use of Ogg formats in HTML5
It is desirable to specify at least one video format which all user agents (browsers) should support. The ideal format should:
Have good compression, good image quality, and low decode processor use.
Be royalty-free.
In addition to software decoders, a hardware video decoder should exist for the format, as many embedded processors do not have the performance to decode video.
Initially, Ogg Theora was the recommended standard video format in HTML5, because it was not affected by any known patents. But on December 10, 2007, the HTML5 specification was updated, replacing the reference to concrete formats:
User agents should support Theora video and Vorbis audio, as well as the Ogg container format.
with a placeholder:
It would be helpful for interoperability if all browsers could support the same codecs. However, there are no known codecs that satisfy all the current players: we need a codec that is known to not require per-unit or per-distributor licensing, that is compatible with the open source development model, that is of sufficient quality as to be usable, and that is not an additional submarine patent risk for large companies. This is an ongoing issue and this section will be updated once more information is available.
Although Theora is not affected by known patents, companies such as Apple and (reportedly) Nokia are concerned about unknown patents that might affect it, whose owners might be waiting for a corporation with extensive financial resources to use the format before suing. Formats like H.264 might also be subject to unknown patents in principle, but they have been deployed much more widely and so it is presumed that any patent-holders would have already sued someone. Apple has also opposed requiring Ogg format support in the HTML standard (even as a "should" requirement) on the grounds that some devices might support other formats much more easily, and that HTML has historically not required particular formats for anything.
Some web developers criticized the removal of the Ogg formats from the specification. A follow-up discussion also occurred on the W3C questions and answers blog.
H.264/MPEG-4 AVC is widely used, and has good speed, compression, hardware decoders, and video quality, but is covered by patents. Except in particular cases, users of H.264 have to pay licensing fees to the MPEG LA, a group of patent-holders including Microsoft and Apple. As a result, it has not been considered as a required default codec.
Google's acquisition of On2 resulted in the WebM Project, a royalty-free, open source release of VP8 in a Matroska-based container with Vorbis audio. It is supported by Google Chrome, Opera and Mozilla Firefox.
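Because no single format is mandated, pages of this period typically offer several encodings and probe from script which one the browser can play. The TypeScript sketch below uses the standard HTMLMediaElement.canPlayType() method; the three MIME/codec strings are common examples for the formats discussed above, not values required by the specification:

  // canPlayType() answers "probably", "maybe", or "" (the empty string means it cannot play).
  const probe = document.createElement("video");

  const candidates: Record<string, string> = {
    "Ogg Theora/Vorbis": 'video/ogg; codecs="theora, vorbis"',
    "H.264/AAC (MP4)":   'video/mp4; codecs="avc1.42E01E, mp4a.40.2"',
    "WebM VP8/Vorbis":   'video/webm; codecs="vp8, vorbis"',
  };

  if (typeof probe.canPlayType === "function") {
    for (const [label, mimeType] of Object.entries(candidates)) {
      console.log(`${label}: ${probe.canPlayType(mimeType) || "no"}`);
    }
  } else {
    console.log("No HTML5 video support; fall back to a plug-in such as Flash.");
  }

In practice a page lists several source encodings on the video element and lets the browser pick the first one it can decode, falling back to a plug-in only when none match.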

Thursday, December 2, 2010

Spike Video Game Awards Previous Winners

2009 Awards
The 2009 VGAs were held on December 12, 2009 at the Nokia Event Deck in Los Angeles, California[5] and were the first VGAs not to have an overall host.[6] The show opened with a trailer announcing the sequel to Batman: Arkham Asylum. There were other exclusive looks at Prince of Persia: The Forgotten Sands, UFC 2010 Undisputed, Halo: Reach, and others. Samuel L. Jackson previewed LucasArts' newest upcoming Star Wars game, Star Wars: The Force Unleashed II. In addition, Green Day: Rock Band was announced and accompanied by a trailer.
With appearances from Stevie Wonder, the cast of MTV's Jersey Shore, Green Day and Jack Black, live music performances at the 2009 awards included Snoop Dogg and The Bravery.
Game of the Year: Uncharted 2: Among Thieves
Studio of the Year: Rocksteady Studios
Best Independent Game Fueled by Dew: Flower
Best Xbox 360 Game: Left 4 Dead 2
Best PS3 Game: Uncharted 2: Among Thieves
Best Wii Game: New Super Mario Bros. Wii
Best PC Game: Dragon Age: Origins
Best Handheld Game: Grand Theft Auto: Chinatown Wars
Best Shooter: Call of Duty: Modern Warfare 2
Best Action Adventure Game: Assassin's Creed II
Best RPG: Dragon Age: Origins
Best Multiplayer Game: Call of Duty: Modern Warfare 2
Best Fighting Game: Street Fighter IV
Best Individual Sports Game: UFC 2009 Undisputed
Best Team Sports Game: FIFA 10
Best Driving Game: Forza Motorsport 3
Best Music Game: The Beatles: Rock Band
Best Soundtrack: DJ Hero
Best Original Score: Halo 3: ODST
Best Graphics: Uncharted 2: Among Thieves
Best Game Based On A Movie/TV Show: South Park Let's Go Tower Defense Play!
Best Performance By A Human Female: Megan Fox as Mikaela Banes in Transformers: Revenge of the Fallen
Best Performance By A Human Male: Hugh Jackman as Wolverine in X-Men Origins: Wolverine
Best Cast: X-Men Origins: Wolverine
Best Voice: Jack Black for the voice of Eddie Riggs in Brütal Legend
Best Downloadable Game: Shadow Complex
Best DLC: Grand Theft Auto: The Ballad of Gay Tony
Most Anticipated Game of 2010: God of War III

2008 Awards
The 2008 VGAs were held on December 14, 2008. The show, hosted by Jack Black, featured ten previews of upcoming games. Musical performances included 50 Cent, The All-American Rejects, Weezer, and LL Cool J.
Game of the Year: Grand Theft Auto IV
Studio of the Year: Media Molecule
Gamer God: Will Wright, creator of The Sims and Spore
Best Shooter: Gears of War 2
Best RPG: Fallout 3
Best Individual Sports Game: Shaun White Snowboarding
Best Handheld Game: Professor Layton and the Curious Village
Best Graphics: Metal Gear Solid 4: Guns of the Patriots
Best Game Based on a Movie or TV Show: Lego Indiana Jones: The Original Adventures
Best Music Game: Rock Band 2
Best Driving Game: Burnout Paradise
Best Action Adventure Game: Grand Theft Auto IV
Best Soundtrack: Rock Band 2
Best Xbox 360 Game: Gears of War 2
Best Wii Game: Boom Blox
Best PS3 Game: LittleBigPlanet
Best PC Game: Left 4 Dead
Best Original Score: Metal Gear Solid 4: Guns of the Patriots
Best Multiplayer Game: Left 4 Dead
Best Independent game: World of Goo
Best Fighting Game: Soulcalibur IV
Best Male Voice: Michael Hollick as Niko Bellic in Grand Theft Auto IV
Best Female Voice: Debi Mae West as Meryl Silverburgh in Metal Gear Solid 4: Guns of the Patriots
Big Name in the Game Male: Kiefer Sutherland as Sgt. Roebuck in Call of Duty: World at War
Big Name in the Game Female: Jenny McCarthy as Special Agent Tanya Adams in Command & Conquer: Red Alert 3
Best Team Sports Game: NHL 09

2007 Awards
The 2007 VGAs, hosted by Samuel L. Jackson, aired December 9, 2007; the winners were announced ahead of the event, which was held at the Mandalay Bay Events Center in Las Vegas. The show featured performances by Foo Fighters and Kid Rock, and exclusive world videogame premieres of Borderlands, Gran Turismo 5 Prologue, Tom Clancy's Rainbow Six: Vegas 2 and TNA iMPACT!
Game of the Year: BioShock
Studio of the Year: Harmonix
Best Shooter: Call of Duty 4: Modern Warfare
Best RPG: Mass Effect
Best Military Game: Call of Duty 4: Modern Warfare
Best Individual Sports Game: Skate
Best Handheld Game: The Legend of Zelda: Phantom Hourglass
Best Graphics: Crysis
Best Game Based on a Movie or TV Show: The Simpsons Game
Best Rhythm Game: Rock Band
Best Driving Game: Colin McRae: Dirt
Best Action Game: Super Mario Galaxy
Best Team Sports Game: Madden NFL 08
Best Soundtrack: Rock Band
Breakthrough Technology: The Orange Box/Portal
Best Xbox 360 Game: BioShock
Best Wii Game: Super Mario Galaxy
Best PS3 Game: Ratchet & Clank Future: Tools of Destruction
Best PC Game: The Orange Box
Best Original Score: BioShock
Best Multiplayer Game: Halo 3
Most Addictive Video Game: Call of Duty 4: Modern Warfare

2006 Awards
The 2006 VGAs featured musical performances by Tenacious D and AFI and show appearances by 50 Cent, Eva Mendes, Sarah Silverman, Seth Green, Masi Oka, Hayden Panettiere, Brandon Routh, Rachael Leigh Cook, Tony Hawk, Michael Irvin, Method Man, Maria Menounos, Tyrese, Xzibit, James Gandolfini, Kurt Angle, among others. In character as Stewie Griffin and Tom Tucker from Family Guy, Seth MacFarlane served as the voice of the VGAs. The awards aired December 13, 2006 and were hosted by Samuel L. Jackson.
Game of the Year: The Elder Scrolls IV: Oblivion
Studio of the Year: Epic Games
Cyber Vixen of the Year: Alyx Vance - Half-Life 2: Episode One
Best Individual Sports Game: Tony Hawk's Project 8
Best Team Sports Game: NBA 2K7
Best Game Based on a Movie or TV Show: Lego Star Wars II: The Original Trilogy
Best Performance by a Human Male: Patrick Stewart in The Elder Scrolls IV: Oblivion
Best Supporting Male Performance: James Gandolfini in The Sopranos: Road to Respect
Best Performance by a Human Female: Vida Guerra in Scarface: The World Is Yours
Best Supporting Female Performance: Rachael Leigh Cook in Kingdom Hearts II
Best Cast: Family Guy Video Game!
Best Song: "Lights and Sounds" by Yellowcard in Burnout Revenge
Best Soundtrack: Guitar Hero II
Best Original Score: The Elder Scrolls IV: Oblivion
Best Driving Game Award: Burnout Revenge
Most Addictive Game: The Elder Scrolls IV: Oblivion
Best Fighting Game: Mortal Kombat: Armageddon
Best Action Game: Dead Rising
Best Shooter: Gears of War
Best Military Game: Company of Heroes
Best Graphics: Gears of War
Best Handheld Game: New Super Mario Bros.
Best Multiplayer Game: Gears of War
Breakthrough Technology: Wii
Best RPG: The Elder Scrolls IV: Oblivion
Best PC Game: Company of Heroes
Best Wireless Game: SWAT Force
Critic's Choice (released after 11/15/2006 and before 12/31/2006) - The Legend of Zelda: Twilight Princess
Gamer's Choice-Breakthrough Performance: Rosario Dawson in Marc Eckō's Getting Up: Contents Under Pressure
Gamer's Choice-Character of the Year: Jack Sparrow portrayed by Johnny Depp in Pirates of the Caribbean: The Legend of Jack Sparrow

2005 Awards
The 2005 VGAs were held December 10, 2005 at the Gibson Amphitheatre in Los Angeles. This was the first year that Samuel L. Jackson hosted the VGAs.
Game of the Year: Resident Evil 4
Action Game of the Year: God of War
Best Individual Sports Game: Tony Hawk's American Wasteland
Best Team Sports Game: Madden NFL 06
Cyber Vixen of the Year: Maria Menounos as Eva in From Russia with Love
Best Game Based on a Movie: Peter Jackson's King Kong: The Official Game of the Movie
Best Performance by a Human Male: Jack Black in Peter Jackson's King Kong: The Official Game of the Movie
Best Supporting Male Performance: Christopher Walken in True Crime: New York City
Best Performance by a Human Female: Charlize Theron in Æon Flux
Best Supporting Female Performance: Traci Lords in True Crime: New York City
Best Action Game: Peter Jackson's King Kong: The Official Game of the Movie
Best Original Song: "Maybe We Crazy" by 50 Cent in 50 Cent: Bulletproof
Best Soundtrack: Guitar Hero
Best Original Score: We Love Katamari
Designer of the Year: David Jaffe for God of War
Pontiac Best Driving Game Award (Viewer's Choice): Burnout Revenge
Most Addictive Game Fueled by Mountain Dew: World of Warcraft
Best Fighting Game: Fight Night Round 2
Best First-Person Action: F.E.A.R.
Best Military Game: Call of Duty 2
Best Graphics: Resident Evil 4
Best Handheld Game: Lumines
Best Multiplayer Game: World of Warcraft
Best Breakthrough Technology: PSP
Best RPG: World of Warcraft
Best PC Game: World of Warcraft
Best Wireless Game: Marc Eckō's Getting Up: Contents Under Pressure

2004 Awards
The 2004 VGAs were held in Santa Monica, California at the Barker Hangar. They were hosted by Snoop Dogg. The event featured musical performances including Sum 41, Ludacris and a special live performance by Snoop Dogg and the remaining members of The Doors performing "Riders on the Storm".
Game of the Year: Grand Theft Auto: San Andreas
Best Game Based on a Movie: The Chronicles of Riddick: Escape from Butcher Bay
Best Performance by a Human Female: Brooke Burke in Need for Speed: Underground 2
Best Performance by a Human Male: Samuel L. Jackson in Grand Theft Auto: San Andreas
Cyber Vixen of the Year: BloodRayne in BloodRayne 2
Best Driving Game: Burnout 3: Takedown
Best Sports Game: Madden NFL 2005
Best Fighting Game: Mortal Kombat: Deception
Best Action Game: Grand Theft Auto: San Andreas
Best First-Person Action: Halo 2
Best Song in a Video Game: "American Idiot" by Green Day in Madden NFL 2005
Best Soundtrack: Grand Theft Auto: San Andreas
Designer of the Year: Jason Jones and Bungie Studios for Halo 2
Best Military Game: Call of Duty: Finest Hour
Best PC Game: Half-Life 2
Best Wireless Game: Might and Magic
Best Graphics: Half-Life 2
Best New Technology: Nintendo DS
Best Handheld: Metroid: Zero Mission
Best Massively Multiplayer Game: City of Heroes
Best RPG: Fable
Most Addictive Game (viewer's choice): Burnout 3: Takedown
Best Gaming Publication (fan favorite): Game Informer
Best Gaming Web Site (fan favorite): GameSpot

2003 Awards
The 2003 VGAs were the first VGAs to be hosted by Spike TV. They were held at the MGM Grand in Las Vegas, Nevada on December 2, 2003 and aired on December 4, 2003. The event was hosted by David Spade and featured appearances by Lil' Kim, Jaime Pressly, DMX, P.O.D., Orlando Jones, and Cedric the Entertainer. The event also featured a WWE tag team wrestling match featuring the superstars Rey Mysterio, Chris Jericho, Trish Stratus, and Victoria.
Game of the Year: Madden NFL 2004
Best Sports Game: Madden NFL 2004
Best Action Game: True Crime: Streets of LA
Best Animation: Dead or Alive Xtreme Beach Volleyball
Best Game Based on a Movie: Enter the Matrix
Pontiac/GTO Driving Award: NASCAR Thunder 2004
Best Music: Def Jam Vendetta
Best Performance by a Human: Ray Liotta in Grand Theft Auto: Vice City
Most Anticipated: Halo 2
Most Addictive: Soulcalibur II
Best PC Game: Halo: Combat Evolved
Best Online Game: Final Fantasy XI
Best Handheld Game: Tom Clancy's Splinter Cell
Best Fighting Game: WWE SmackDown! Here Comes the Pain
Best First Person Action: Call of Duty
Best Fantasy Game: Star Wars: Knights of the Old Republic

Spike Video Game Awards 2010

The Spike Video Game Awards (VGA) is an award show hosted by Spike TV that recognizes the best computer and video games of the year. Beginning in 2003, the Spike TV Video Game Awards garnered much attention, since video game awards were not common prior to its introduction. The VGAs feature live music performances and appearances by popular performers in music, movies, and television. Additionally, preview trailers for upcoming games are highlighted. The show is produced by GameTrailers TV's Geoff Keighley. The event has been held at various locations in Los Angeles and Santa Monica, California as well as Las Vegas, Nevada.
The program has encountered criticism from the gaming community for its selection of both nominees and winners. Critics have claimed bias toward specific platforms and products. Additionally, winners are selected via online polling, leading critics to call the results merely a "popularity contest." Many consider the VGAs to be a gimmick awards show, whereas other awards given out by the G4 network, IGN, and other media outlets are considered to be more accurate.

2010 Awards
The 2010 VGAs will be held Saturday, December 11, 2010 in Los Angeles, CA at the L.A. Convention Center.
On Tuesday, 16 November 2010, it was revealed that two new titles would be unveiled the following day, 17 November 2010, ahead of their debut at the 2010 Awards. BioWare will announce a new title at the 2010 Awards; on November 21, 2010, Sony Russia tweeted that this announcement would be Mass Effect 3. Batman: Arkham City will have its gameplay debut at the 2010 awards, along with an unknown game announcement by Activision. LucasArts is rumored to announce Star Wars Battlefront 3, but it seems unlikely.

Nominees:
Game of the Year:
Call of Duty: Black Ops, God of War III, Halo: Reach, Mass Effect 2, Red Dead Redemption

Studio of the Year:
BioWare, Mass Effect 2
Blizzard Entertainment, StarCraft II: Wings of Liberty
Bungie Studios, Halo: Reach
Rockstar San Diego, Red Dead Redemption

Best Xbox 360 Game:
Alan Wake, Fable III, Halo: Reach, Mass Effect 2

Best PS3 Game:
God of War III, Heavy Rain, ModNation Racers, Red Dead Redemption

Best Wii Game:
Donkey Kong Country Returns, Kirby's Epic Yarn, Metroid: Other M, Super Mario Galaxy 2

Best PC Game:
Fallout: New Vegas, Mass Effect 2, Sid Meier's Civilization V, StarCraft II: Wings of Liberty

Best Handheld Game:
God of War: Ghost of Sparta, Metal Gear Solid: Peace Walker, Professor Layton and the Unwound Future, Super Scribblenauts

Best Shooter:
Battlefield: Bad Company 2, BioShock 2, Call of Duty: Black Ops, Halo: Reach

Best Action Adventure Game:
Assassin's Creed: Brotherhood, God of War III, Red Dead Redemption, Super Mario Galaxy 2

Best RPG:
Fable III, Fallout: New Vegas, Final Fantasy XIII, Mass Effect 2

Best Multi-player:
Battlefield: Bad Company 2, Halo: Reach, Call of Duty: Black Ops, StarCraft II: Wings of Liberty

Best Individual Sports Game:
EA Sports MMA, Shaun White Skateboarding, Tiger Woods PGA Tour 11, UFC Undisputed 2010

Best Team Sports Game:
FIFA 11, Madden NFL 11, NBA 2K11, MLB 10: The Show

Best Driving Game:
Blur, ModNation Racers, Need for Speed: Hot Pursuit, Split/Second

Best Music Game:
Dance Central, DJ Hero 2, Def Jam Rapstar, Rock Band 3

Best Soundtrack:
Def Jam Rapstar, DJ Hero 2, Guitar Hero: Warriors of Rock, Rock Band 3

Best Song in a Game:
"Basket Case" by Green Day, Green Day: Rock Band
"Black Rain" by Soundgarden, Guitar Hero: Warriors of Rock
"Far Away" by José González, Red Dead Redemption
"GoldenEye" by Nicole Scherzinger, GoldenEye 007 (2010)
"Won't Back Down" by Eminem, Call of Duty: Black Ops
"Replay/Rude Boy Mashup" by Iyaz/Rihanna, DJ Hero 2

Best Original Score:
God of War III, Halo: Reach, Mass Effect 2, Red Dead Redemption

Best Graphics:
God of War III, Heavy Rain, Kirby's Epic Yarn, Red Dead Redemption

Best Adapted Video Game:
Lego Harry Potter: Years 1-4, Scott Pilgrim vs. the World: The Game, Spider-Man: Shattered Dimensions, Star Wars: The Force Unleashed II, Transformers: War for Cybertron

Best Performance by a Human Male:
Daniel Craig as James Bond, James Bond 007: Blood Stone
Gary Oldman as Sergeant Reznov, Call of Duty: Black Ops
John Cleese as Jasper, Fable III
Martin Sheen as The Illusive Man, Mass Effect 2
Nathan Fillion as Sergeant Edward Buck, Halo: Reach
Neil Patrick Harris as Peter Parker/Amazing Spider-Man, Spider-Man: Shattered Dimensions
Rob Wiethoff as John Marston, Red Dead Redemption
Sam Worthington as Alex Mason, Call of Duty: Black Ops

Best Performance by a Human Female:
Dame Judi Dench as M, James Bond 007: Blood Stone
Danica Patrick as Herself, Blur
Emmanuelle Chriqui as The Numbers Lady, Call of Duty: Black Ops
Felicia Day as Veronica Santangelo, Fallout: New Vegas
Jennifer Hale as Commander Shepard (female version), Mass Effect 2
Kristen Bell as Lucy Stillman, Assassin's Creed: Brotherhood
Tricia Helfer as Sarah Kerrigan, StarCraft II: Wings of Liberty
Yvonne Strahovski as Miranda Lawson, Mass Effect 2

Best Downloadable Game:
Costume Quest, Lara Croft and the Guardian of Light, Monday Night Combat, Scott Pilgrim vs. the World: The Game

Best DLC:
BioShock 2: Minerva's Den
Borderlands: The Secret Armory of General Knoxx
Mass Effect 2: Lair of the Shadow Broker
Red Dead Redemption: Undead Nightmare

Best Independent Game:
Joe Danger, Limbo, Super Meat Boy, The Misadventures of P.B. Winterbottom

Most Anticipated Game:
Batman: Arkham City, BioShock: Infinite, Gears of War 3, Portal 2

Wednesday, December 1, 2010

HTML5

HTML5 is the next major revision of the HTML standard, currently under development.
Like its immediate predecessors, HTML 4.01 and XHTML 1.1, HTML5 is a standard for structuring and presenting content on the World Wide Web. It incorporates new features for the control of embedded audio and video, and the utilization of drag-and-drop—historically implemented in various ways depending on platform.

W3C standardization process
The Web Hypertext Application Technology Working Group (WHATWG) started work on the specification in June 2004 under the name Web Applications 1.0. As of March 2010, the specification is in the Draft Standard state at the WHATWG, and in Working Draft state at the W3C. Ian Hickson of Google, Inc. is the editor of HTML5.
The HTML5 specification was adopted as the starting point of the work of the new HTML working group of the World Wide Web Consortium (W3C) in 2007. This working group published the First Public Working Draft of the specification on January 22, 2008. The specification is an ongoing work, and is expected to remain so for many years, although parts of HTML5 are going to be finished and implemented in browsers before the whole specification reaches final Recommendation status.
According to the W3C timetable, HTML5 was estimated to reach W3C Recommendation by late 2010. However, the First Public Working Draft estimate was missed by eight months, and Last Call and Candidate Recommendation were expected to be reached in 2008, yet as of July 2010 HTML5 was still at the Working Draft stage in the W3C. HTML5 has been at Last Call in the WHATWG since October 2009.
Ian Hickson, editor of the HTML5 specification, expects the specification to reach the Candidate Recommendation stage during 2012. The criterion for the specification becoming a W3C Recommendation is “two 100% complete and fully interoperable implementations”. In an interview with TechRepublic, Hickson guessed that this would occur in the year 2022 or later. However, many parts of the specification are stable and may be implemented in products:

Some sections are already relatively stable and there are implementations that are already quite close to completion, and those features can be used today.
– WHAT Working Group, When will HTML5 be finished?

Markup
HTML5 introduces a number of new elements and attributes that reflect typical usage on modern websites. Some of them are semantic replacements for common uses of generic block (div) and inline (span) elements, for example nav (website navigation block), footer (usually referring to bottom of web page or to last lines of HTML code), or audio and video instead of object. Some deprecated elements from HTML 4.01 have been dropped, including purely presentational elements such as font and center, whose effects are achieved using Cascading Style Sheets. There is also a renewed emphasis on the importance of DOM scripting (e.g., JavaScript) in Web behavior.
The HTML5 syntax is no longer based on SGML despite the similarity of its markup. It has, however, been designed to be backward compatible with common parsing of older versions of HTML. It comes with a new introductory line that looks like an SGML document type declaration, <!DOCTYPE html>, which triggers the standards-compliant rendering mode. HTML5 also incorporates Web Forms 2.0, another WHATWG specification.
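Because browser support for the new elements and form controls arrived gradually, a common approach is to feature-detect them from script before relying on them. Below is a minimal TypeScript sketch of two such checks; the element and input-type names come from the paragraphs above, while the rest is purely illustrative.

  // Check whether the browser recognises HTML5's new semantic elements and
  // Web Forms 2.0 input types. Runs in any browser console.

  // Unknown tags are created as HTMLUnknownElement, so a recognised element
  // (nav, video, etc.) is not an instance of it.
  const knowsElement = (tag: string): boolean =>
    !(document.createElement(tag) instanceof HTMLUnknownElement);

  // Browsers that do not support a new input type silently revert it to "text".
  const knowsInputType = (type: string): boolean => {
    const input = document.createElement("input");
    input.setAttribute("type", type);
    return input.type === type;
  };

  for (const tag of ["nav", "footer", "audio", "video", "canvas"]) {
    console.log(`<${tag}> supported:`, knowsElement(tag));
  }
  for (const type of ["email", "date", "range"]) {
    console.log(`input type="${type}" supported:`, knowsInputType(type));
  }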

New APIs
In addition to specifying markup, HTML5 specifies scripting application programming interfaces (APIs). Existing Document Object Model (DOM) interfaces are extended, and previously de facto features are documented. There are also new APIs, such as the following (a short sketch of a few of them appears after this list):
  • The canvas element for immediate-mode 2D drawing (see the Canvas 2D API Specification 1.0)
  • Timed media playback
  • Offline storage database (offline web applications). See Web Storage
  • Document editing
  • Drag-and-drop
  • Cross-document messaging
  • Browser history management
  • MIME type and protocol handler registration.
  • Microdata
Not all of the above technologies are included in the W3C HTML5 specification, though they are in the WHATWG HTML specification. Some related technologies, which are not part of either the W3C HTML5 or the WHATWG HTML specification, are
  • Geolocation
  • Web SQL Database, a local SQL Database.
  • The Indexed Database API, an indexed hierarchical key-value store (formerly WebSimpleDB).
The W3C publishes specifications for these separately.
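As a rough illustration of how a few of these APIs are driven from script, the following TypeScript sketch exercises canvas 2D drawing, Web Storage, and browser history management. It assumes a page containing a canvas element with id "board"; that id and the stored values are illustrative and not part of any specification.

  // Immediate-mode 2D drawing on a canvas element.
  const canvas = document.getElementById("board") as HTMLCanvasElement;
  const ctx = canvas.getContext("2d");
  if (ctx) {
    ctx.fillStyle = "#336699";
    ctx.fillRect(10, 10, 120, 60);          // a filled rectangle
    ctx.strokeText("HTML5 canvas", 15, 90); // and some text
  }

  // Web Storage: a simple key-value store that survives page reloads.
  localStorage.setItem("lastVisit", new Date().toISOString());
  console.log("last visit:", localStorage.getItem("lastVisit"));

  // Browser history management: change the address bar without a page load.
  history.pushState({ section: "html5-apis" }, "", "?section=html5-apis");
  window.addEventListener("popstate", (e) => {
    console.log("returned to state:", e.state);
  });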

Monday, November 29, 2010

Top 10 Best Strategy Games for PC (Part 2)

Rise of Nations
This game features the idea of expanding territory, similar to Civilization IV, but employs real-time gameplay. Territory is expanded by building more cities and forts within the borders, which opens up more options on a technology tree that can be used to customize the territory. Cities support citizen units, which can be assigned to specific tasks and will look for work on their own when idle. Rise of Nations uses six different resources (food, timber, metal, oil, wealth, and knowledge), which are used to create buildings and units and to research technologies.
Any nation within the game is playable at any point in history, regardless of the actual historical timeline of that nation, but resources only become available in the age in which they were originally utilized. Keeping a balance between offensive and defensive forces is crucial to successful gameplay, as is the state of the economy. Rise of Nations is both rewarding and frustrating in turns, but always highly addictive.

Starcraft II: Wings of Liberty
This long-awaited sequel to the original Starcraft has earned a spot on this list in its own right. Finally released in July 2010, the story picks up four years after the events of the original Starcraft and follows an insurgent group attempting to make its way across the Terran Dominion. The campaign's non-linear mission structure keeps the game interesting and is a minor departure from the original; however, the order in which missions are completed does not disrupt the narrative.
Units remain largely the same, with some additional specialized units available only for campaign play and not in regular multi-player, such as the Terran Wraith, Vulture, and Diamondback. There is also a map editor, similar to the original StarEdit, which allows for customization of terrain and campaigns.
A word of warning for players hoping to have a nostalgic evening of strategy gaming with local friends, though: Blizzard has killed LAN play with this release, so players can only play together online, and on the same server. Any players wishing to play together must ensure that they've signed up for the same server at the time of original registration, because the game is region-locked.

Warcraft III
Before it was an extremely popular (and often parodied) MMORPG, the "world" of Warcraft existed in a series of real-time strategy games. Standard resource-gathering and unit-building rules apply, with a "black mask" covering unexplored areas of the map. Once explored, the black mask is removed, but these areas must remain within sight of at least one unit or they will be covered by the "fog of war".
With AI-controlled, universally hostile units called "creeps" guarding resource-rich areas, there is a slight RPG element, especially since players earn experience points, gold, and items for defeating them. Also introduced in this game was the shift from day to night: night provides more cover but reduces the ability to see incoming attackers.
There are five total campaigns, which are broken up according to the various character race factions, though some specific "hero" characters are retained across each race's campaigns. Warcraft III still has a devoted following and, in spite of the massive popularity of the MMORPG, remains a favorite among fans of Warcraft and strategy games alike.

Supreme Commander
Considered by many to be a spiritual successor to 1997's Total Annihilation, Supreme Commander begins with a single unit, which must be expanded and multiplied to aid in a war that erupts in the future after humans develop portal technology, referred to as a "quantum gateway".
The three warring factions are: the Cybran Nation, composed of cyborgs who wish to break away and get out from under the thumb of the United Earth Federation; the United Earth Federation itself, which represents a united, Earth-based government for all three factions; and the Aeon Illuminate, which wishes to emulate the so-called "Golden Age" of Earth, in which alien life was first discovered but relations soon went south due to xenophobia. Naturally, the thing to do when fighting xenophobia is to go out and wage war on anyone who doesn't share the same beliefs.
The critical moment comes when the UEF decides to use "Black Sun", a weapon which, if deployed, will wipe out both of the other factions' planets. However, the Cybrans and the Aeon Illuminate have their own secret weapons as well, and one of these does involve use of the phrase "Monkeylord", which is reason enough to put any game on a top 10 list.

Company of Heroes
This release and its subsequent expansions from Relic Entertainment take place during WWII. Players are put in charge of two U.S. units during the Allied liberation of France from occupying Nazi forces.
Micromanagement skills are key in this game, and perhaps provide the tiniest degree of realism (minus the horrific violence, of course, as this is not typically depicted in strategy games) as to what it must have been like to actually "storm the beach at Normandy", considering that in reality there were over 150,000 troops, and each one had to know exactly what to do, when, and where in order for the mission to succeed.
Players take control of points on the campaign map, collect munitions, fuel, and manpower, and can take over civilian buildings to convert them into barracks, which aid in the creation of new units.
Company of Heroes has been praised as one of the best real-time strategy games of all time, with several successful expansions and both LAN and online options available for multi-player.

Top 10 Best Strategy Games for PC

Strategy games, whether they are turn-based or real-time, occupy a unique niche within gaming. While there is not always the thrill of the fight, there is often a deep satisfaction in outsmarting other players and, in particular, the AI. Here are some of the most favored titles of recent years, in no particular order.

Starcraft
This classic real-time strategy game, released in 1998, is still one of the most popular releases of all time. Three species duke it out in the 26th century to gain control of a faraway chunk of the Milky Way. Terrans are humans who've been exiled from Earth. Another humanoid species, the Protoss, who are fairly advanced and possess various psychic abilities, are trying to keep their culture safe from the insectoid Zerg, who are bent on assimilating everyone else.
Starcraft is largely considered a game that revolutionized real-time strategy gameplay, as well as providing a deeply engaging story. There is still a thriving community of professional competitors, particularly in Asia, complete with sponsorships and televised events. Zerg Rush!

Age of Empires III
This real-time strategy game, released in 2005, takes place largely during the colonial era, from the late 1400s to the 1850s. Players choose a civilization from Europe, Asia, or North America and must develop a colony from an initial settlement into a thriving empire. Development of the colony proceeds through various technological ages, but unlike other games about territorial conquest, such as the Civilization series, where it is technically possible to play an entire game without fighting, this game requires the player to destroy the enemy's colony. Emphasis is placed on producing civilian units to collect resources and stimulate the economy, and on developing a military to defend against rival colonies.
Another feature unique to this game is the use of a "Politician System": when advancing to a new age, players must choose from among several politicians, each of whom grants various bonuses. Difficulty level is assigned to specific colonies, as opposed to a more customized method, which often serves as motivation to keep playing.

Warhammer 40,000: Dawn of War II
This title, a sequel and marked improvement to the first Dawn of War, is unique in that its multiplayer option includes co-op play, as opposed to only pitting players against one another. The campaigns, unlike those found in this game's predecessors, are non-linear and have no base-building elements. Units must be selected before a mission begins, and no new units are issued once it is in progress.
Players are faced with decisions regarding the missions and locations chosen in which to fight, and consequences are based on these choices. Even after choices are made, missions can have multiple objectives which may be mutually exclusive depending on the further unfolding of events.
This game can appeal to those who normally prefer RPGs, as players are able to level up, and some units can be equipped with scavenged weaponry and armor. This is a good crossover game for any die-hard RPG fans who are interested in experiencing a strategy game without completely unfamiliar elements.

World in Conflict
Many strategy games take place either in the distant past or future, but this title, released in 2007, is set in more recent times, during the collapse of the Soviet Union, but speculates as to what would have happened if Soviet forces had attempted to remain in power through aggressive action.
There is no resource collection or base building in this game; rather, reinforcement units are bought with a predetermined amount of in-game points and dropped onto the battlefield. When units are destroyed, the points gradually return to the player's balance so that new units can be acquired.
In multi-player games, players choose from among four preset roles: Air, Armor, Infantry, and Support. Each has distinct abilities, such as unusually effective long-range attacks or the ability to hide easily, but these are usually balanced by a weakness of some sort, like being vulnerable to attack on open ground or being useless in short-range skirmishes.
Players will enjoy the small user interface, as it provides a more open view of the battlefield and the ability to manage individual units more effectively.

Civilization IV
Like the other titles in this series, Civilization IV is a turn-based game in which the player takes on the role of the leader of an empire that must be built from scratch, starting from a single city founded by a settler in 4000 B.C. As the empire expands, so do the options for infrastructure, military fortification and training, the study of science and art, religion, and all the other stuff that empires have. Build "wonders" around the empire, and experience the birth of historical figures who can enhance various aspects of cities within the empire.
This game, like many turn-based strategy games, can feel slow for the first few turns, but things get interesting once contact is made with neighboring cultures and the potential for trade, aid, and war arises. Bonus: Leonard Nimoy congratulates you every time you attain a new technology or hit a milestone within your empire.

Next: Part 2

Sunday, November 28, 2010

World Wide Web Consortium

The World Wide Web Consortium (W3C) is the main international standards organization for the World Wide Web (abbreviated WWW or W3).
Founded and headed by Tim Berners-Lee, the consortium is made up of member organizations which maintain full-time staff to work together on the development of standards for the World Wide Web. As of 8 September 2009, the consortium has 356 members.
W3C also engages in education and outreach, develops software and serves as an open forum for discussion about the Web.

History
The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he left the European Organization for Nuclear Research (CERN) in October, 1994. It was founded at the Massachusetts Institute of Technology Laboratory for Computer Science (MIT/LCS) with support from the European Commission and the Defense Advanced Research Projects Agency (DARPA), which had pioneered the Internet.
W3C was created to ensure compatibility and agreement among industry members in the adoption of new standards. Prior to its creation, incompatible versions of HTML were offered by different vendors, increasing the potential for inconsistency between web pages. The consortium was created to get all those vendors to agree on a set of core principles and components which would be supported by everyone.
It was originally intended that CERN host the European branch of W3C; however, CERN wished to focus on particle physics, not information technology. In April 1995 the Institut national de recherche en informatique et en automatique (INRIA) became the European host of W3C, with Keio University becoming the Japanese branch in September 1996. Starting in 1997, W3C created regional offices around the world; as of September 2009, it has eighteen World Offices covering Australia, the Benelux countries (Netherlands, Luxembourg, and Belgium), Brazil, China, Finland, Germany and Austria, Greece, Hong Kong, Hungary, India, Israel, Italy, South Korea, Morocco, South Africa, Spain, Sweden, and the United Kingdom and Ireland.
In January 2003, the European host was transferred from INRIA to the European Research Consortium for Informatics and Mathematics (ERCIM), an organization that represents European national computer science laboratories.

Recommendations and Certifications
In accord with the W3C Process Document, a Recommendation progresses through five maturity levels:

  • Working Draft
  • Last Call Working Draft
  • Call for Implementation
  • Call for Review of a Proposed Recommendation
  • W3C Recommendation (REC)

A Recommendation may be updated by separately published Errata until enough substantial edits accumulate, at which time a new edition of the Recommendation may be produced (e.g., XML is now in its fifth edition). W3C also publishes various kinds of informative Notes which are not intended to be treated as standards.
W3C leaves it up to manufacturers to follow the Recommendations. Many of its standards define levels of conformance, which developers must follow if they wish to label their product W3C-compliant. Like the standards of other organizations, W3C Recommendations are sometimes implemented only partially. The Recommendations are under a royalty-free patent license, allowing anyone to implement them.
Unlike the ISOC and other international standards bodies, the W3C does not have a certification program. Any certification program has benefits and drawbacks; the W3C has decided, for now, that starting such a program would risk creating more drawbacks for the community than benefits.

Wednesday, November 17, 2010

Augmented Reality Applications

Current applications

Advertising: Marketers have started to use AR to promote products via interactive AR applications. For example, at the 2008 LA Auto Show, Nissan unveiled the concept vehicle Cube and presented visitors with a brochure which, when held against a webcam, showed several versions of the vehicle. In August 2009, Best Buy ran a circular with an augmented reality code that allowed users with a webcam to interact with the product in 3D. In 2010, Walt Disney used mobile augmented reality to connect a movie experience to outdoor advertising.

Support with complex tasks: Complex tasks such as assembly, maintenance, and surgery can be simplified by inserting additional information into the field of view. For example, labels can be displayed on parts of a system to clarify operating instructions for a mechanic who is performing maintenance on the system. AR can include images of hidden objects, which can be particularly effective for medical diagnostics or surgery. Examples include a virtual X-ray view based on prior tomography or on real time images from ultrasound and microconfocal probes or open NMR devices. A doctor could observe the fetus inside the mother's womb. See also Mixed reality.

Navigation devices: AR can augment the effectiveness of navigation devices for a variety of applications. For example, building navigation can be enhanced for the purpose of maintaining industrial plants. Outdoor navigation can be augmented for military operations or disaster management. Head-up displays or personal display glasses in automobiles can be used to provide navigation hints and traffic information. These types of displays can be useful for airplane pilots, too. Head-up displays are currently used in fighter jets as one of the first AR applications. These include full interactivity, including eye pointing.

Industrial Applications: AR can be used to compare the data of digital mock-ups with physical mock-ups for efficiently finding discrepancies between the two sources. It can further be employed to safeguard digital data in combination with existing real prototypes, and thus save or minimize the building of real prototypes and improve the quality of the final product.

Military and emergency services: AR can be applied to military and emergency services as wearable systems to provide information such as instructions, maps, enemy locations, and fire cells.

Prospecting: In the fields of hydrology, ecology, and geology, AR can be used to display an interactive analysis of terrain characteristics. Users could use, and collaboratively modify and analyze, interactive three-dimensional maps.

Art: AR can be incorporated into artistic applications that allow artists to create art in real time over reality, such as painting, drawing, and modeling. One such example is the Eyewriter, developed in 2009 by Zachary Lieberman and a group formed by members of Free Art and Technology (FAT), OpenFrameworks, and the Graffiti Research Lab to help a graffiti artist who became paralyzed draw again.

Architecture: AR can be employed to simulate planned construction projects.

Sightseeing: Models may be created to include labels or text related to the objects/places visited. With AR, users can rebuild ruins, buildings, or even landscapes as they previously existed.

Collaboration: AR can help facilitate collaboration among distributed team members via conferences with real and virtual participants. The Hand of God is a good example of a collaboration system.

Entertainment and education: AR can be used in the fields of entertainment and education to create virtual objects in museums and exhibitions, theme park attractions (such as Cadbury World), games (such as ARQuake) and books.

Music: Pop group Duran Duran included interactive AR projections in their stage show during their 2000 Pop Trash concert tour. Sydney band Lost Valentinos launched the world's first interactive AR music video on 16 October 2009, in which users could print out five markers, each representing a pre-recorded performance from one band member, interact with them live and in real time via their computer webcam, and record their own unique music video clips to share on YouTube.

Future applications
Augmented reality remains a costly technology to develop, so the future of AR depends on whether those costs can be reduced. If AR technology becomes affordable, it could become very widespread, but for now major industries are the main buyers able to make use of it.
  • Expanding a PC screen into the real environment: program windows and icons appear as virtual devices in real space and are eye- or gesture-operated, by gazing or pointing. A single personal display (glasses) could concurrently simulate a hundred conventional PC screens or application windows all around a user.
  • Virtual devices of all kinds, e.g. replacements for traditional screens and control panels, and entirely new applications impossible in "real" hardware, like 3D objects interactively changing their shape and appearance based on the current task or need.
  • Enhanced media applications, like pseudo-holographic virtual screens, virtual surround cinema, and virtual "holodecks" (allowing computer-generated imagery to interact with live entertainers and audiences).
  • Virtual conferences in "holodeck" style.
  • Replacement of cellphone and car-navigator screens: eye-dialing, insertion of information directly into the environment (e.g. guiding lines directly on the road), as well as enhancements like "X-ray" views.
  • Virtual plants, wallpapers, panoramic views, artwork, decorations, illumination, etc., enhancing everyday life. For example, a virtual window could be displayed on a regular wall showing a live feed from a camera placed on the exterior of the building, allowing the user to effectively toggle the wall's transparency.
  • With AR systems reaching the mass market, we may see virtual window dressings, posters, traffic signs, Christmas decorations, advertisement towers, and more. These may be fully interactive even at a distance, by eye pointing for example.
  • Virtual gadgetry becomes possible. Any physical device currently produced to assist in data-oriented tasks (such as a clock, radio, PC, airport arrival/departure board, stock ticker, PDA, PMP, informational posters/fliers/billboards, in-car navigation system, etc.) could be replaced by a virtual device that costs nothing to produce aside from the cost of writing the software. Examples might be a virtual wall clock, or a to-do list for the day docked by your bed for you to look at first thing in the morning.
  • Subscribable group-specific AR feeds. For example, a manager on a construction site could create and dock instructions, including diagrams, in specific locations on the site; the workers could refer to this feed of AR items as they work. Another example could be patrons at a public event subscribing to a feed of direction- and information-oriented AR items.
  • AR systems can help the visually impaired navigate much more effectively (combined with text-to-speech software).
  • Computer games which make use of position and environment information to place virtual objects, opponents, and weapons overlaid in the player's visual field.

Friday, November 12, 2010

Augmented Reality Technology

Hardware
The main hardware components for augmented reality are the display, tracking, input devices, and computer. A combination of a powerful CPU, camera, accelerometer, GPS, and solid-state compass is often present in modern smartphones, which makes them promising platforms for augmented reality.

Display
There are three major display techniques for Augmented Reality:
Head Mounted Displays
Handheld Displays
Spatial Displays

Head Mounted Displays
A Head Mounted Display (HMD) places images of both the physical world and registered virtual graphical objects over the user's view of the world. HMDs are either optical see-through or video see-through in nature. An optical see-through display employs half-silvered mirror technology to allow views of the physical world to pass through the lens and graphical overlay information to be reflected into the user's eyes. The HMD must be tracked with a six degree of freedom sensor; this tracking allows the computing system to register the virtual information to the physical world. The main advantage of HMD AR is the immersive experience for the user: the graphical information is slaved to the user's view. The most common products employed are the MicroVision Nomad, Sony Glasstron, Vuzix AR, and I/O Displays.

Handheld Displays
Handheld augmented reality employs a small computing device with a display that fits in a user's hand. All handheld AR solutions to date have employed video see-through techniques to overlay graphical information on the physical world. Initially, handheld AR relied on sensors such as digital compasses and GPS units for six degree of freedom tracking; this moved on to fiducial marker systems such as the ARToolKit, and today vision systems such as SLAM or PTAM are being employed for tracking. Handheld display AR promises to be the first commercial success for AR technologies. The two main advantages of handheld AR are the portable nature of handheld devices and the ubiquitous nature of camera phones.

Spatial Displays
Instead of the user wearing or carrying the display, as with head-mounted displays or handheld devices, Spatial Augmented Reality (SAR) makes use of digital projectors to display graphical information onto physical objects. The key difference in SAR is that the display is separated from the users of the system. Because the displays are not associated with each user, SAR scales naturally up to groups of users, thus allowing for collocated collaboration between users. SAR has several advantages over traditional head-mounted displays and handheld devices. The user is not required to carry equipment or wear the display over their eyes. This makes spatial AR a good candidate for collaborative work, as the users can see each other's faces. A system can be used by multiple people at the same time without each having to wear a head-mounted display. Spatial AR does not suffer from the limited display resolution of current head-mounted displays and portable devices. A projector-based display system can simply incorporate more projectors to expand the display area. Where portable devices have only a small window into the world for drawing, a SAR system can display on any number of surfaces of an indoor setting at once. The tangible nature of SAR makes it an ideal technology to support design, as SAR supports both graphical visualisation and passive haptic sensation for the end users: people are able to touch physical objects, and it is this process that provides the passive haptic sensation.

Tracking
Modern mobile augmented reality systems use one or more of the following tracking technologies: digital cameras and/or other optical sensors, accelerometers, GPS, gyroscopes, solid-state compasses, RFID, and wireless sensors. Each of these technologies offers a different level of accuracy and precision. Most important is tracking the pose and position of the user's head so that the user's view can be augmented. The user's hand(s), or a handheld input device, can also be tracked to provide a 6DOF interaction technique. Stationary systems can employ 6DOF tracking systems such as Polhemus, ViCON, A.R.T, or Ascension.
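As a concrete (and much simplified) illustration of sensor-based tracking, the TypeScript sketch below uses only a GPS position and a compass heading to decide where across the screen the label for a distant point of interest should be drawn. The field of view, screen width, and coordinates are assumed values; a real system would also fuse gyroscope and accelerometer data and track vertical orientation.

  const toRad = (deg: number): number => (deg * Math.PI) / 180;
  const toDeg = (rad: number): number => (rad * 180) / Math.PI;

  // Initial great-circle bearing from the observer to the target,
  // in degrees clockwise from north.
  function bearingTo(lat1: number, lon1: number, lat2: number, lon2: number): number {
    const phi1 = toRad(lat1);
    const phi2 = toRad(lat2);
    const dLon = toRad(lon2 - lon1);
    const y = Math.sin(dLon) * Math.cos(phi2);
    const x = Math.cos(phi1) * Math.sin(phi2) - Math.sin(phi1) * Math.cos(phi2) * Math.cos(dLon);
    return (toDeg(Math.atan2(y, x)) + 360) % 360;
  }

  // Horizontal pixel position of the target, or null when it lies outside the
  // camera's horizontal field of view. headingDeg is the compass heading the
  // camera is currently pointing at.
  function screenX(targetBearing: number, headingDeg: number,
                   fovDeg = 60, screenWidthPx = 1080): number | null {
    const delta = ((targetBearing - headingDeg + 540) % 360) - 180; // -180..180 degrees
    if (Math.abs(delta) > fovDeg / 2) return null;                  // off screen
    return (delta / fovDeg + 0.5) * screenWidthPx;
  }

  // Usage: a landmark roughly east of the observer, camera facing due east.
  const b = bearingTo(48.8584, 2.2945, 48.8606, 2.3376);
  console.log("bearing:", b.toFixed(1), "deg, screen x:", screenX(b, 90));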

Input devices
This is a current open research question. Some systems, such as the Tinmith system, employ pinch-glove techniques. Another common technique is a wand with a button on it. In the case of a smartphone, the phone itself can be used as a 3D pointing device, with the 3D position of the phone recovered from the camera images.

Computer
Camera-based systems require a powerful CPU and a considerable amount of RAM for processing camera images. Wearable computing systems typically employ a laptop in a backpack configuration, while stationary systems use a traditional workstation with a powerful graphics card. Sound-processing hardware can also be included in augmented reality systems.

Software
To merge real-world camera images with virtual 3D imagery consistently, the virtual images must be attached to real-world locations in a visually realistic way. That means a real-world coordinate system, independent of the camera, must be recovered from the camera images. This process is called image registration and is part of Azuma's definition of augmented reality.
Augmented reality image registration uses different methods of computer vision, mostly related to video tracking. Many computer vision methods of augmented reality are inherited from similar visual odometry methods.
Usually these methods consist of two stages. In the first stage, interest points, fiducial markers, or optical flow are detected in the camera images. This stage can use feature detection methods such as corner detection, blob detection, edge detection, or thresholding, and/or other image processing methods.
In the second stage, a real-world coordinate system is recovered from the data obtained in the first stage. Some methods assume that objects with known 3D geometry (or fiducial markers) are present in the scene and make use of that data; in some of these cases the scene's 3D structure must be precalculated beforehand. If not all of the scene is known beforehand, SLAM (simultaneous localization and mapping) techniques can be used to map the relative positions of fiducial markers or 3D models. If no assumption is made about the scene's 3D geometry, structure-from-motion methods are used. Techniques used in the second stage include projective (epipolar) geometry, bundle adjustment, rotation representation with the exponential map, and Kalman and particle filters.
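The two-stage process is easiest to see in the planar, marker-based case: once the first stage has detected the four corners of a square fiducial marker in the camera image, the second stage can recover the mapping from the marker plane to the image as a 3x3 homography. The TypeScript sketch below does this with a direct linear transform over exactly four correspondences; the corner coordinates are made up for illustration, and full systems recover a 6DOF camera pose from many more points with outlier rejection.

  type Point = { x: number; y: number };

  // Solve the linear system A·h = b with Gaussian elimination (partial pivoting),
  // which is sufficient for a well-conditioned 8x8 system.
  function solve(A: number[][], b: number[]): number[] {
    const n = b.length;
    for (let col = 0; col < n; col++) {
      let pivot = col;
      for (let r = col + 1; r < n; r++) {
        if (Math.abs(A[r][col]) > Math.abs(A[pivot][col])) pivot = r;
      }
      [A[col], A[pivot]] = [A[pivot], A[col]];
      [b[col], b[pivot]] = [b[pivot], b[col]];
      for (let r = col + 1; r < n; r++) {
        const f = A[r][col] / A[col][col];
        for (let c = col; c < n; c++) A[r][c] -= f * A[col][c];
        b[r] -= f * b[col];
      }
    }
    const x = new Array(n).fill(0);
    for (let r = n - 1; r >= 0; r--) {
      let sum = b[r];
      for (let c = r + 1; c < n; c++) sum -= A[r][c] * x[c];
      x[r] = sum / A[r][r];
    }
    return x;
  }

  // Direct linear transform for exactly four correspondences: marker corner
  // (x, y) maps to image pixel (u, v) under H, with h33 fixed to 1.
  function homographyFromFourPoints(marker: Point[], image: Point[]): number[][] {
    const A: number[][] = [];
    const b: number[] = [];
    for (let i = 0; i < 4; i++) {
      const { x, y } = marker[i];
      const { x: u, y: v } = image[i];
      A.push([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.push(u);
      A.push([0, 0, 0, x, y, 1, -v * x, -v * y]); b.push(v);
    }
    const h = solve(A, b);
    return [[h[0], h[1], h[2]], [h[3], h[4], h[5]], [h[6], h[7], 1]];
  }

  // Usage: a 100x100 marker seen slightly tilted in the camera image.
  const H = homographyFromFourPoints(
    [{ x: 0, y: 0 }, { x: 100, y: 0 }, { x: 100, y: 100 }, { x: 0, y: 100 }],
    [{ x: 310, y: 205 }, { x: 412, y: 212 }, { x: 418, y: 318 }, { x: 305, y: 309 }],
  );
  console.log(H); // H now maps marker-plane points into image pixels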