Pages

Wednesday, September 23, 2009

Hard Disk Drive

A hard disk drive (HDD) is a non-volatile, random access device for digital data. It features rotating rigid platters on a motor-driven spindle within a protective enclosure. Data is magnetically read and written on the platter by read/write heads that float on a film of air above the platters.

Introduced by IBM in 1956, hard disk drives have fallen in cost and physical size over the years while dramatically increasing capacity. Hard disk drives have been the dominant device for secondary storage of data in general purpose computers since the early 1960s. They have maintained this position because advances in their areal recording density have kept pace with the requirements for secondary storage. Today's HDDs operate over high-speed serial interfaces, namely Serial ATA (SATA) or Serial Attached SCSI (SAS).

History
Hard disk drives were introduced in 1956 as data storage for an IBM accounting computer and were developed for use with general purpose mainframe and mini computers.
Driven by areal density doubling every two to four years since their invention, HDDs have changed in many ways; a few highlights include:
  • Capacity per HDD increasing from 3.75 megabytes to greater than 1 terabyte, a greater than 270 thousand to 1 improvement.
  • Size of HDD decreasing from 87.9 cubic feet (a double wide refrigerator) to 0.002 cubic feet (2½-inch form factor, a pack of cards), a greater than 44 thousand to 1 improvement.
  • Price decreasing from about $15,000 per megabyte to less than $0.0001 per megabyte ($100/1 terabyte), a greater than 150 million to 1 improvement.
  • Average access time decreasing from greater than 0.1 second to a few thousandths of a second, a greater than 40 to 1 improvement (a quick arithmetic check of these ratios follows this list).
  • Market application expanding from general purpose computers to most computing applications including consumer applications.
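The improvement ratios quoted above follow from simple arithmetic. Below is a minimal check in Python; the capacity, size and price figures are the ones given in the list, and "a few thousandths of a second" is taken as roughly 2.5 ms purely for illustration.

    # Back-of-the-envelope check of the improvement ratios quoted above.
    capacity_ratio = (1e12 / 1e6) / 3.75   # 1 TB vs 3.75 MB          -> ~266,667 : 1
    volume_ratio   = 87.9 / 0.002          # cubic feet               -> ~43,950 : 1
    price_ratio    = 15000 / 0.0001        # dollars per megabyte     -> 150,000,000 : 1
    access_ratio   = 0.1 / 0.0025          # seconds (2.5 ms assumed) -> 40 : 1

    for name, ratio in [("capacity", capacity_ratio), ("volume", volume_ratio),
                        ("price", price_ratio), ("access time", access_ratio)]:
        print(f"{name}: ~{ratio:,.0f} to 1")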

Technology

Magnetic recording
HDDs record data by magnetizing ferromagnetic material directionally. Sequential changes in the direction of magnetization represent patterns of binary data bits. The data are read from the disk by detecting the transitions in magnetization and decoding the originally written data. Different encoding schemes, such as modified frequency modulation (MFM), group code recording, and run-length limited encoding, are used.
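As a rough illustration of how such a scheme maps data bits onto flux transitions, the sketch below implements the basic modified frequency modulation rule: each data bit becomes two channel bits, and a clock transition (a '1' channel bit) is inserted only between two consecutive zero data bits. This is a simplified model for illustration, not drive firmware, and the data bit preceding the sequence is assumed to be 0.

    def mfm_encode(bits):
        """Encode data bits (0/1) into MFM channel bits.

        Each data bit becomes a (clock, data) pair:
          data 1 -> clock 0, data 1
          data 0 -> data 0; clock 1 only if the previous data bit was also 0
        A '1' channel bit corresponds to a flux transition on the platter.
        """
        out, prev = [], 0          # assume the preceding data bit was 0
        for b in bits:
            clock = 1 if (b == 0 and prev == 0) else 0
            out += [clock, b]
            prev = b
        return out

    print(mfm_encode([1, 0, 1, 0, 0, 0, 0, 1]))
    # -> [0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1]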
A typical HDD design consists of a spindle that holds flat circular disks called platters, onto which the data are recorded. The platters are made from a non-magnetic material, usually aluminum alloy or glass, and are coated with a shallow layer of magnetic material, typically 10–20 nm in depth—for reference, standard copy paper is 0.07–0.18 millimetre (70,000–180,000 nm)—with an outer layer of carbon for protection.
The platters are spun at speeds varying from 3,000 RPM in energy-efficient portable devices to 15,000 RPM for high-performance servers. Information is written to, and read from, a platter as it rotates past devices called read-and-write heads that fly very close (tens of nanometers in new drives) above the magnetic surface. The read-and-write head is used to detect and modify the magnetization of the material immediately under it. In modern drives there is one head for each magnetic platter surface on the spindle, mounted on a common arm. An actuator arm (or access arm) moves the heads on an arc (roughly radially) across the platters as they spin, allowing each head to access almost the entire surface of its platter. The arm is moved using a voice coil actuator or, in some older designs, a stepper motor.
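Spindle speed translates directly into average rotational latency, the time for the wanted sector to come around under the head (half a revolution on average); a quick calculation:

    # Average rotational latency: half a revolution on average.
    def avg_rotational_latency_ms(rpm):
        return (60.0 / rpm) / 2 * 1000   # milliseconds

    for rpm in (3000, 5400, 7200, 10000, 15000):
        print(f"{rpm:>6} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms")
    # 3000 RPM -> 10.00 ms ... 15000 RPM -> 2.00 ms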
The magnetic surface of each platter is conceptually divided into many small sub-micrometer-sized magnetic regions referred to as magnetic domains. In older disk designs the regions were oriented horizontally and parallel to the disk surface, but beginning about 2005, the orientation was changed to perpendicular to allow for closer magnetic domain spacing. Due to the polycrystalline nature of the magnetic material, each of these magnetic regions is composed of a few hundred magnetic grains. Magnetic grains are typically 10 nm in size and each forms a single magnetic domain. Each magnetic region in total forms a magnetic dipole which generates a magnetic field.
For reliable storage of data, the recording material needs to resist self-demagnetization, which occurs when the magnetic domains repel each other. Magnetic domains written too densely together to a weakly magnetizable material will degrade over time due to physical rotation of one or more domains to cancel out these forces. The domains rotate sideways to a halfway position that weakens the readability of the domain and relieves the magnetic stresses. Older hard disks used iron(III) oxide as the magnetic material, but current disks use a cobalt-based alloy.
A write head magnetizes a region by generating a strong local magnetic field. Early HDDs used an electromagnet both to magnetize the region and to then read its magnetic field by using electromagnetic induction. Later versions of inductive heads included metal-in-gap (MIG) heads and thin-film heads. As data density increased, read heads using magnetoresistance (MR) came into use; the electrical resistance of the head changed according to the strength of the magnetism from the platter. Later development made use of spintronics; in these heads, the magnetoresistive effect was much greater than in earlier types, and was dubbed "giant" magnetoresistance (GMR). In today's heads, the read and write elements are separate, but in close proximity, on the head portion of an actuator arm. The read element is typically magneto-resistive while the write element is typically thin-film inductive.
The heads are kept from contacting the platter surface by the thin layer of air dragged along by the spinning platter, which moves at or near the platter speed. The read-and-write head is mounted on a block called a slider, and the surface next to the platter is shaped to keep it just barely out of contact. This forms a type of air bearing.
In modern drives, the small size of the magnetic regions creates the danger that their magnetic state might be lost because of thermal effects. To counter this, the platters are coated with two parallel magnetic layers, separated by a 3-atom layer of the non-magnetic element ruthenium, and the two layers are magnetized in opposite orientation, thus reinforcing each other. Another technology used to overcome thermal effects to allow greater recording densities is perpendicular recording, first shipped in 2005, and as of 2007 the technology was used in many HDDs.

Components
A typical hard drive has two electric motors: one to spin the disks and one to position the read/write head assembly. The disk motor has an external rotor attached to the platters; the stator windings are fixed in place. The actuator carries a read/write head at the tip of each arm; a thin printed-circuit cable connects the heads to the hub of the actuator, and a flexible, somewhat U-shaped ribbon cable continues the connection from there to the controller board on the opposite side of the drive.
The head support arm is very light, but also stiff; in modern drives, acceleration at the head reaches 550 g.
The heads are swung to the desired position by a permanent-magnet and moving-coil motor. Its top plate supports a squat neodymium-iron-boron (NIB) high-flux magnet. Beneath this plate is the moving coil, often referred to as the voice coil by analogy to the coil in loudspeakers, which is attached to the actuator hub; beneath that is a second NIB magnet, mounted on the bottom plate of the motor (some drives have only one magnet).
The voice coil itself is shaped rather like an arrowhead, and made of doubly coated copper magnet wire. The inner layer is insulation, and the outer is thermoplastic, which bonds the coil together after it is wound on a form, making it self-supporting. The portions of the coil along the two sides of the arrowhead (which point to the actuator bearing center) interact with the magnetic field, developing a tangential force that rotates the actuator. Current flowing radially outward along one side of the arrowhead and radially inward on the other produces the tangential force. If the magnetic field were uniform, each side would generate opposing forces that would cancel each other out. Therefore the surface of the magnet is half N pole, half S pole, with the radial dividing line in the middle, causing the two sides of the coil to see opposite magnetic fields and produce forces that add instead of canceling. Currents along the top and bottom of the coil produce radial forces that do not rotate the head.
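A rough feel for the force this arrangement can produce comes from F = N·B·I·L for each active side of the coil, with the two sides adding because they sit over opposite poles. The numbers below are illustrative assumptions, not figures from any particular drive:

    # Illustrative voice-coil force estimate (all values are assumed round numbers).
    N = 100     # turns of magnet wire in the coil
    B = 0.8     # flux density in the gap, tesla (NIB magnets reach this easily)
    I = 1.0     # coil current, amperes
    L = 0.02    # active length of one side of the "arrowhead", metres

    force = 2 * N * B * I * L    # newtons; both sides contribute
    print(f"tangential force on the actuator: ~{force:.1f} N")   # ~3.2 N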

Sunday, September 20, 2009

Computer Memory

In computing, memory refers to the state information of a computing system, as it is kept active in some physical structure. The term "memory" is used for physical systems that are fast to access (e.g. RAM), as distinct from physical systems that are slow to access (e.g. data storage). By design, the term "memory" refers to temporary state devices, whereas the term "storage" is reserved for permanent data. Advances in storage technology have blurred the distinction somewhat: memory kept on what is conventionally a storage device is called "virtual memory".
Colloquially, computer memory refers to the physical devices used to store data or programs (sequences of instructions) on a temporary or permanent basis for use in an electronic digital computer. Computers represent information in binary code, written as sequences of 0s and 1s. Each binary digit (or "bit") may be stored by any physical system that can be in either of two stable states, to represent 0 and 1. Such a system is called bistable. This could be an on-off switch, an electrical capacitor that can store or lose a charge, a magnet with its polarity up or down, or a surface that can have a pit or not. Today, capacitors and transistors, functioning as tiny electrical switches, are used for temporary storage, and either disks or tape with a magnetic coating, or plastic discs with patterns of pits are used for long-term storage.
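A small illustration of the point that any bistable system can hold one bit, and that sequences of such bits encode ordinary data (a sketch using character codes as the example):

    # A byte is eight bistable states read together.
    def to_bits(text):
        """Return the 8-bit pattern of each character's code."""
        return [format(ord(ch), "08b") for ch in text]

    def from_bits(patterns):
        return "".join(chr(int(p, 2)) for p in patterns)

    bits = to_bits("Hi")
    print(bits)              # ['01001000', '01101001']
    print(from_bits(bits))   # Hi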
Computer memory is usually meant to refer to the semiconductor technology that is used to store information in electronic devices. Current primary computer memory makes use of integrated circuits consisting of silicon-based transistors. There are two main types of memory: volatile and non-volatile.

History
In the early 1940s, memory technology mostly permitted a capacity of a few bytes. One of the first electronic programmable digital computers, the ENIAC, using thousands of octal-base radio vacuum tubes, could perform simple calculations involving 20 numbers of ten decimal digits held in its vacuum tube accumulators.
The next significant advance in computer memory came with acoustic delay line memory, developed by J. Presper Eckert in the early 1940s. A glass tube was filled with mercury and plugged at each end with a quartz crystal; bits of information were stored as acoustic pulses travelling through the mercury, with the quartz crystals acting as transducers to write and read the pulses. Delay line memory was limited to a capacity of up to a few hundred thousand bits to remain efficient.
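The capacity of such a delay line follows directly from its geometry: a bit exists only while its acoustic pulse is in transit, so capacity is roughly (tube length ÷ speed of sound in mercury) × pulse rate. A rough illustration with assumed figures:

    # Rough mercury delay line capacity: bits in flight = transit time * pulse rate.
    tube_length_m  = 1.5      # assumed tube length
    speed_of_sound = 1450.0   # m/s in mercury, approximate
    pulse_rate_hz  = 1.0e6    # assumed ~1 million pulses per second

    transit_time  = tube_length_m / speed_of_sound        # ~1.03 ms
    capacity_bits = transit_time * pulse_rate_hz
    print(f"transit time {transit_time*1e3:.2f} ms, capacity ~{capacity_bits:.0f} bits per tube")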
Two alternatives to the delay line, the Williams tube and Selectron tube, were developed in 1946, both using electron beams in glass tubes as a means of storage. Using cathode ray tubes, Fred Williams invented the Williams tube, the first random access computer memory. The Williams tube proved advantageous over the Selectron tube because of its greater capacity (the Selectron was limited to 256 bits, while the Williams tube could store thousands) and its lower cost. The Williams tube nevertheless proved to be frustratingly sensitive to environmental disturbances.
Efforts began in the late 1940s to find non-volatile memory. Jay Forrester, Jan A. Rajchman and An Wang are credited with the development of magnetic core memory, which allowed for recall of data after power loss. Magnetic core memory remained the dominant form of memory until the development of transistor-based memory in the late 1960s.

Volatile memory
Volatile memory is computer memory that requires power to maintain the stored information. Current semiconductor volatile memory technology is usually either static RAM (SRAM) or dynamic RAM (DRAM). Static RAM exhibits data remanence, but is still volatile, since all data is lost when the memory is not powered. Dynamic RAM, by contrast, stores each bit as a charge that slowly leaks away, so it must be periodically refreshed or the data disappears. Upcoming volatile memory technologies that hope to replace or compete with SRAM and DRAM include Z-RAM, TTRAM and A-RAM.
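A toy model of why DRAM must be refreshed: each cell is effectively a leaky capacitor whose charge decays, and the controller has to rewrite it before the stored value drops below the sense threshold. This is a conceptual sketch with assumed constants, not how a real memory controller works.

    import math

    RETENTION_TAU = 64e-3   # assumed charge-decay time constant, seconds
    THRESHOLD     = 0.5     # fraction of full charge still readable as '1'

    def charge_after(t_since_refresh):
        """Remaining charge fraction in a leaky DRAM cell."""
        return math.exp(-t_since_refresh / RETENTION_TAU)

    def readable(t_since_refresh):
        return charge_after(t_since_refresh) >= THRESHOLD

    print(readable(0.032))   # True  -> refreshed recently enough
    print(readable(0.100))   # False -> the bit would be lost without a refresh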

Non-volatile memory
Non-volatile memory is computer memory that can retain the stored information even when not powered. Examples of non-volatile memory include read-only memory (ROM), flash memory, most types of magnetic computer storage devices (e.g. hard disks, floppy discs and magnetic tape), optical discs, and early computer storage methods such as paper tape and punched cards. Upcoming non-volatile memory technologies include FeRAM, CBRAM, PRAM, SONOS, RRAM, Racetrack memory, NRAM and Millipede.

Thursday, September 17, 2009

Accelerated Graphics Port

The Accelerated Graphics Port (often shortened to AGP) is a high-speed point-to-point channel for attaching a video card to a computer's motherboard, primarily to assist in the acceleration of 3D computer graphics. Since 2004, AGP has been progressively phased out in favor of PCI Express. As of mid-2009, PCIe cards dominate the market, but new AGP cards and motherboards are still available for purchase, though OEM driver support is minimal.

Advantages over PCI
As computers became increasingly graphically oriented, successive generations of graphics adapters began to push the limits of PCI, a bus with shared bandwidth. This led to the development of AGP, a "bus" dedicated to graphics adapters.
The primary advantage of AGP over PCI is that it provides a dedicated pathway between the slot and the processor rather than sharing the PCI bus. In addition to the lack of contention for the bus, the point-to-point connection allows for higher clock speeds. AGP also uses sideband addressing, meaning that the address and data buses are separated so the entire packet does not need to be read to get addressing information. This is done by adding an extra 8-bit sideband address bus over which the graphics controller can issue new AGP requests and commands while other AGP data flows over the main 32 address/data (AD) lines. This results in improved overall AGP data throughput.
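The peak transfer rates usually quoted for AGP follow from the 32-bit AD bus, the 66 MHz base clock, and the 1×/2×/4×/8× transfer modes; a simple calculation:

    # Peak AGP transfer rates: 32-bit AD bus at ~66 MHz, 1/2/4/8 transfers per clock.
    BUS_WIDTH_BYTES = 4
    CLOCK_HZ = 66.66e6

    for mode in (1, 2, 4, 8):
        rate_mb = BUS_WIDTH_BYTES * CLOCK_HZ * mode / 1e6
        print(f"AGP {mode}x: ~{rate_mb:.0f} MB/s")
    # AGP 1x ~267 MB/s, 2x ~533 MB/s, 4x ~1067 MB/s, 8x ~2133 MB/s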
In addition, to load a texture, a PCI graphics card must copy it from the system's RAM into the card's framebuffer, whereas an AGP card is capable of reading textures directly from system RAM using the Graphics Address Remapping Table (GART). GART reapportions main memory as needed for texture storage, allowing the graphics card to access them directly. The maximum amount of system memory available to AGP is defined as the AGP aperture.
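Conceptually, the GART is a lookup table that maps the contiguous aperture addresses seen by the graphics card onto scattered pages of system RAM. The sketch below is a purely illustrative model (the page numbers are made up), not the actual chipset data structure:

    # Toy model of a Graphics Address Remapping Table (GART).
    PAGE_SIZE = 4096

    # aperture page index -> physical page number (illustrative values)
    gart = {0: 0x3A2, 1: 0x11F, 2: 0x7C0}

    def translate(aperture_addr):
        """Map an address inside the AGP aperture to a physical RAM address."""
        page, offset = divmod(aperture_addr, PAGE_SIZE)
        return gart[page] * PAGE_SIZE + offset

    print(hex(translate(0x0010)))   # 0x3a2010 (aperture page 0)
    print(hex(translate(0x1100)))   # 0x11f100 (aperture page 1)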
The two main reasons graphics cards with the PCI interface are still produced are, first, that they can be used in nearly any PC: while some motherboards with built-in graphics adapters lack an AGP slot, few, if any, modern desktop PCs lack PCI slots. Secondly, a user with an appropriate operating system can use several PCI graphics cards (or several PCI graphics cards in combination with one AGP card) simultaneously to provide many different video outputs (for the use of many screens). This is almost impossible with AGP 1.0 and AGP 2.0 cards, because they do not support more than one AGP Master (video card) per AGP Target (chipset interface). AGP 3.0 does support more than one AGP Master per AGP Target, but nonetheless few PC motherboards are equipped with more than one AGP slot. Some server-class computers support multiple AGP slots in a single system: the HP AlphaServer GS1280 has up to 6 AGP slots, the AlphaServer ES80 up to 4, and the AlphaServer ES47 up to 2.

History
The AGP slot first appeared on x86 compatible system boards based on Socket 7 Intel P5 Pentium and Slot 1 P6 Pentium II processors. Intel introduced AGP support with the i440LX Slot 1 chipset on the 26th of August 1997 and a flood of products followed from all the major system board vendors.
The first Socket 7 chipsets to support AGP were the VIA Apollo VP3, SiS 5591/5592, and the ALi Aladdin V. Intel never released an AGP-equipped Socket 7 chipset. FIC demonstrated the first Socket 7 AGP system board in November 1997, the FIC PA-2012 based on the VIA Apollo VP3 chipset; it was followed very quickly by the EPoX P55-VP3, also based on the VIA VP3 chipset, which was the first to reach the market.
Early video chipsets featuring AGP support included the Rendition Vérité V2200, 3dfx Voodoo Banshee, Nvidia RIVA 128, 3Dlabs PERMEDIA 2, Intel i740, ATI Rage series, Matrox Millennium II, and S3 ViRGE GX/2. Some early AGP boards used graphics processors built around PCI and were simply bridged to AGP. Such cards benefited little from the new bus, the only improvements being the 66 MHz bus clock (with its resulting doubled bandwidth over PCI) and bus exclusivity. Examples of such cards were the Voodoo Banshee, Vérité V2200, Millennium II, and S3 ViRGE GX/2. Intel's i740 was explicitly designed to exploit the new AGP feature set. In fact it was designed to texture only from AGP memory, making PCI versions of the board difficult to implement (local board RAM had to emulate AGP memory).

Microsoft first introduced AGP support into Windows 95 OEM Service Release 2 (OSR2 version 1111 or 950B) via the USB SUPPLEMENT to OSR2 patch. After applying the patch, the Windows 95 system became Windows 95 version 4.00.950 B. The first Windows NT-based operating system to receive AGP support was Windows NT 4.0 with Service Pack 3, introduced in 1997. Linux support for AGP's enhanced data transfers was first added in 1999 with the implementation of the AGPgart kernel module.

Compatibility
AGP keys on the card (top) and on the slot (bottom).

AGP cards are backward and forward compatible within limits. 1.5 V-only keyed cards will not go into 3.3 V slots and vice versa, though "Universal" slots exist which accept either type of card. AGP Pro cards will not fit into standard slots, but standard AGP cards will work in a Pro slot. Some cards, like Nvidia's GeForce 6 series (except the 6200) or ATI's Radeon X800 series, only have keys for 1.5 V to prevent them from being installed in older mainboards without 1.5 V support. Some of the last modern cards with 3.3 V support were the Nvidia GeForce FX series (FX 5200, FX 5500, FX 5700, some FX 5800, FX 5900 and some FX 5950) and the ATI Radeon 9500/9700/9800 (R350) (but not 9600/9800 (R360)). Some GeForce 6200 cards will function in AGP 1.0 (3.3 V) slots.
It is important to check voltage compatibility as some cards incorrectly have dual notches and some motherboards incorrectly have fully open slots. Furthermore, some poorly designed older 3.3 V cards incorrectly have the 1.5 V key. Inserting a card into a slot that does not support the correct signaling voltage may cause damage.
Motherboard slots with both 3.3 V and 1.5 V keys do not exist.
There are some proprietary exceptions to this rule. For example, Apple Power Macintosh computers with the Apple Display Connector (ADC) have an extra connector which delivers power to the attached display. Additionally, moving cards between computers of various CPU architectures may not work due to firmware issues.

Use today
As of 2010, few new motherboards feature AGP slots. No new motherboard chipsets are equipped with AGP support, but motherboards continue to be produced with older chipsets that have AGP support. PCI Express allows for higher data transfer rates, has more robust full-duplex support, and also supports other devices.
All new graphics processors are designed for PCI-Express. To create AGP graphics cards, those chips require an additional PCIe to AGP bridge chip to convert PCIe signals to and from AGP signals. This incurs additional board costs due to the need for the additional bridge chip and for a separate AGP-designed circuit board.
Various manufacturers of graphics cards continue to produce AGP cards for the shrinking AGP user base. The first bridged cards were the GeForce 6600 and ATI Radeon X800 XL boards, released during 2004 and 2005. As of 2009, AGP cards from Nvidia top out at the GeForce 7 series. As of 2009, DirectX 10-capable AGP cards from ATI include the Radeon HD 2400, 2600, 3650, and 3850 and the Radeon HD 4650 and 4670.

Tuesday, September 15, 2009

PCI Express

PCI Express (Peripheral Component Interconnect Express), officially abbreviated as PCIe (and commonly called PCI-E), is a computer expansion card standard designed to replace the older PCI, PCI-X, and AGP standards. PCIe 2.0 is the latest version of the standard available on mainstream personal computers.
PCI Express is used in consumer, server, and industrial applications, as a motherboard-level interconnect (to link motherboard-mounted peripherals) and as an expansion card interface for add-in boards. A key difference between PCIe and earlier buses is a topology based on point-to-point serial links, rather than a shared parallel bus architecture.
The PCIe electrical interface is also used in a variety of other standards, most notably the ExpressCard laptop expansion card interface.
Conceptually, the PCIe bus can be thought of as a high-speed serial replacement of the older (parallel) PCI/PCI-X bus. At the software level, PCIe preserves compatibility with PCI; a PCIe device can be configured and used in legacy applications and operating systems which have no direct knowledge of PCIe's newer features (though PCIe cards cannot be inserted into PCI slots). In terms of bus protocol, PCIe communication is encapsulated in packets. The work of packetizing and depacketizing data and status-message traffic is handled by the transaction layer of the PCIe port (described later). Radical differences in electrical signaling and bus protocol require the use of a different mechanical form factor and expansion connectors (and thus, new motherboards and new adapter boards).

Architecture
PCIe, unlike previous PC expansion standards, is structured around point-to-point serial links rather than a shared parallel bus; a pair of these links (one in each direction) makes up a lane. Lanes are routed by a hub on the mainboard acting as a crossbar switch. This point-to-point design allows more than one pair of devices to communicate with each other at the same time. In contrast, older PC interfaces had all devices permanently wired to the same bus, so only one device could send information at a time. The format also allows channel grouping, where multiple lanes are bonded to a single device pair in order to provide higher bandwidth.
The number of lanes is negotiated during power-up or explicitly during operation. By making the lane count flexible, a single standard can provide for the needs of high-bandwidth cards (e.g., graphics, 10 Gigabit Ethernet and multiport Gigabit Ethernet cards) while being economical for less demanding cards.
Unlike preceding PC expansion interface standards, PCIe is a network of point-to-point connections. This removes the need for bus arbitration or waiting for the bus to be free, and enables full duplex communication. While standard PCI-X (133 MHz 64 bit) and PCIe ×4 have roughly the same data transfer rate, PCIe ×4 will give better performance if multiple device pairs are communicating simultaneously or if communication between a single device pair is bidirectional.
Format specifications are maintained and developed by the PCI-SIG (PCI Special Interest Group), a group of more than 900 companies that also maintain the Conventional PCI specifications.

Interconnect
PCIe devices communicate via a logical connection called an interconnect or link. A link is a point-to-point communication channel between 2 PCIe ports, allowing both to send/receive ordinary PCI-requests (configuration read/write, I/O read/write, memory read/write) and interrupts (INTx, MSI, MSI-X). At the physical level, a link is composed of 1 or more lanes. Low-speed peripherals (such as an 802.11 Wi-Fi card) use a single-lane (×1) link, while a graphics adapter typically uses a much wider (and thus, faster) 16-lane link.

Lane
A lane is composed of a transmit and a receive pair of differential lines, four wires or signal paths in total. Conceptually, each lane is a full-duplex byte stream, transporting data packets in 8-bit 'byte' format between the endpoints of a link, in both directions simultaneously. Physical PCIe slots may contain from one to thirty-two lanes, in powers of two (1, 2, 4, 8, 16 and 32). Lane counts are written with an × prefix (e.g., ×16 represents a sixteen-lane card or slot), with ×16 being the largest size in common use.
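The per-lane and aggregate figures usually quoted for first-generation PCIe follow from the 2.5 GT/s signalling rate and the 8b/10b line code, which spends two of every ten bits on encoding; a quick calculation:

    # Usable PCIe 1.x throughput per direction.
    RAW_RATE = 2.5e9              # transfers per second per lane
    ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line code

    def throughput_mb_s(lanes):
        return RAW_RATE * ENCODING_EFFICIENCY * lanes / 8 / 1e6

    for lanes in (1, 4, 16, 32):
        print(f"x{lanes:<2}: ~{throughput_mb_s(lanes):,.0f} MB/s per direction")
    # x1 ~250 MB/s, x4 ~1,000 MB/s, x16 ~4,000 MB/s, x32 ~8,000 MB/s

The ×4 figure of roughly 1 GB/s per direction is what makes the earlier comparison with 64-bit, 133 MHz PCI-X (about 1 GB/s, but shared and not full duplex) come out roughly even.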

Serial Bus
The bonded serial format was chosen over a traditional parallel format because of timing skew. Timing skew arises because electrical signals travel down wires at a finite speed; since the signal paths across a parallel interface have different lengths, signals transmitted simultaneously arrive at their destinations at slightly different times. When the interface clock rate increases to the point where the time allotted to a single bit is smaller than the difference in delay between the shortest and longest paths, the bits of a single word no longer arrive at their destination together, making parallel recovery of the word difficult. The speed of the electrical signal, combined with the difference in length between the longest and shortest path in a parallel interconnect, therefore imposes a natural maximum bandwidth. Serial channel bonding avoids this issue by not requiring the bits to arrive simultaneously. PCIe is just one example of a general trend away from parallel buses toward serial interconnects; other examples include Serial ATA, USB, SAS, FireWire and RapidIO. The multichannel serial design also increases flexibility by allowing slow devices to be allocated fewer lanes than fast devices.
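A quick way to see where a parallel bus runs out of headroom is to compare the time allotted to one bit with the propagation-delay difference between the longest and shortest traces. The numbers below are illustrative assumptions, roughly representative of copper traces on FR-4 circuit board:

    # Timing skew vs. bit time on a parallel bus (illustrative numbers only).
    PROP_DELAY_NS_PER_M = 6.7    # rough signal delay on FR-4 PCB traces
    length_mismatch_m   = 0.05   # 5 cm between shortest and longest trace
    skew_ns = PROP_DELAY_NS_PER_M * length_mismatch_m   # ~0.34 ns

    for clock_mhz in (33, 133, 533, 1066):
        bit_time_ns = 1e3 / clock_mhz
        margin = "comfortable" if bit_time_ns > 3 * skew_ns else "marginal"  # 3x is an arbitrary design margin
        print(f"{clock_mhz:>5} MHz: bit time {bit_time_ns:5.2f} ns vs skew {skew_ns:.2f} ns -> {margin}")

At PCI's 33 MHz the skew is negligible; approaching a gigahertz it consumes a large fraction of each bit time, which is the pressure that pushed designs toward independent serial lanes.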

Friday, September 11, 2009

Mary Jane Watson (Character)

Mary Jane Watson, often shortened to MJ, is a fictional supporting character appearing, originally, in Marvel comic books and, later, in multiple spin-offs and dramatizations of the Spider-Man titles as the best friend, love interest, and wife (as Mary Jane Watson-Parker) of Peter Parker, the alter ego of Spider-Man. Created by writer Stan Lee and artist John Romita, Sr., after a few partial appearances and references, her first full appearance was in The Amazing Spider-Man #42 (November 1966).

Fictional character biography
Mary Jane is depicted as an extremely beautiful, green-eyed redhead, and has been the primary romantic interest of Peter Parker for the last twenty years, although initially competing with others for his affection, most prominently with Gwen Stacy and the Black Cat. Mary Jane's relatively unknown early life was eventually explored in Amazing Spider-Man #259.
Early issues of Amazing Spider-Man featured a running joke about Peter dodging his Aunt May's attempts to set him up with "that nice Watson girl next door", whom Peter had not yet met and assumed would not be his type, since his aunt liked her (in the Parallel Lives graphic novel an identical scenario is shown between Mary Jane and her Aunt Anna). Mary Jane made her first actual appearance in Amazing Spider-Man (Vol. 1) #25 (June 1965); however, in that issue, her face was obscured. It is not until Amazing Spider-Man (Vol. 1) #42 (November 1966) that her face is actually seen. In that issue, on the last page, Peter finally meets her, and he is stunned by her beauty even as she speaks the now-famous line: "Face it, Tiger... you just hit the jackpot!"
Peter begins to date her, much to the annoyance of Gwen Stacy. However, her apparent superficiality proves to be an irritation to Peter that her rival did not share. Peter eventually learns to cope with this, and Mary Jane becomes an occasional flirtatious interest as well as a close friend to Peter, Gwen, and others.
Despite her enjoyment of life, her friendships, and dating, Mary Jane refuses to be tied down for too long. When her relationship with Harry Osborn comes to an end, it has significant impact on Harry, driving him to a drug overdose. This in turn creates a boomerang effect, driving his father Norman Osborn to the brink of insanity, temporarily restoring his memories as the Green Goblin. Mary Jane only realizes the true consequences of her lifestyle when she learns of Harry's predicament.
Later, when the Green Goblin murders Gwen, MJ stays with Peter during his mourning; though he initially tells her to leave him alone, he becomes interested in her as he recovers. Their relationship has a few initial hurdles, such as MJ's hot temper and Peter's always dashing off to be Spider-Man. Following the events of the original clone saga, Peter realizes that Mary Jane is the girl he has always loved, and the two begin dating again.
However, despite loving Peter, MJ does not wish to be tied, and when she allows the relationship to progress too far, she is left with a difficult decision when Peter proposes to her. After taking a short time to consider, she turns him down. Following a series of traumatic experiences involving Peter's absences and his costumed alter ego endangering his Aunt May, a spiritually-exhausted MJ leaves New York for several months. Peter meanwhile dates other women, most notably Felicia Hardy.
MJ eventually returns, her behavior showing a marked change with her abandonment of her false front. Following an attack on Peter by Puma, she breaks down and admits her knowledge of Peter's secret identity in Amazing Spider-Man (Vol. 1) #257. After learning of her own family history in Amazing Spider-Man #259, Peter finds a new respect for her and begins to truly understand her. MJ however, makes it clear to Peter that knowing his identity changes nothing about her feelings, and that she only loves him as a friend.
Despite the one-shot graphic novel "Parallel Lives" and Untold Tales of Spider-Man #16 revealing that Mary Jane discovered Peter's secret when she noticed Spider-Man climbing out of Peter's bedroom window, many comics published before this revelation claimed that she had simply "figured it out", with the details of how and when left ambiguous to the reader.
After yet another period of reconsidering his priorities in life, Peter contemplates letting go of the Spider-Man mantle, with Mary Jane backing the decision, but his relationship with Felicia Hardy soon resumes. Feeling lost and guilty, Peter visits Mary Jane and apologizes with an awkward kiss before heading to Berlin with Ned Leeds.
Following Ned Leeds' murder at the hands of the Foreigner, a changed and bitter Peter returns to New York, where his lack of direction in life is not helped when Ned is framed as the Hobgoblin, and Felicia elects to leave Peter behind as she is tied to the Foreigner. Mary Jane returns to Peter, presumably to patch things up, but Peter surprises her with a second proposal of marriage, which MJ again turns down. She returns to her family to settle old debts with her father, with Peter following her. After aiding her sister in having her crooked father arrested, and aiding Peter against a Spider-Slayer, Mary Jane has an epiphany on marriage, and agrees to become Peter's wife.

Marriage
In spite of Peter and Mary Jane's mutual worry that they were marrying too early, Peter's concern for her safety, and her unwillingness to give up her "party girl" lifestyle, they married in Amazing Spider-Man (Vol. 1) Annual #21 (1987). She attached Peter's surname to her own, making her Mary Jane Watson-Parker. Spider-Man wore his black costume around this time, but after Mary Jane was frightened by a stalking Venom, she convinced him to change back to his old costume in Amazing Spider-Man (Vol. 1) #300 (May 1988).

Mary Jane continued to model after her marriage, but was stalked by her wealthy landlord, Jonathan Caesar. When she rejected his advances, he had her blacklisted as a model. She got a role on the soap opera "Secret Hospital," but was unhappy with her character's air-headed and mean personality. Although she successfully petitioned her boss to adjust her character's personality, a deranged fan tried to kill Mary Jane out of hatred for the actions of her soap opera character. Mary Jane quit her job out of fear for her own safety.
Due to this stress, the recent death of Harry Osborn, and the seeming return of her husband's parents, Mary Jane began smoking (a habit she had quit in high school), only increasing the tension between her and Peter. Peter ultimately convinced her to stop smoking when he tricked her into visiting Nick Katzenberg, who was suffering heavily from lung cancer (he presumably died; Peter encountered his ghost in an out-of-body experience). When his parents were discovered to be fakes, Peter was unable to cope with the knowledge and disappeared for a time. Mary Jane visited her sister Gayle and her father for the first time in years, and finally reconciled with them. Meanwhile, Peter overcame his problems on his own. When she and Peter reunited, both were happier than they had been in a long time.

Wednesday, September 9, 2009

Venom (Character)

Venom, or the Venom Symbiote, is a Symbiote, an extraterrestrial life form in the Marvel Comics universe. The creature is a sentient alien with a gooey, almost liquid-like form that requires a host, usually human, to bond with for its survival. In return, the Venom creature gives its host enhanced powers. In effect, when the Venom Symbiote bonds with a human to form a supervillain, that new dual life form itself is also often called Venom. Its second host, Eddie Brock, who after bonding with the Symbiote became the first Venom, is one of Spider-Man's archenemies. Spider-Man was the first host it merged with, before its evil motives were clear. After Spider-Man rejected it, the Symbiote went on to merge with other hosts and so began its reign as the villain known as Venom. Comics journalist and historian Mike Conroy writes of the character: "What started out as a replacement costume for Spider-Man turned into one of the Marvel web-slinger's greatest nightmares."

Venom has become one of Spider-Man's most enduring and popular foes; indeed, in terms of popularity he has come to be seen as Spider-Man's arch-nemesis. Venom was ranked as the 22nd Greatest Comic Book Villain of All Time in IGN's list of the top 100 comic villains, 33rd on Empire's 50 Greatest Comic Book Characters, and 98th in Wizard magazine's 200 Greatest Comic Book Characters of All Time list.

Overview
Spider-Man first encountered the Venom Symbiote in Secret Wars #8, in which he unwittingly merged with it. After Spider-Man rejected it, the Symbiote merged with Eddie Brock, its most well-known host, in The Amazing Spider-Man #298 (May 1988). Its next host became Mac Gargan, the villain formerly known as Scorpion.
Originally, the Symbiote was portrayed as a mute and lonely creature craving the company of a host. More recently, it has been shown as increasingly abusive of its hosts, and having the power of speech. The Venom Symbiote has no known name, as "Venom" is essentially the moniker it has adopted since its history with Spider-Man on Earth. According to S.H.I.E.L.D., it is considered one of the greatest threats to humanity, alongside Magneto, Doctor Doom, and Red Skull.
Contrary to popular belief, the idea for the Venom Symbiote was not originally thought up by artists Mike Zeck and Todd McFarlane or writer David Michelinie, but by a Marvel Comics reader from Norridge, Illinois named Randy Schueller. Marvel bought the idea for $220.00, and the editor-in-chief at the time, Jim Shooter, sent Schueller a letter acknowledging Marvel's desire to purchase the idea from him. Schueller's design was then modified by Mike Zeck, becoming the Venom Symbiote.

Powers and Abilities
Though it requires a living host in order to survive, the Venom Symbiote has been shown on some occasions to be able to fend for itself with its own set of unique powers. The Symbiote, even without a host, has shown shapeshifting abilities like forming spikes and expanding its size.
The Symbiote is telepathic and does not require physical contact to influence the minds of others. In Planet Of the Symbiotes, the creature, after being rejected by its host, emits a psychic scream which drives nearby humans to states of extreme depression. Later, with the assistance of Eddie Brock, it emits an even more powerful variant of that power which results in the mass suicide of an invasive force of Symbiotes. The Symbiote can also blend with any background, using an optic-camouflage type of effect, and shapeshift to resemble ordinary clothing. Venom is immune to the Penance Stare, an ability used by Ghost Rider, Johnny Blaze and Daniel Ketch. The Symbiote also augments the strength of its hosts.
The Symbiote originally rejected its species' habit of consuming its hosts, but in some interpretations it still required certain chemicals (human adrenaline) in order to survive. When starved of these chemicals, the Symbiote developed a mutable exoskeleton, allowing it to form its own solid body which it used to hunt and kill prey without the assistance of a host. However, because of Brock's, and later Gargan's, influence on its personality the Symbiote has developed a taste for blood, which both its hosts were forced to sate by physically devouring their victims. Later, the suit's evolution progressed and as shown in the 2003 Venom comic book series, its clone could spontaneously jump from host to host and after every departure said hosts would be left dead.
Because of its contact with Spider-Man, the Symbiote grants all of its subsequent hosts the hero's powers and cannot be detected by his spider-sense. As Spider-Man's fighting style is partly dependent on his spider-sense, his effectiveness was somewhat hampered when he battled Eddie Brock, allowing the less experienced Brock to keep up with him. However, the Symbiote is vulnerable to loud noises, such as the ringing of church bells.
Some interpretations have shown the Venom Symbiote to have the ability to replicate itself. This ability is shown in Spider-Man: Reign, when Venom recreates his own Symbiote to combat his loneliness. It is also used by Venom in Spider-Man: Web of Shadows, when Venom discovers the ability to copy his Symbiote and uses it to take over Manhattan. Such an ability has not been demonstrated in the mainstream Earth-616 universe.

Monday, September 7, 2009

SimCity (Game)

SimCity is a city-building simulation game, first released in 1989 and designed by Will Wright. SimCity was Maxis' first product; it has since been ported to various personal computers and game consoles, and has spawned several sequels including SimCity 2000 in 1993, SimCity 3000 in 1999, SimCity 4 in 2003, SimCity DS, and SimCity Societies in 2007. The original SimCity was later renamed SimCity Classic. Until the release of The Sims in 2000, the SimCity series was the best-selling line of computer games made by Maxis.
SimCity spawned a series of Sim games. Since the release of SimCity, similar simulation games have been released focusing on different aspects of reality such as business simulation in Capitalism.
On January 10, 2008 the SimCity source code was released under the free software GPL 3 license under the name Micropolis.

History
SimCity was originally developed by game designer Will Wright. The inspiration for SimCity came from a feature of the game Raid on Bungeling Bay that allowed Wright to create his own maps during development. Wright soon found he enjoyed creating maps more than playing the actual game, and SimCity was born.
In addition, Wright also was inspired by reading "The Seventh Sally", a short story by Stanislaw Lem, in which an engineer encounters a deposed tyrant, and creates a miniature city with artificial citizens for the tyrant to oppress.
The first version of the game was developed for the Commodore 64 in 1985, but it would not be published for another four years. The original working title of SimCity was Micropolis. The game represented an unusual paradigm in computer gaming, in that it could neither be won nor lost; as a result, game publishers did not believe it was possible to market and sell such a game successfully. Brøderbund declined to publish the title when Wright proposed it, and he pitched it to a range of major game publishers without success. Finally, founder Jeff Braun of then-tiny Maxis agreed to publish SimCity as one of two initial games for the company.
Wright and Braun returned to Brøderbund to formally clear the rights to the game in 1988, when SimCity was near completion. Brøderbund executives Gary Carlston and Don Daglow saw that the title was infectious and fun, and signed Maxis to a distribution deal for both of its initial games. With that, four years after initial development, SimCity was released for the Amiga and Macintosh platforms, followed by the IBM PC and Commodore 64 later in 1989.
On January 10, 2008 the SimCity source code was released under the free software GPL 3 license. The release of the source code was related to the donation of SimCity software to the One Laptop Per Child program, as one of the principles of the OLPC laptop is the use of free and open source software. The open source version is called Micropolis (the initial name for SimCity) since EA retains the trademark SimCity. The version shipped on OLPC laptops will still be called SimCity, but will have to be tested by EA quality assurance before each release to be able to use that name. The Micropolis source code has been translated to C++, integrated with Python and interfaced with both GTK+ and OpenLaszlo.

Objective
The objective of SimCity, as the name of the game suggests, is to build and design a city, without specific goals to achieve (except in the scenarios, see below). The player can mark land as being zoned as commercial, industrial, or residential, add buildings, change the tax rate, build a power grid, build transportation systems and take many other actions, in order to enhance the city.
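A toy model of that core loop, zoning land, collecting taxes and letting the population respond, is sketched below. It borrows only the ideas described above; the grid, growth formula and numbers are invented for illustration and have nothing to do with Micropolis' actual simulation rules.

    # Toy city loop: zone tiles, collect taxes, let population respond.
    grid = [["." for _ in range(8)] for _ in range(8)]   # empty land

    def zone(x, y, kind):          # kind: 'R', 'C' or 'I'
        grid[y][x] = kind

    def step(population, tax_rate):
        """One crude tick: growth slows as the tax rate rises (toy formula)."""
        residential = sum(row.count("R") for row in grid)
        capacity = residential * 100
        growth = (capacity - population) * max(0.0, 0.10 - tax_rate)
        revenue = population * tax_rate
        return max(0.0, population + growth), revenue

    zone(1, 1, "R"); zone(2, 1, "R"); zone(3, 1, "C"); zone(4, 1, "I")
    population, funds = 0.0, 1000.0
    for year in range(5):
        population, revenue = step(population, tax_rate=0.07)
        funds += revenue
        print(f"year {year}: population {population:.0f}, funds {funds:.0f}")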
Also, the player may face disasters including flooding, tornadoes, fires (often from air disasters or even shipwrecks), earthquakes and attacks by monsters. In addition, monsters and tornadoes can trigger train crashes by running into passing trains. Later disasters in the game's sequels included lightning strikes, volcanoes, meteors and attack by extraterrestrial craft.
In the SNES version and later releases, the player can also place reward buildings when they are granted, such as a mayor's mansion or a casino.

Scenarios
The original SimCity kicked off a tradition of goal-centered, timed scenarios that could be won or lost depending on the performance of the player/mayor. The scenarios were an addition suggested by Brøderbund in order to make SimCity more like a game.[7] The original cities were all based on real world cities and attempted to re-create their general layout, a tradition carried on in SimCity 2000 and in special scenario packs. While most scenarios either take place in a fictional timeline or have a city under siege by a fictional disaster, a handful of available scenarios are based on actual historical events.
The original scenarios are:
Bern, 1965 – The Swiss capital is clogged with traffic; the mayor needs to reduce traffic and improve the city by installing a mass transit system.
Boston, 2010 – The city's nuclear power plant suffers a meltdown, incinerating a portion of the city. The mayor must rebuild, contain the toxic areas, and return the city to prosperity. In some early editions of SimCity (on lower-power computers that did not include the nuclear power plants), this scenario was altered to have a tornado strike the city. Much like the Tokyo scenario below, the mayor needs to limit damage and rebuild.
Detroit, 1972 – Crime and depressed industry wreck the city. The mayor needs to reduce crime and reorganize the city to better develop. The scenario is a reference to Detroit's declining state during the late 20th century (see also History of Detroit) and the 1970s economic recession.
Rio de Janeiro, 2047 – Coastal flooding resulting from global warming rages through the city. The mayor must control the problem and rebuild. In some early editions of SimCity (on lower-power computers that did not include the flooding disaster), this scenario was altered to have the objective be fighting very high crime rates.
San Francisco, 1906 – An earthquake hits the city; the mayor must control the subsequent damage and fires, then rebuild. The scenario references the 1906 San Francisco earthquake.
Tokyo, 1961 – The city is attacked by a Godzilla-type monster (Bowser in the SNES version). The mayor needs to limit the damage and rebuild. The scenario is strongly based on the original series of Godzilla films.
The PC version (IBM and Tandy compatible; on floppy disk), the CD re-release, and the Amiga and Atari ST versions included two additional scenarios:
Hamburg, Germany, 1944 – Bombing, where the mayor has to govern the city during the closing years of World War II and rebuild it later. This scenario references the bombing of Hamburg in World War II.
Dullsville, USA, 1910 – Boredom plagues a stagnating city in the middle of the United States; the mayor is tasked to turn Dullsville into a metropolis within 30 years.
In addition, the later edition of SimCity on the Super NES included the basics of these two scenarios in two more difficult scenarios that were made available after a player had completed the original scenarios:
Las Vegas, 2096 – Aliens attack the city. This invasion is spread out over several years, stretching city resources. While somewhat similar to Hamburg, the scenario included casino features as well as animated flying saucers.
Freeland, 1991 – Using a blank map without any water form, the mayor must build a game-described megalopolis of at least 500,000 people. There is no time limit in this scenario. While similar to the earlier Dullsville scenario, Freeland took advantage of the SNES version's clear delineations between city sizes, particularly metropolis and megalopolis. In the center of Freeland is a series of trees that form the familiar head of Mario. However, as with all scenarios, the player is unable to build any of the reward buildings from the normal game.
While the scenarios were meant to be solved strategically, many players discovered that by dropping the tax rate to zero near the end of the allotted timespan, one could heavily influence public opinion and population growth. In scenarios such as San Francisco, where rebuilding and, by extension, maintaining population growth play a large part of the objective, this kind of manipulation can mean a relatively easy victory. Later titles in the series would take steps to prevent players from using the budget to influence the outcome of scenarios.
Also, several of the original scenarios, such as the Bern scenario, could be won simply by destroying the city, as they checked only one factor, in this case traffic. The SNES version of the Boston nuclear meltdown scenario had a bug such that while any button is held down nothing advances in the game, effectively pausing the simulation while still allowing the player to build or take any other actions. In this manner, the player can bulldoze all the nuclear power plants before any of them can explode, averting disaster. However, the cost of rebuilding the power infrastructure afterwards made winning the scenario even more difficult than normal if this tactic was used.

Wednesday, September 2, 2009

Motherboard

A motherboard is the central printed circuit board (PCB) in many modern computers and holds many of the crucial components of the system, while providing connectors for other peripherals. The motherboard is sometimes alternatively known as the main board, system board, or, on Apple computers, the logic board. It is also sometimes casually shortened to mobo.

History
Prior to the advent of the microprocessor, a computer was usually built in a card-cage case or mainframe with components connected by a backplane consisting of a set of slots themselves connected with wires; in very old designs the wires were discrete connections between card connector pins, but printed circuit boards soon became the standard practice. The Central Processing Unit, memory and peripherals were housed on individual printed circuit boards which plugged into the backplane.
During the late 1980s and 1990s, it became economical to move an increasing number of peripheral functions onto the motherboard (see below). In the late 1980s, motherboards began to include single ICs (called Super I/O chips) capable of supporting a set of low-speed peripherals: keyboard, mouse, floppy disk drive, serial ports, and parallel ports. As of the late 1990s, many personal computer motherboards supported a full range of audio, video, storage, and networking functions without the need for any expansion cards at all; higher-end systems for 3D gaming and computer graphics typically retained only the graphics card as a separate component.
The early pioneers of motherboard manufacturing were Micronics, Mylex, AMI, DTK, Hauppauge, Orchid Technology, Elitegroup, DFI, and a number of Taiwan-based manufacturers.
The most popular computers such as the Apple II and IBM PC had published schematic diagrams and other documentation which permitted rapid reverse-engineering and third-party replacement motherboards. Usually intended for building new computers compatible with the exemplars, many motherboards offered additional performance or other features and were used to upgrade the manufacturer's original equipment.
The term mainboard is archaically applied to devices with a single board and no additional expansions or capability. In modern terms this would include embedded systems and controlling boards in televisions, washing machines, etc. A motherboard specifically refers to a printed circuit with the capability to add/extend its performance.

Overview
A motherboard, like a backplane, provides the electrical connections by which the other components of the system communicate, but unlike a backplane, it also connects the central processing unit and hosts other subsystems and devices.

A typical desktop computer has its microprocessor, main memory, and other essential components connected to the motherboard. Other components such as external storage, controllers for video display and sound, and peripheral devices may be attached to the motherboard as plug-in cards or via cables, although in modern computers it is increasingly common to integrate some of these peripherals into the motherboard itself.
An important component of a motherboard is the microprocessor's supporting chipset, which provides the supporting interfaces between the CPU and the various buses and external components. This chipset determines, to an extent, the features and capabilities of the motherboard.
Modern motherboards include, at a minimum:

  • sockets (or slots) in which one or more microprocessors may be installed
  • slots into which the system's main memory is to be installed (typically in the form of DIMM modules containing DRAM chips)
  • a chipset which forms an interface between the CPU's front-side bus, main memory, and peripheral buses
  • non-volatile memory chips (usually Flash ROM in modern motherboards) containing the system's firmware or BIOS
  • a clock generator which produces the system clock signal to synchronize the various components
  • slots for expansion cards (these interface to the system via the buses supported by the chipset)
  • power connectors, which receive electrical power from the computer power supply and distribute it to the CPU, chipset, main memory, and expansion cards.

Additionally, nearly all motherboards include logic and connectors to support commonly used input devices, such as PS/2 connectors for a mouse and keyboard. Early personal computers such as the Apple II or IBM PC included only this minimal peripheral support on the motherboard. Occasionally video interface hardware was also integrated into the motherboard; for example, on the Apple II and rarely on IBM-compatible computers such as the IBM PC Jr. Additional peripherals such as disk controllers and serial ports were provided as expansion cards.
Given the high thermal design power of high-speed computer CPUs and components, modern motherboards nearly always include heat sinks and mounting points for fans to dissipate excess heat.

CPU sockets
A CPU socket or slot is an electrical component that attaches to a printed circuit board (PCB) and is designed to house a CPU (also called a microprocessor). It is a special type of integrated circuit socket designed for very high pin counts. A CPU socket provides many functions, including a physical structure to support the CPU, support for a heat sink, facilitating replacement (as well as reducing cost), and most importantly, forming an electrical interface both with the CPU and the PCB. CPU sockets are found on the motherboard in most desktop and server computers (laptops typically use surface-mount CPUs), particularly those based on the Intel x86 architecture. A CPU socket type and motherboard chipset must support the CPU series and speed.

Integrated Peripherals


Block diagram of a modern motherboard, which supports many on-board peripheral functions as well as several expansion slots.
With the steadily declining costs and size of integrated circuits, it is now possible to include support for many peripherals on the motherboard. By combining many functions on one PCB, the physical size and total cost of the system may be reduced; highly integrated motherboards are thus especially popular in small form factor and budget computers.
For example, the ECS RS485M-M, a typical modern budget motherboard for computers based on AMD processors, has on-board support for a very large range of peripherals:

  • disk controllers for a floppy disk drive, up to 2 PATA drives, and up to 6 SATA drives (including RAID 0/1 support)
  • integrated graphics controller supporting 2D and 3D graphics, with VGA and TV output
  • integrated sound card supporting 8-channel (7.1) audio and S/PDIF output
  • Fast Ethernet network controller for 10/100 Mbit networking
  • USB 2.0 controller supporting up to 12 USB ports
  • IrDA controller for infrared data communication (e.g. with an IrDA-enabled cellular phone or printer)
  • temperature, voltage, and fan-speed sensors that allow software to monitor the health of computer components

Expansion cards to support all of these functions would have cost hundreds of dollars even a decade ago; however, as of April 2007 such highly integrated motherboards are available for as little as $30 in the USA.

Peripheral Card Slots
The number and type of expansion connections on a typical motherboard of 2009 depend on its form factor.
A standard ATX motherboard will typically have one PCI-E ×16 connection for a graphics card, two conventional PCI slots for various expansion cards, and one PCI-E ×1 slot (PCI Express is expected eventually to supersede PCI). A standard EATX motherboard will have one PCI-E ×16 connection for a graphics card and a varying number of PCI and PCI-E ×1 slots. It can sometimes also have a PCI-E ×4 slot (this varies between brands and models).
Some motherboards have two PCI-E ×16 slots, to allow more than two monitors without special hardware, or to use the multi-GPU technologies SLI (for Nvidia) and CrossFire (for ATI). These allow two graphics cards to be linked together for better performance in intensive graphical computing tasks, such as gaming and video editing.
As of 2007, virtually all motherboards come with at least four USB ports on the rear, with at least two connectors on the board internally for wiring additional front ports that may be built into the computer's case. Ethernet is also standard: an on-board network controller and port for connecting the computer to a network or a modem. A sound chip is always included on the motherboard, to allow sound output without the need for any extra components; this allows computers to be far more multimedia-capable than before. Some motherboards have their graphics chip built into the motherboard rather than needing a separate card, though a separate card may still be used.