Thursday, February 21, 2008

Graphics Card

A video card, also referred to as a graphics accelerator card, display adapter, or graphics card, among numerous other terms, is an item of personal computer hardware whose function is to generate and output images to a display. It operates on similar principles to a sound card or other peripheral devices.

The term usually refers to a separate, dedicated expansion card plugged into a slot on the computer's motherboard, as opposed to a graphics controller integrated into the motherboard chipset. An integrated graphics controller may be referred to as an "integrated graphics processor" (IGP).

Some video cards offer added functions, such as video capture, a TV tuner, MPEG-2 and MPEG-4 decoding, FireWire, mouse, light pen or joystick connectors, or even the ability to drive two monitors. Video cards are not used exclusively in IBM-type PCs; they have also been used in devices such as the Commodore Amiga (connected via the Zorro II and Zorro III slots), Apple II, Apple Macintosh, Atari Mega ST/TT (attached to the MegaBus or VME interface), Spectravideo SVI-328, MSX, and video game consoles.
History
Video card history starts in the 1960s, when printers were replaced with screens as the visualization element; video cards were needed to create the first images.

The first IBM PC video card, released with the first IBM PC, was developed by IBM in 1981. The MDA (Monochrome Display Adapter) could work only in text mode, representing 80x25 characters on the screen. It had 4 KB of video memory and just one color. Starting with the MDA in 1981, several video cards were released, which are summarized in the attached table.

VGA was widely accepted, which led corporations such as ATI, Cirrus Logic and S3 to build on that standard, improving its resolution and the number of colors it could display. This gave rise to the SVGA (Super VGA) standard, which reached 2 MB of video memory and a resolution of 1024x768 in 256-color mode.

The evolution of video cards took a turn for the better in 1995 with the release of the first 2D/3D cards, developed by Matrox, Creative, S3 and ATI, among others. Those cards followed the SVGA standard but incorporated 3D functions. In 1997, 3dfx released the Voodoo graphics chip, which was very powerful and introduced new 3D effects (mip mapping, Z-buffering, anti-aliasing...). From this point, a series of 3D video cards were released, such as the Voodoo2 from 3dfx and the TNT and TNT2 from NVIDIA. The power of these cards exceeded the capacity of the PCI bus, so Intel developed the AGP (Accelerated Graphics Port), which relieved the bottleneck between the microprocessor and the video card. From 1999 until 2002, NVIDIA controlled the video card market (taking over 3dfx)[6] with the GeForce family. The improvements made in these years focused on 3D algorithms and graphics processor clock rates. Video memory also needed higher data rates, so DDR technology was incorporated. Video memory capacity grew in this period from 32 MB on the GeForce to 128 MB on the GeForce 4.
Components
A video card consists of a printed circuit board on which the components are mounted. These include:
Graphics processing unit (GPU)
A GPU is a dedicated graphics microprocessor optimized for the floating-point calculations that are fundamental to 3D graphics rendering. The main attributes of the GPU are its core clock rate, which typically ranges from 250 MHz to 1200 MHz in modern cards, and its number of pipelines (vertex and fragment shaders), which translate a 3D image characterized by vertices and lines into a 2D image formed by pixels.
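To make the vertex stage concrete, here is a minimal sketch of the perspective projection a vertex pipeline performs, turning a 3D point into 2D pixel coordinates. The function name and parameters are illustrative only, not a real GPU API:

```python
import math

def project_vertex(x, y, z, fov_deg=90.0, width=640, height=480):
    """Perspective-project a 3D point (camera space, z > 0 in front
    of the camera) onto a width x height pixel grid, roughly as a
    vertex pipeline stage would."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)  # focal-length factor
    # Normalized device coordinates in [-1, 1]: farther points shrink
    # toward the center because we divide by depth.
    ndc_x = (x * f) / z
    ndc_y = (y * f) / z
    # Viewport transform to pixel coordinates (screen origin top-left)
    px = (ndc_x + 1.0) * 0.5 * width
    py = (1.0 - (ndc_y + 1.0) * 0.5) * height
    return px, py
```

A point straight ahead of the camera, such as `project_vertex(0.0, 0.0, 1.0)`, lands at the center of the 640x480 screen, (320.0, 240.0).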
Video memory
If the video card is integrated into the motherboard, it uses the computer's RAM (with lower throughput). If it is not integrated, the card has its own video memory, called Video RAM or VRAM. The VRAM capacity of most modern video cards ranges from 128 MB to 2.0 GB. Before 2003, VRAM was typically based on DDR technology. During and after that year, manufacturers moved towards the faster DDR2, GDDR3 and GDDR4 technologies. The memory clock rate in modern cards is generally between 400 MHz and 2.0 GHz. A very important use of video memory is the Z-buffer, which manages the depth coordinates in 3D graphics.
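The Z-buffer's role can be illustrated with a small sketch (the helper names here are hypothetical, for illustration only): each pixel remembers the depth of the nearest surface drawn so far, and a new fragment is written only if it is closer than what is already there:

```python
def make_zbuffer(width, height, far=float("inf")):
    """Create a depth buffer with every pixel initialized to the far plane."""
    return [[far] * width for _ in range(height)]

def plot_pixel(zbuf, frame, x, y, depth, color):
    """Write the pixel only if this fragment is nearer than the one
    already stored; otherwise it is hidden and discarded."""
    if depth < zbuf[y][x]:
        zbuf[y][x] = depth
        frame[y][x] = color

# Two overlapping fragments at the same pixel: the nearer one wins,
# regardless of draw order.
zbuf = make_zbuffer(4, 4)
frame = [[None] * 4 for _ in range(4)]
plot_pixel(zbuf, frame, 1, 1, depth=5.0, color="blue")  # far surface
plot_pixel(zbuf, frame, 1, 1, depth=2.0, color="red")   # near surface
print(frame[1][1])  # red
```

This per-pixel depth test is why the Z-buffer consumes video memory in proportion to the screen resolution.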

Video BIOS
The video BIOS or firmware chip contains the basic program that governs the video card's operations and provides the instructions that allow the computer and software to interface with the card. It holds information on memory timings, operating speeds, and voltages of the processor and RAM, among other parameters. It is possible to reflash a BIOS (for example, to enable factory-locked settings for higher performance), although this is typically only done by video card overclockers and can irreversibly damage the card.

RAMDAC
Random Access Memory Digital-to-Analog Converter. The RAMDAC is responsible for converting the digital signals produced by the computer into an analog signal that an analog computer display can understand. Depending on the number of bits used and the RAMDAC data transfer rate, the converter can support different display refresh rates. With CRT displays, it is best to work above 75 Hz and never below 60 Hz, in order to minimize flicker. (With LCD displays, flicker is not a problem.) Due to the growing popularity of digital displays and the migration of some of its functions to the motherboard, the RAMDAC is slowly disappearing. All current LCD and plasma displays and TVs work in the digital domain and do not require a RAMDAC. A few remaining legacy LCD and plasma displays feature only analog inputs (VGA, component, SCART, etc.); these do require a RAMDAC, but they reconvert the analog signal back to digital before displaying it, with the unavoidable loss of quality stemming from this digital-to-analog-to-digital conversion.
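As a rough back-of-the-envelope illustration, the pixel clock a RAMDAC must sustain can be estimated from the resolution and refresh rate, plus an overhead factor for the retrace blanking time outside the visible area. The 1.32 blanking factor below is an assumed typical value, not a specification figure:

```python
def ramdac_pixel_clock_mhz(width, height, refresh_hz, blanking=1.32):
    """Approximate pixel clock (MHz) a RAMDAC needs for an analog mode.
    The blanking factor (assumed ~1.32 here) accounts for horizontal
    and vertical retrace time outside the visible pixels."""
    return width * height * refresh_hz * blanking / 1e6

print(f"{ramdac_pixel_clock_mhz(1024, 768, 75):.1f} MHz")
```

For 1024x768 at 75 Hz this gives roughly 78 MHz, which is why RAMDACs on later cards were rated at several hundred MHz to reach higher resolutions and refresh rates.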

Outputs
S-video (TV-out), DVI and HD-15 outputs.

The most common connection systems between the video card and the computer display are:
HD-15: Analog standard adopted in the late 1980s and designed for CRT displays; also called the VGA connector. Problems with this standard include electrical noise, image distortion, and sampling errors when evaluating pixels.
DVI: Digital standard designed for displays such as LCDs, plasma screens and video projectors. It avoids image distortion and electrical noise by mapping each pixel generated by the computer to a display pixel at the display's native resolution.
S-Video: Included to allow connection to DVD players, video recorders and video game consoles.

Motherboard interface
Chronologically, the main connection systems between video card and motherboard have been:
ISA: 16-bit architecture, 8 MHz. Released in 1981 by IBM; dominant in the marketplace during the 1980s.
MCA: 32-bit, 10 MHz. Released in 1987 by IBM. It was not compatible with earlier motherboards.
EISA: 32-bit, 8.33 MHz. Released in 1988 to compete with IBM's MCA. Compatible with earlier motherboards.
VESA: An ISA extension. 32-bit, 33 MHz.
PCI: 32-bit, 33 MHz. Replaced the previous buses from 1993. PCI allowed dynamic connection between devices, avoiding manual jumper adjustments. PCI-X, introduced in 1998, extended PCI to 64 bits and up to 133 MHz.
UPA: An interconnect bus architecture introduced by Sun Microsystems in 1995. 64-bit, initially at 67 or 83 MHz.
AGP: First used in 1997. A bus dedicated to graphics, 32-bit, 66 MHz.
PCI-Express: A point-to-point interface released in 2004. By 2006 it provided double the data transfer rate of AGP. It should not be confused with PCI-X, an enhanced version of the original PCI specification.
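The peak theoretical bandwidth of each parallel bus above follows directly from its width and clock. This is a simple illustrative calculation; real sustained throughput is lower, and it does not apply to serial links like PCI-Express:

```python
def bus_bandwidth_mb_s(width_bits, clock_mhz):
    """Peak theoretical bandwidth of a parallel bus in MB/s:
    bytes per transfer times million transfers per second."""
    return (width_bits / 8) * clock_mhz

# Width (bits) and clock (MHz) figures from the list above
buses = {
    "ISA": (16, 8),
    "MCA": (32, 10),
    "EISA": (32, 8.33),
    "VESA": (32, 33),
    "PCI": (32, 33),
    "AGP 1x": (32, 66),
}
for name, (width, clock) in buses.items():
    print(f"{name}: {bus_bandwidth_mb_s(width, clock):.0f} MB/s")
```

This is how PCI's familiar 132 MB/s figure arises (4 bytes x 33 MHz), and it makes clear why late-1990s 3D cards outgrew PCI and moved to AGP.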

Cooling devices
Heat sink with fan attached.
Under load a video card dissipates a lot of power as heat, and the resulting high temperatures can cause a breakdown. Cooling devices are incorporated to avoid excessive heat.

There are three types of cooling devices:
Heat sink: generally referred to as a passive cooling device, it has no moving parts and, therefore, is soundless and very reliable; it absorbs and dissipates heat from the GPU using thermal contact (by either direct or radiant contact with a cooling medium such as air). Its effectiveness depends on its size and other characteristics including shape and material (generally copper or aluminium). To increase effectiveness, this is typically (but not necessarily) combined with a computer fan.
Computer fan: sometimes known as an active cooling device, a small electrical fan which drives air across a heat sink, generating a small amount of noise in the process. It cools more effectively than a heat sink alone, but its moving parts mean it requires maintenance and possible replacement in the long term.
Water block: a heat sink which transfers heat from the GPU to a circulating liquid rather than to the air. The liquid is carried outside the computer case, where the heat is dissipated to the air by a radiator (with or without a fan). This is typically part of a larger liquid-cooling system for the computer as a whole; its main advantage is that the heat sink's design and positioning are not restricted by having to fit inside the computer case.

Power supply
Until 2006, video card power consumption was not a big problem; since then, however, the trend has been toward ever higher consumption. Although power supplies have increased their output too, the bottleneck is the PCI-Express connection, which is limited to supplying 150 W. Nowadays, video cards that consume more than 150 W usually include a six-pin power socket that connects directly to the power supply, bypassing the motherboard and, therefore, the PCIe slot's power limit.
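Given the 150 W limit quoted above, the share of a card's power that must arrive through auxiliary connectors is a simple subtraction. This helper is purely illustrative (hypothetical name, using the figure from the text, not a specification):

```python
def required_aux_power(card_watts, pcie_limit_watts=150):
    """Watts that must come from auxiliary power connectors once a
    card exceeds the PCI-Express limit quoted in the text (150 W)."""
    return max(0, card_watts - pcie_limit_watts)

print(required_aux_power(200))  # 50
print(required_aux_power(100))  # 0
```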
