Archive

Posts Tagged ‘nvidia’

Alphacool to 3D-scan GPUs to make waterblocks for non-reference cards

Non-reference graphics cards often have capacitors and VRM circuitry in different places from reference PCBs, making it tricky to produce universal waterblocks. Alphacool will soon be able to tailor-make blocks for specific non-reference models.


Alphacool has announced that it will soon be able to 3D-scan graphics cards with non-reference PCBs in order to make custom waterblocks far more easily.

The company has made use of a cutting-edge 3D scanner to measure PCBs accurately, allowing it to quickly manufacture custom cooling plates that are compatible with a new range of waterblocks.

In addition, it’s offering a free waterblock set for your graphics card (see requirements below) in return for you loaning it to the company. This means it can scan your model and add it to its manufacturing database so others can potentially buy it too.

In the past, if you owned a graphics card with a non-reference PCB – that is, one that perhaps had additional power circuitry added to offer better overclocking, or even just a few capacitors moved around – you were very often out of luck if you later wanted to water-cool it.

This was for the simple reason that it wasn’t worth waterblock manufacturers’ time to go through their usual lengthy production process to create a new waterblock that far fewer people would buy compared to reference models.

The dimensions will be used to create a custom base plate that cools the memory and VRMs, which attaches to a backplate and universal waterblock that cools the GPU core directly. As the waterblock is universal, you can re-fit it to future GPUs and just buy a new base plate and backplate for the new GPU.

The baseplate is made from aluminium (at no point does it come into contact with the coolant), and Alphacool claims the MOSFETs on the card will be cooled to the same level as they would be on an air-cooled graphics card running its fan at full speed, while the core and RAM should see a temperature drop in the region of 30-40°C.

At launch, the new waterblock range and 3D-scanning service will cater only for Nvidia GeForce 7XX-series and AMD Radeon 2XX-series models, with reference GeForce GTX 750 Ti and Titan Black waterblock kits both available from day one.

If you’re interested in sending your non-reference GeForce 7XX-series or Radeon 2XX-series card to Alphacool, you can contact the company directly at www.alphacool.com or via your local Alphacool etailer.

Alphacool will also be producing a unique ‘multi-bridge’ connection system for customers with more than one GPU. The bridge will effortlessly connect the waterblocks as well as letting the customer illuminate the Alphacool logo with 5mm LEDs.

To support the modding community, Alphacool will be publishing the dimensions of the ‘multi-bridge’ cover so you can make your own. Also, if there is enough demand for a specific brand or logo, Alphacool will make custom covers available.

This could in theory be one way to create a proper water-cooling solution for AMD’s new R9 295X2 as well. Do you think Alphacool’s idea could be useful? Have you had to opt for reference models in the past as you needed to water-cool them? Let us know your thoughts in the forum.

Article source: http://feedproxy.google.com/~r/bit-tech/news/~3/OIFQEl0z-PE/1



AMD Radeon R9 295X2 8GB Review

Manufacturer: AMD
UK price (as reviewed): Approx. £1,050 – £1,100 (inc VAT)
US price (as reviewed): MSRP $1,499 (ex Tax)

At launch, the R9 290X attracted criticism for running too hot and loud, although recent custom-cooled cards have shown that this was as much to do with a poorly designed, cheap and inadequate stock cooler as with an inherently inefficient GPU. Even so, the idea of putting two of these toasty chips onto a single card would have seemed ludicrous to most. Recent rumours and teasers, however, began to indicate that AMD was planning exactly that. Today, we can reveal what AMD has been up to: meet the Radeon R9 295X2, appropriately codenamed Vesuvius.

It’s the first reference card to be water-cooled. It uses a sealed loop cooler from Asetek, with a similar design to that seen on the Asus Ares II. Like that card, it also comes in a padded flight case. Everything about it, from packaging to construction, is premium, which is no surprise – it’s launching at $1,500, with our sources indicating that its price will approach the £1,100 mark this side of the Atlantic. It’s due to have retailer availability in the week of April 21st. Without doubt, at this price, it’s reserved for a lucky few, but as enthusiasts, we still jump at the chance to look at something as gloriously over the top as this.

We’ll start with the core hardware. Effectively, the R9 295X2 comprises two fully functional Hawaii GPUs with 44 Compute Units apiece, each with a full complement of 4GB GDDR5 (for the full lowdown on Hawaii see here). Of course, this means it will be marketed as having 5,632 cores and 8GB VRAM, but the correct way to think of it is 2 x 2,816 cores and 2 x 4GB VRAM, as the two GPUs are still functionally separate. Either way, it’s obvious that there is a serious amount of firepower here. These specs also make this the fastest single consumer graphics card in the world, though Nvidia’s GeForce GTX Titan Z is set to challenge that. As you’d expect, both TrueAudio and Mantle are fully supported.

Unusually for a dual-GPU card, clock speeds are no lower than those of the single-GPU variant – in fact, they’ve been given a marginal bump. The maximum GPU clock speed is 1,018MHz compared to 1,000MHz before, while the memory runs at the same 5GHz effective. AMD claims that this, along with improved cooling (which allows the GPUs to run at maximum frequency more often) and hard work from its driver team, means it will actually be faster than two stock-speed R9 290X cards running in CrossFire. AMD also claims that the best scaling will be at 4K resolutions, and freely admits that this card is overkill for 1080p and even 1440p.
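As a quick sanity check on those numbers – peak theoretical throughput, not measured performance, and assuming GCN’s usual rate of two single-precision FLOPs per stream processor per clock:

\[
2\ \text{GPUs} \times 2{,}816\ \text{SPs} \times 2\ \tfrac{\text{FLOPs}}{\text{clock}} \times 1.018\,\text{GHz} \approx 11.5\ \text{TFLOPS (single precision)}
\]

On paper, that headline figure is what puts the card comfortably ahead of any single-GPU board.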

Article source: http://feedproxy.google.com/~r/bit-tech/hardware/~3/uAL3fA8wqUE/1


Nvidia claims Mantle-beating gains with 337.50 Beta

Nvidia’s latest beta drivers, 337.50, come with a claimed performance boost that can best AMD’s Mantle in some titles – and without the need for devs to go back and add support.


Nvidia has made big promises for its latest GeForce beta driver, with claims that optimisations made to its DirectX implementation can result in performance improvements that in some cases better those promised by rival AMD’s Mantle architecture, without the need to modify existing games.

Released late yesterday, the GeForce 337.50 Beta driver bundle makes various optimisations to DirectX 11 titles which, Nvidia claims, can have a dramatic effect on performance. According to the company’s internal testing, those running single-GPU systems will see a boost as high as 64 per cent in selected titles, while those on multi-GPU systems via SLI can expect gains as high as 71 per cent.

Granted, those gains apply only to Nvidia’s latest and greatest graphics cards, and even then only on selected titles: the company has measured a near-50 per cent boost in the performance of Sniper Elite v2 and Total War: Rome 2 using a pair of GeForce GTX 780 Ti boards in SLI with an Intel Core i7-3960X and 8GB of RAM, just over 40 per cent in Alien vs. Predator, and just shy of 30 per cent in Sleeping Dogs. Other games highlighted by the company include Battlefield 4, with a promised 12 per cent boost, and Thief, with a 7 per cent boost.

The company also teased its DirectX 12 implementation, promising that when Microsoft’s next-generation API is ready for release it will bring even higher performance boosts for those whose graphics chips will support the standard.

Additional announcements made by Nvidia include the release of the GeForce Experience 2.0 software, offering optimisations for more than 150 games and the introduction of ShadowPlay and GameStream support for laptop users. The SHIELD hand-held console, yet to launch in the UK, has also seen a software update which adds the ability to stream games from a desktop over the internet as well as the local area network.

The company’s full performance figures for the new driver update, along with a link to download the drivers, are available on the official GeForce site.

Article source: http://feedproxy.google.com/~r/bit-tech/news/~3/9L4V4BKGY78/1



AMD unveils FirePro W9100 16GB GPU

AMD’s FirePro W9100 offers a massive 16GB of GDDR5 memory, connected to a Hawaii GCN GPU offering up to five teraflops of single-precision compute power.


AMD has announced its latest workstation-oriented graphics board, the FirePro W9100, which packs an impressive 16GB of GDDR5 memory – nearly three times that of its predecessor, the FirePro W9000.

The AMD FirePro W9100 is based around a 28nm implementation of the Graphics Core Next 1.1 ‘Hawaii’ architecture, an upgrade from the GCN 1.0 ‘Tahiti’ of its predecessor. Although full specifications aren’t due to be announced until early next month, the company has confirmed an increase from 2,048 stream processors to 2,816 and from 128 texture units to 176. The result: five teraflops of single-precision compute performance, or 2.67 teraflops of double-precision.
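Working backwards from those figures gives a feel for the clock speed AMD hasn’t yet announced. Assuming GCN’s usual two single-precision FLOPs per stream processor per clock, the implied core clock – an inference on our part, not a confirmed specification – is:

\[
f \approx \frac{5 \times 10^{12}\ \text{FLOPS}}{2{,}816 \times 2} \approx 888\,\text{MHz}
\]

The double-precision figure, at \(2.67/5 \approx 1/2\) the single-precision rate, also points to the 1:2 DP ratio typical of AMD’s professional Hawaii parts, rather than the 1:8 of the consumer R9 290X.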

Now, to put that into perspective, Nvidia’s latest GeForce GTX Titan Z board offers eight teraflops of single-precision performance, but requires two GPUs to do it – and each GPU has access to only 6GB of the shared 12GB of GDDR5 memory. The FirePro W9100, on the other hand, has a single GPU with the entire 16GB to itself – and, the company has confirmed, the boards will support stacking of up to four cards via CrossFire for systems that require higher performance.

The board, AMD explained during its press conference, is designed for those working on ultra-high resolution projects. As well as the increasingly popular UHD and 4K resolutions, the company has claimed to be seeing demand from markets looking to work with resolutions as high as 8K – which needs a significantly larger framebuffer than the company’s previous FirePro W9000 6GB could offer.

Official pricing for the FirePro W9100 has yet to be confirmed, but those interested in getting their hands on one – or four – can expect to dig deep: as professional products, FirePros demand hefty price tags. The FirePro W9000 launched in August 2012 at $4,000, and the W9100 should easily smash that figure when it formally launches on the 7th of April.

Article source: http://feedproxy.google.com/~r/bit-tech/news/~3/3RnIMTs26HM/1



Nvidia launches Tegra-based Jetson K1 SBC

Nvidia’s Tegra K1 processor forms the heart of the Jetson K1, a new hobbyist-targeted single-board computer from the graphics giant.


Nvidia has announced plans to get into the hobbyist development market with the launch of a new single-board computer based around its Tegra K1 system-on-chip (SoC) processor: the Jetson K1.

Based on a compact board layout measuring just 127mm x 127mm and 26mm in height, the Jetson K1 packs at its heart a Tegra K1 SoC boasting 192 Kepler-class graphics processing cores and the company’s traditional arrangement of four ARM Cortex-A15 CPU cores plus a fifth low-power companion core for background tasks. 2GB of memory is included on the board, along with a 16GB eMMC flash module for local storage – expandable through an SD slot or the on-board SATA port.

The board also boasts a single half-mini-PCIe slot, a USB 2.0 port and a further USB 3.0 port, on-board Realtek-powered gigabit Ethernet connectivity, analogue audio input and output, an RS232 serial port, a full-size HDMI port and a 4MB boot flash module. An expansion port also offers a number of general-purpose input-output (GPIO) connections, along with serial UART, I2C, HSIC, SPI, CSI and DisplayPort or LVDS digital video outputs.
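For hobbyists, those GPIO pins are a big part of the appeal. As a flavour of how they might be driven under Linux, here’s a minimal sketch using the kernel’s long-standing sysfs GPIO interface; the pin number 57 is purely hypothetical (the real mapping depends on Nvidia’s shipped kernel and pin multiplexing), and the program assumes root privileges:

/* Blink a GPIO pin via the legacy /sys/class/gpio interface.
 * Pin 57 is a hypothetical example, not the Jetson's real mapping.
 * Build with: cc -std=c99 -o blink blink.c  (run as root) */
#include <stdio.h>
#include <unistd.h>

static int write_str(const char *path, const char *s)
{
    FILE *f = fopen(path, "w");
    if (f == NULL) { perror(path); return -1; }
    fputs(s, f);      /* the kernel parses the written string */
    fclose(f);
    return 0;
}

int main(void)
{
    write_str("/sys/class/gpio/export", "57");             /* expose the pin */
    write_str("/sys/class/gpio/gpio57/direction", "out");  /* set as output */

    for (int i = 0; i < 10; i++) {
        /* drive high on even iterations, low on odd ones */
        write_str("/sys/class/gpio/gpio57/value", (i % 2 == 0) ? "1" : "0");
        sleep(1);
    }

    write_str("/sys/class/gpio/unexport", "57");           /* tidy up */
    return 0;
}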

Zotac has confirmed that it will be bringing the Jetson K1 to the UK, but has yet to announce availability and pricing. Nvidia’s US arm, meanwhile, is taking pre-orders for the device in North America for $192 (around £116 excluding taxes) – making it a significantly pricier alternative to the popular, although considerably less powerful, Raspberry Pi.

Article source: http://feedproxy.google.com/~r/bit-tech/news/~3/aqcjo-CIgW0/1


Nvidia unveils GeForce GTX Titan Z at GTC

Jen-Hsun Huang surprised crowds at the GTC 2014 event with the unveiling of the company’s most expensive GeForce card ever, the GTX Titan Z.


Nvidia surprised the crowds at its annual GPU Technology Conference last night with the announcement of a new top-end graphics card, the dual-GPU GeForce GTX Titan Z.

Featuring a pair of Kepler GK110 chips, the Titan Z offers 5,760 CUDA processing cores, with both GPUs running at full speed. Each has 6GB of GDDR5 video memory, for a combined total of 12GB – a figure more usually associated with the company’s Tesla accelerator boards than its GeForce consumer GPUs. Sadly, Nvidia did not share full specifications at the event beyond the usual claim that it’s the world’s fastest graphics card and a promise that both GPUs are clock-linked, meaning there’ll be no bottlenecking involved.

Nvidia was also quiet on thermal design power (TDP) at the unveiling, but with the single-GPU GeForce GTX Titan on which the Titan Z is based drawing 250W, it’s hard to imagine that the company has found a way to jam two GPUs onto a card without a major increase in power draw. That said, Nvidia has claimed that the card will be ‘cool and quiet rather than hot and loud’, promising low-profile components for a triple-slot design and ducted baseplate channels to reduce air turbulence and therefore noise levels. Huang also claimed that a triple-Titan Z setup, for those that could afford such a thing, would draw around 2kW in total – suggesting a 500W+ TDP per card if you allow for other system components.

One thing Nvidia was willing to share, surprisingly, was the price: the card will launch in the US at a wallet-emptying $2,999. Compared with the company’s existing GeForce line-up, that’s a seriously high price tag – but with eight teraflops of floating-point performance, and Jen-Hsun Huang tellingly describing it as the card for those ‘in desperate need of a supercomputer that you need to fit under your desk,‘ it seems that despite its GeForce moniker the Titan Z is being positioned as an alternative to the Tesla accelerator board family.
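That eight-teraflop claim also lets us estimate what ‘full speed’ means here. Assuming Kepler’s usual two single-precision FLOPs per CUDA core per clock, the implied clock – an inference on our part, not a quoted specification – is:

\[
f \approx \frac{8 \times 10^{12}\ \text{FLOPS}}{5{,}760 \times 2} \approx 694\,\text{MHz}
\]

That would put the Titan Z well below the 837MHz base clock of the single-GPU Titan, presumably to keep the dual-GPU board within a manageable power envelope.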

Article source: http://feedproxy.google.com/~r/bit-tech/news/~3/bQTCtd8DnUM/1



EVGA announces GeForce GTX 780 6GB models

EVGA’s new GTX 780 models, one with stock cooler and one with dual-fan ACX cooler, both come with a boost to 6GB of GDDR5 memory.


EVGA has announced new entries in its GeForce GTX 780 line-up, boosting the video memory available to an impressive 6GB of GDDR5 without the need to splash out on the likes of a Titan.

The company has confirmed plans to launch a pair of GeForce GTX 780 6GB models, starting with a version featuring Nvidia’s stock cooler design. As with the existing models, the EVGA GeForce GTX 780 6GB SC includes a Kepler GPU with 2,304 CUDA cores, a base clock of 941MHz rising to 993MHz under boost conditions, and a 384-bit memory bus. Where it differs from the usual models is in its use of 6GB of GDDR5 memory, in place of the usual 3GB.

The stock cooler edition is to be joined by a premium version offering EVGA’s customised ACX dual-fan cooler. Switching the cooler out, the company has claimed, allows for a factory overclock that sees the base clock of the card rise to 967MHz and the boost clock break the gigahertz barrier at 1,020MHz. The rest of the card’s specifications, including the boost to 6GB, remain the same.

Both models will be covered under EVGA’s Step-Up programme, which allows anyone who has bought an EVGA-branded graphics card in the 90 days prior to the launch of the new boards to upgrade to the new model. Those taking the company up on the offer will, naturally, be asked to pay the difference in cost between their existing board and the new 6GB models.

Official UK pricing for the EVGA GeForce GTX 780 SC and GeForce GTX 780 ACX has not yet been confirmed, with EVGA offering a recommended retail price for the former of $549.99 (around £334 excluding taxes) in the US as a guideline.

Article source: http://feedproxy.google.com/~r/bit-tech/news/~3/OnyZx88ImaY/1



Nvidia promises Titanfall improvements

Respawn’s Titanfall is due to get a lot prettier, at least for GeForce owners, thanks to an incoming patch from Nvidia’s GameWorks programme.


Respawn Entertainment’s mech-based first-person shooter Titanfall, based as it is on the ageing Source engine, might not be anyone’s first choice for showing off just what modern PC gaming can do – but Nvidia has claimed it’s going to get a lot better soon, promising 4K support and numerous tweaks for users of its GPUs.

Launched exclusively on Xbox One, Xbox 360 and Windows, Titanfall is a multiplayer shooter which combines parkour-based on-foot combat with the ability to summon and pilot gigantic mechs dubbed Titans. Unlike those in the well-regarded MechWarrior series, these mechs play more like giant people than war machines, and come with heavy armour and armaments to help turn the tide of battle in your team’s favour.

There’s no denying that the PC port of the game, as per usual, is easily the best-looking of the bunch, despite its use of Valve’s somewhat ageing Source engine. Nvidia and Respawn have broken cover to promise that things are going to get a lot better in the near future, thanks to the former company’s GameWorks programme.

‘We are working towards implementing several Nvidia GameWorks technologies that can make Titanfall look and play even better, including TXAA for high quality anti-aliasing and HBAO+ technologies for improved shadow details,‘ claimed Respawn’s Vince Zampella of the partnership. ‘We will also be working towards updates for SLI and 4k support to ensure a fantastic high end PC experience.‘

The use of temporal anti-aliasing should mean a reduction in the flickering of edges and transparencies in motion, while horizon-based ambient occlusion plus will result in shadows that behave as you might expect when they encounter non-uniform objects in the game. The support for multi-GPU rendering via SLI will also be welcomed by those with such systems, who are currently restricted to using only a single GPU if they want to play the game.

Sadly, while Respawn has already sent a server patch for glitch fixing and balancing live, neither company has yet offered a release date for the Nvidia GameWorks update.

Article source: http://feedproxy.google.com/~r/bit-tech/news/~3/XmFaNjrUVCs/1



Microsoft announces DirectX 12 at GDC 2014

Microsoft’s DirectX 12 has been formally announced, and rumours suggesting that it will offer CPU performance enhancements similar to AMD’s Mantle have proven true.


True to its word, Microsoft has used the Game Developers Conference to formally announce DirectX 12 with the promise of significant performance improvements thanks to an approach that allows programmers to get closer to the bare-metal hardware.

Introduced back in 1995 as the Windows Games SDK, DirectX is a collection of application programming interfaces (APIs) that allow developers to abstract themselves away from the hardware in a system. By far its most famous component is Direct3D, added to DirectX in 1996, which allows for high-performance 3D acceleration across any Direct3D-certified graphics processor – first introduced as a lightweight consumer-grade alternative to the Khronos Group’s OpenGL API, which was at the time focused on professional use on workstation-grade hardware.

Now, Microsoft’s Direct3D – and, by extension, the DirectX bundle – has grown into the dominant standard in the PC gaming industry, and even extends to consoles since the launch of the Xbox family. Microsoft claims that in its latest incarnation DirectX significantly reduces CPU overhead in games, using techniques similar to AMD’s hardware-specific Mantle technology to allow programmers lower-level access to the graphics hardware.

‘First and foremost, [Direct3D 12] provides a lower level of hardware abstraction than ever before, allowing games to significantly improve multithread scaling and CPU utilisation,‘ claimed Microsoft’s Matt Sandy of the new release. ‘In addition, games will benefit from reduced GPU overhead via features such as descriptor tables and concise pipeline state objects. And that’s not all – Direct3D 12 also introduces a set of new rendering pipeline features that will dramatically improve the efficiency of algorithms such as order-independent transparency, collision detection, and geometry culling.‘

To back up his claims, Sandy offered a demonstration of 3DMark running on Direct3D 11 compared to Direct3D 12 which halved the CPU time required to render a scene while also helping to spread the load more evenly across multiple processor cores – something from which the eight-core Xbox One will draw considerable benefit.

Microsoft has promised that DirectX 12 will be supported by current-generation hardware, with AMD confirming that all Graphics Core Next (GCN) GPUs, and Nvidia that all Fermi, Kepler and Maxwell GPUs, will gain DirectX 12 support through future driver updates. What the company has so far been silent on, however, is operating system support: while Microsoft has promised that DirectX 12 will launch across Windows desktops, laptops, tablets and mobiles as well as the Xbox One, it hasn’t yet confirmed whether it will be available on anything but its latest Windows revisions.

Although Microsoft has shown off working Direct3D 12 implementations – including the application and driver layers – it has not yet suggested a launch date for the bundle. It has, however, posted some technical details over on the MSDN Blog.

Article source: http://feedproxy.google.com/~r/bit-tech/news/~3/mTWdLPyi_-E/1


Gigabyte GeForce GTX 780 GHz Edition 3GB Review

Manufacturer: Gigabyte
UK price (as reviewed): £384.38 (inc VAT)
US price (as reviewed): $519.99 (ex Tax)

We’ve looked at a fair number of custom-made, high-end GPUs recently, and the latest to enter our labs is the Gigabyte GTX 780 GHz Edition. With modifications to the clock speeds, power circuitry and cooling equipment, there’s plenty to get our teeth into. It comes in at £385, which is around £25 more than stock cards but still slightly less than basic AMD R9 290X cards.

The GTX 780 uses a cut-down GK110 GPU with a total of 2,304 stream processors and 192 texture units. It has the full 384-bit wide memory interface and 48 ROPs of the GTX 780 Ti and Titan Black, however, as well as 3GB of GDDR5.

As Gigabyte’s naming scheme hints, its overclocked card is very fast straight out of the box. It ships with a base core clock of 1,020MHz, a massive 18 per cent faster than the default 863MHz. This also gives it a rated boost clock of 1,072MHz, though our sample stayed at a mighty 1,176MHz under load. This overclock is very impressive, as it’s almost the same as the one we achieved with our stock GTX 780, indicating that Gigabyte has selected only the best GTX 780 GPUs for use in this card. Sadly, it hasn’t overclocked the memory, which remains at 1.5GHz (6GHz effective). This is a shame, as with our original GTX 780 we were able to raise this all the way to 7GHz.
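For clarity on that ‘effective’ figure: GDDR5 transfers four bits per pin per clock, and combining the resulting data rate with the 384-bit bus gives the card’s memory bandwidth:

\[
1.5\,\text{GHz} \times 4 = 6\,\text{GT/s}, \qquad 6\,\text{GT/s} \times \frac{384\,\text{bit}}{8\,\text{bit/byte}} = 288\,\text{GB/s}
\]

The 288.4GB/sec quoted in the specifications below implies the precise memory clock is a touch over 1.5GHz (around 1,502MHz).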

Physically, the GTX 780 GHz Edition is 287mm long (20mm longer than stock) but crucially won’t occupy more than two expansion slots, unlike the Sapphire R9 290X Tri-X. It maintains the standard video outputs, with the DisplayPort connection ensuring G-Sync compatibility. Along the top, the usual pair of SLI connectors is present, but Gigabyte has beefed up the power connectivity, going from an 8-pin/6-pin combination to dual 8-pin connections to provide a little more juice – a 600W power supply is recommended as a minimum. Finally, on the back is a brushed metal backplate, and as with the Asus R9 290 DirectCU II, this is primarily for stability and aesthetic purposes.

While the backplate isn’t used to cool the card, there’s plenty of cooling going on at the front. The custom Windforce 3X cooler is rather hefty and responsible for the card’s extra length. There’s also an extended section at the top featuring the Windforce logo which increases the height of the card by 21mm beyond the edge of the PCI bracket. The open black shroud means that heat is dumped into your chassis from the three slimline 80mm fans, which are powered and controlled by a single header on the PCB.

Removing the cooler reveals a copper baseplate for the GPU connected to six copper heat pipes (two 8mm, four 6mm) in the main heatsink. One of these heat pipes loops back into the heatsink while the remaining five feed the secondary one. Aluminium plates and thermal pads are used to cool all twelve memory chips as well as the MOSFETs of the eight GPU power phases.

Looking at the PCB, we see Gigabyte has stuck with the SK Hynix H5GQ2H24AFR-R0C memory chips, which are rated for 6Gbps. The GTX 780 usually has a 6+2 phase arrangement, but Gigabyte has given the GPU two extra phases for a total of 8+2 power phases, making for a supposedly cleaner supply of power to the GPU.

Unfortunately, we no longer have samples of the PNY GTX 780 XLR8 OC or Asus ROG Poseidon GTX 780 with which to compare this card. However, we do still have up-to-date results for the Sapphire R9 290X Tri-X, which is similarly a high-end custom-cooled card.

Specifications

  • Graphics processor Nvidia GeForce GTX 780, 1,020MHz (boosting to 1,072MHz)
  • Pipeline 2,304 stream processors, 192 texture units, 48 ROPs
  • Memory 3GB GDDR5, 6GHz effective
  • Bandwidth 288.4GB/sec, 384-bit interface
  • Compatibility DirectX 11, OpenGL 4.3
  • Outputs/Inputs Dual Link DVI-D, Dual Link DVI-I, HDMI, DisplayPort
  • Power connections 2 x 8-pin PCI-E top-mounted
  • Size 287mm long, dual-slot
  • Warranty Retailer dependent

Article source: http://feedproxy.google.com/~r/bit-tech/hardware/~3/aIPQBWqPNsE/1

