Nvidia V100 32GB on Dell PowerEdge R730 + ESXi

The Nvidia V100 was the top-of-the-line GPU a few years back, and the 32GB version is still powerful enough to run a lot of the AI models out there at reasonably good speed, even in 2025.

In this post, I will describe how I put a single V100 card into a Dell PowerEdge R730 server to test the viability of building a basic AI server out of slightly dated equipment. Although the Dell website and other resources state that the V100 is not supported on the R730, I can confirm that this is not true: I managed to get the V100 working on an R730 server.

One thing I was not sure about was whether it is possible to install two V100 GPUs in a single server. Perhaps some enthusiasts out there can advise.

A few things you will need to consider

(1) The power cable supplying the V100 is different from the one used by older Nvidia cards such as the M10 – although those older cards also use an 8-pin connector, the cable you need here is a different one called an EPS12V cable. I bought two of them on Amazon: https://www.amazon.com/dp/B08N4BJL2J

There are similar ones on eBay which should work as well:
https://www.ebay.com/itm/226156784263

(2) Power supply – because the GPU is quite power-hungry, it is recommended that you get PSUs rated at 1,100W or more, and make sure you have two of them. To power one GPU, 1,100W should be enough. I'm told that if you have two PSUs it is possible to configure them to run in parallel, giving you potentially 2,200W, rather than configuring one of them as redundant.

(3) Hard drive – use SSDs, as the old spindle drives would be too slow to run anything on.

(4) RAM – the old adage is that if the GPU has X GB of VRAM then you will need 2X GB of system RAM – so with 32GB of GPU VRAM you should have at least 64GB of system RAM. I actually have 256GB of RAM in this server.
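The 2X rule of thumb above is easy to sanity-check on the box itself – a minimal sketch, assuming a Linux environment where `free` is available:

```shell
# Rule of thumb: system RAM should be at least twice the GPU's VRAM.
VRAM_GB=32                       # the 32GB V100
NEED_GB=$((VRAM_GB * 2))
echo "At least ${NEED_GB} GB of system RAM recommended"

# Compare against what the server actually has installed:
free -g | awk '/^Mem:/ {print "Installed: " $2 " GB"}'
```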

The power cable is probably the most important and most fiddly part, so let me describe what I did. Some suppliers label which end is for the GPU and which end is for the server's power socket. The problem is that the cable will fit in either orientation: one way works, and the other way might fry your server and the GPU, so it is really important to get it right. There is a good video here that talks you through which end goes where: https://www.youtube.com/watch?v=Uc7msMdIHpM – watch from 1:25 to 2:00.

I have created an image to show you the difference – I hope this is clear.

When installing the GPU, use Riser 1 or Riser 2, as their PCIe slots are x16. I have read online that some people have installed the card on Riser 3 instead for better airflow, but Riser 3 is only x8, so you may take a hit on transfer performance if you do that.
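If you want to confirm which link width the card actually negotiated (for example, to check whether a Riser 3 slot dropped it to x8), you can query it from any Linux environment booted on the server – a sketch, assuming `lspci` is available and run as root so the full `-vv` output is shown:

```shell
# 10de is NVIDIA's PCI vendor ID.
# LnkCap = maximum link the card supports (x16 for the V100);
# LnkSta = the width actually negotiated (expect x8 on Riser 3).
lspci -d 10de: -vv | grep -E 'LnkCap:|LnkSta:'
```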

This is the aerial view of the server after I installed the GPU card. Next I powered up the server, hoping the video card would be detected. Originally I thought the only way to see the GPU was through iDRAC, but I found that if it is installed correctly it also shows up in the Lifecycle Controller. Please see the screenshot below:

Dell lifecycle controller R730 showing V100
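Once the host is running ESXi, you can also confirm the card is enumerated on the PCI bus from the ESXi shell (with SSH enabled) – a generic sketch, not specific to my setup:

```shell
# Quick check that the V100 is visible on the PCI bus:
lspci | grep -i nvidia

# esxcli gives more detail per device, including whether it is
# currently configured for passthrough:
esxcli hardware pci list | grep -i -B 2 -A 8 nvidia
```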

Below are some more pictures of the server setup:



BTW – for SSDs, don't get the official Dell SSDs, as they are really expensive. I got the Samsung 870 EVO SATA III 2.5″ internal SSDs and they work perfectly well with the H730P RAID controller, or even the H330. In fact, I think they should work with all of the RAID controllers that come with the R730 server.

Next I'm going to put this server into our data centre. For the operating system I will try ESXi, with Ubuntu running as one of the virtual machines using GPU passthrough.
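One thing worth knowing before attempting the passthrough: GPUs with large memory BARs like the 32GB V100 typically need the VM configured for 64-bit MMIO, and the VM should use EFI firmware. Below is a sketch of the .vmx advanced settings I expect to need, based on common reports for large-BAR GPU passthrough on ESXi; the exact MMIO size value may need adjusting for your card count.

```
# VM advanced settings (.vmx) commonly needed to pass through GPUs
# with large memory BARs, such as the 32GB V100 (use EFI firmware):
pciPassthru.use64bitMMIO = "TRUE"
pciPassthru.64bitMMIOSizeGB = "64"
```

Inside the Ubuntu guest I will then run `lspci -nn | grep -i nvidia` to confirm the card is visible to the VM, and `nvidia-smi` after installing the driver to confirm all 32GB of VRAM show up.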

Stay tuned and I will share with you how I get on with the next part.