Chapter 3: Picking the PC components for a digital pathology lab

I am very excited about this part (no pun intended!). I have been building my own PCs for a very long time, and it is always a wonderful, exciting, and sometimes borderline obsessive process.

We will repeat a few terms here on purpose, so you have handy terminology if you briefly forget or are unsure about a specific PC component.

Knowing which PC components your machine needs is a huge plus for many reasons:

  • Helps you understand your needs better and lets you budget wisely. This matters in an institutional environment where every cent of funding counts.
  • Helps you make a stronger case to the finance department or the principal investigator about why you need a specific new part.
  • Helps you troubleshoot faster, which cuts downtime. As the builder, or at least the architect, nobody knows that system like you do.

In pathology, you look at tiny details all day. The computer that shows you those details must be fast, stable, and honest about colors. This guide explains each part of a workstation and storage setup in simple terms, and it gives sensible options at different budget levels. It also explains why some choices matter for patient care and team efficiency.

Whole-slide images (WSIs) are very large. They are often saved as “pyramids,” which means the file contains several versions of the same slide at different zoom levels. That design lets you jump smoothly from low power to high power while the computer loads only the piece you are looking at. The workstation, storage, and network must work together to keep that smooth feeling. [1,2]
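
As a concrete illustration, the short Python sketch below uses the OpenSlide bindings to list a slide’s pyramid levels and to read only a small region at full resolution, which is essentially what a viewer does while you pan. The file name is a placeholder, and the openslide-python package is assumed to be installed.

    import openslide

    # Open a whole-slide image (the path is a placeholder for illustration).
    slide = openslide.OpenSlide("example_slide.svs")

    # Each pyramid level stores the same slide at a different downsample factor.
    for level in range(slide.level_count):
        width, height = slide.level_dimensions[level]
        downsample = slide.level_downsamples[level]
        print(f"level {level}: {width} x {height} px (downsample {downsample:.1f})")

    # Read a single 1024 x 1024 region at level 0; the rest of the slide stays on disk.
    region = slide.read_region(location=(0, 0), level=0, size=(1024, 1024))
    print(region.size)  # a small PIL image, not the whole slide

    slide.close()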

Why picking the right parts matters

A single high-detail slide file can be many gigapixels and, when fully decoded, can occupy many gigabytes of memory. If the computer is weak in the wrong place, the viewer will feel sticky when you pan or zoom, colors can drift, and analysis can take hours instead of minutes. The right parts make slides open quickly, keep colors trustworthy, and reduce crashes during long tasks. QuPath, a popular viewer and analysis tool, benefits from multiple CPU cores and plenty of memory. [3]
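
To get a feel for the numbers, here is a back-of-the-envelope Python sketch that estimates the fully decoded size of a hypothetical slide; the pixel dimensions are invented for illustration only.

    # Hypothetical slide: 100,000 x 80,000 pixels at the base (highest-power) level.
    width, height = 100_000, 80_000
    bytes_per_pixel = 3  # 8-bit RGB, uncompressed

    gigapixels = width * height / 1e9
    decoded_gb = width * height * bytes_per_pixel / 1e9

    print(f"{gigapixels:.0f} gigapixels")            # 8 gigapixels
    print(f"~{decoded_gb:.0f} GB if fully decoded")  # ~24 GB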

About PC building

You have two paths.

  1. Buy the parts and assemble the machine. Your IT team or a local vendor can build and validate it for you.

  2. Buy a prebuilt workstation from a major vendor such as Dell or Lenovo. Many hospitals prefer this because of unified warranty, on-site support, and education pricing.

If you want a starting point for budgeting, pick one of the three builds below and adjust as needed with your IT team.

A) Viewing-only workstation (clinical review, no AI training)

  • CPU (the “brain”): mid-range 6 to 8 cores.
  • Memory (RAM): 32 GB.
  • Graphics (GPU): basic NVIDIA card with stable drivers.
  • System storage: 1 TB NVMe solid-state drive for speed. [4]
  • Bulk storage: keep large slide archives on the hospital NAS or SAN, not on this PC.
  • Network: 1 Gb Ethernet is OK; 10 Gb is nicer if IT offers it. [6]
  • Display: 27 to 32 inch IPS monitor, hardware-calibrated if possible, brightness 300 to 350 cd/m² or higher, and at least about 3 megapixels. [7]

B) Viewer plus AI assistance and small model inference

  • CPU: 8 to 12 cores.
  • Memory: 64 GB.
  • Graphics: NVIDIA GeForce class with 16 GB on-board memory. Examples: RTX 5080 16 GB or RTX 5070 Ti 16 GB. [25,26]
  • System storage: 2 TB NVMe SSD. [4]
  • Bulk storage: NAS or SAN with large hard drives for the archive; reserve this PC’s NVMe SSD for the active working set. [5]
  • Network: ask for 10 Gb to move WSIs faster. [6]
  • Display: same as above. Consider two monitors: one for reports, one for images.

C) Research workstation for heavy AI and batch jobs

  • CPU: 16 to 24 cores.
  • Memory: 128 GB or more. For long jobs, consider ECC memory, which can correct certain random memory errors that otherwise crash jobs or corrupt data. [9]
  • Graphics: NVIDIA GeForce RTX 5090 32 GB for large models and fast batch inference, or workstation-class RTX 5000 Ada 32 GB or RTX 6000 Ada 48 GB if you need pro features and long duty cycles. [24,20,19]
  • System storage: 2 to 4 TB NVMe SSD for active projects and scratch space. [4]
  • Bulk storage: central NAS or SAN with big hard drives. Keep only your “working set” on the local SSD. [5]
  • Network: 10 Gb Ethernet to shared storage. [6]
  • Display: large IPS or OLED, calibrated. If you use OLED, follow burn-in care tips. [7,8]

Component by component

What it is, why it matters, what can go wrong, and sensible options.

4.1 CPU: keeps the whole system responsive

What it is
The main processor that coordinates everything.

Why it matters
Viewers create small image tiles on the fly. Analysis tools run many tasks at once. More cores and good single-core speed help with smooth panning and multitasking. [3]
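
To see why core count matters, here is a minimal Python sketch of the general pattern analysis tools follow: split the slide into tiles and hand them to a pool of worker processes, so more cores mean more tiles processed at once. The per-tile function is a stand-in for real work such as cell detection, and the slide dimensions are invented for illustration.

    from concurrent.futures import ProcessPoolExecutor

    def process_tile(coords):
        """Stand-in for real per-tile analysis such as cell detection."""
        x, y = coords
        return (x, y, "done")

    def main():
        # Tile a hypothetical 100,000 x 80,000 pixel slide into 1024-pixel tiles.
        tile_size = 1024
        coords = [(x, y)
                  for x in range(0, 100_000, tile_size)
                  for y in range(0, 80_000, tile_size)]

        # More CPU cores allow more workers, so more tiles are in flight at once.
        with ProcessPoolExecutor(max_workers=8) as pool:
            results = list(pool.map(process_tile, coords, chunksize=64))
        print(f"processed {len(results)} tiles")

    if __name__ == "__main__":
        main()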

What can go wrong
Too few cores or slow clocks make the viewer feel laggy when you zoom or when several slides are open.

Good options
- Viewing-only: 6 to 8 cores.
- Viewer plus AI help: 8 to 12 cores.
- Research: 16 to 24 cores.

Memory pairing tip
If you leave long detection or training jobs running, ECC memory reduces the risk posed by the random memory bit errors documented in large field studies. [9]

4.2 Memory (RAM): how much slide data you can hold at once

What it is
Short-term working space. It holds the piece of the slide you are looking at and what your tools need right now.

Why it matters
More RAM means fewer pauses when you zoom or when several large slides are open. QuPath benefits from more memory to cache image tiles and run analysis faster. [3]
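
A rough sense of scale helps here. The Python sketch below estimates how many uncompressed 256 x 256 RGB tiles fit into different RAM sizes; the 25 percent headroom reserved for the operating system and other applications is an assumption for illustration.

    # One uncompressed 256 x 256 RGB tile is about 0.2 MB.
    tile_bytes = 256 * 256 * 3

    for ram_gb in (32, 64, 128):
        usable_bytes = ram_gb * 1e9 * 0.75  # assume ~25% reserved for OS and apps
        tiles = int(usable_bytes / tile_bytes)
        print(f"{ram_gb} GB RAM: roughly {tiles:,} cached tiles")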

What can go wrong
Too little RAM causes stutters or crashes.

Good sizes
- 32 GB for viewing-only.
- 64 GB for viewer plus AI assistance.
- 128 GB or more for research.

4.3 Storage inside the PC: where the operating system and your “working set” live

Simple rule
Use an NVMe SSD inside the computer. NVMe connects more directly to the CPU and has lower overhead than older SATA or SAS drives, which means lower latency and faster loads. [4]
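
If you want a quick sanity check that the drive actually behaves like a fast NVMe SSD, a timed sequential read like the Python sketch below is enough to spot an obvious bottleneck. The file path is a placeholder, and a fair test needs a file larger than the operating system’s cache.

    import time

    def measure_read_speed(path, chunk_mb=64):
        """Time a sequential read of an existing large file and report MB/s."""
        chunk_size = chunk_mb * 1024 * 1024
        total_bytes = 0
        start = time.perf_counter()
        with open(path, "rb") as f:
            while True:
                data = f.read(chunk_size)
                if not data:
                    break
                total_bytes += len(data)
        elapsed = time.perf_counter() - start
        return total_bytes / (1024 * 1024) / elapsed

    # Placeholder path: point this at a multi-gigabyte file on the drive under test.
    print(f"{measure_read_speed('scratch/large_test_file.bin'):.0f} MB/s")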

What can go wrong
A slow SATA drive will bottleneck opening slides and saving results.

Practical picks
- 1 TB NVMe for viewing-only.
- 2 TB NVMe for AI assistance.
- 2 to 4 TB NVMe for research and scratch.

4.4 Storage for the lab (NAS or SAN): where the slide library lives

Simple rule
Use large hard disk drives (HDDs) in the NAS or SAN for the shared archive. They offer many more terabytes per dollar than SSDs, which is important when you need tens to hundreds of terabytes or more. Backblaze fleet data shows that high-capacity HDDs are widely used for bulk storage, with failure rates that are published and trackable over time. [5]

Why it matters
WSIs accumulate fast. Pyramid images and formats like JPEG 2000 are efficient on disk but still decode into very large data in memory. Central storage that is expandable and backed up keeps the team productive. [1,12,13]
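
For budgeting conversations with IT, a simple capacity estimate goes a long way. The Python sketch below uses invented planning inputs (slides per day, average compressed file size, retention period) that you would replace with your own lab’s figures.

    # Assumed planning inputs; replace with your own lab's numbers.
    slides_per_day = 200
    avg_file_gb = 1.5            # compressed pyramid WSI on disk
    scanning_days_per_year = 250
    retention_years = 10

    per_year_tb = slides_per_day * avg_file_gb * scanning_days_per_year / 1000
    total_tb = per_year_tb * retention_years

    print(f"~{per_year_tb:.0f} TB per year")                               # ~75 TB per year
    print(f"~{total_tb:.0f} TB over {retention_years} years, before backups")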

What can go wrong
Keeping the archive on local PC drives is hard to back up, hard to share, and easy to lose if a single machine fails.

Practical layout
- NAS or SAN with big HDDs for long-term storage and sharing.
- Workstations keep only the “active” slides on the fast NVMe SSD.
- Ask IT to enable snapshots and to keep an off-site or cloud backup.

4.5 Graphics card (GPU)

Plain idea
A GPU is a specialist that can do huge amounts of math in parallel. That helps with AI tools and can also speed up how fast image tiles are decoded and shown.

Why we recommend NVIDIA today
- Major AI frameworks ship first-class, widely documented support for NVIDIA by default. TensorFlow’s standard GPU packages target NVIDIA CUDA. [10] PyTorch offers both CUDA and ROCm builds, but CUDA remains the most common path in labs today. [11]
- Many medical imaging ecosystems assume NVIDIA. MONAI’s whole-slide readers work with cuCIM and can send patches to a CUDA device. [14] NVIDIA’s nvJPEG2000 can accelerate JPEG 2000 decoding, a format used in pathology. [12,13]
- Real add-ons often list CUDA as the supported route. For example, a QuPath deep-learning extension for kidney pathology notes NVIDIA CUDA for GPU inference. [15]

What about AMD GPUs
AMD’s ROCm platform is growing. The official docs list supported GPUs and operating systems on Linux, and Windows has a HIP SDK that brings a subset of ROCm capabilities, with framework support still more limited than on Linux. In short, it is improving, but you should run a pilot before buying at scale for clinical research. [16,17,18]
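
Whichever vendor you choose, verify that the framework you rely on actually sees the GPU before committing to a fleet purchase. The Python sketch below uses PyTorch, whose ROCm builds expose the GPU through the same torch.cuda interface as the CUDA builds, so the same check covers both.

    import torch

    if torch.cuda.is_available():
        device = torch.device("cuda")
        print("GPU found:", torch.cuda.get_device_name(0))
        # Move a small tensor to the GPU and multiply it as a minimal end-to-end check.
        x = torch.rand(1024, 1024, device=device)
        print("matmul ok:", (x @ x).shape)
    else:
        print("No usable GPU detected; work will fall back to the CPU.")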

How to think about GPU memory
The GPU has its own on-board memory. Think of it as the tray where models and image patches are placed before the GPU processes them. If the tray is too small, jobs will fail or need to be split into many tiny pieces, which slows everything down.
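
To make the “tray” idea concrete, the Python estimate below adds up the memory taken by one batch of image patches and by a model’s weights; the patch size, batch size, and parameter count are assumptions for illustration, and real jobs also need room for activations and framework overhead.

    # Assumed inference setup, for illustration only.
    batch_size = 64
    channels, height, width = 3, 512, 512   # RGB patches
    bytes_per_value = 4                     # 32-bit floats
    model_parameters = 60e6                 # a mid-sized network, assumed

    batch_gb = batch_size * channels * height * width * bytes_per_value / 1e9
    weights_gb = model_parameters * bytes_per_value / 1e9

    print(f"one batch of patches: ~{batch_gb:.1f} GB")    # ~0.2 GB
    print(f"model weights:        ~{weights_gb:.2f} GB")  # ~0.24 GB
    # Activations and, for training, optimizer state come on top of these numbers,
    # which is why cards with more on-board memory give welcome headroom.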

Sensible choices by need
- Light needs: viewer plus occasional AI assistance
GeForce RTX 5080 16 GB or GeForce RTX 5070 Ti 16 GB. [25,26]
- Moderate to heavy: bigger models, faster batch inference, light training
GeForce RTX 5090 32 GB for more headroom. [24]
- Workstation features or very large models
RTX 5000 Ada 32 GB or RTX 6000 Ada 48 GB for pro drivers and long duty cycles. [20,19]

What can go wrong with the wrong GPU
- Too little on-board memory means models do not fit.
- Weak or immature software support means your team fights drivers instead of doing science.

Why GPU speed matters for JPEG 2000 and tiled WSIs
GPU-accelerated JPEG 2000 decoding can be many times faster than CPU-only decoding, which improves load times and tile streaming on very large WSIs. [12,13]
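
If you want to try GPU-accelerated tile reading yourself, a minimal sketch follows, assuming MONAI and cuCIM are installed on a machine with a CUDA-capable GPU; the slide path is a placeholder.

    import torch
    from monai.data import WSIReader

    # Use the cuCIM backend for whole-slide reading ("openslide" and "tifffile" also exist).
    reader = WSIReader(backend="cucim")

    wsi = reader.read("example_slide.tiff")  # placeholder path
    patch, meta = reader.get_data(wsi, location=(0, 0), size=(1024, 1024), level=0)

    # Send the decoded patch to the GPU for downstream inference.
    patch_gpu = torch.as_tensor(patch, device="cuda")
    print(patch_gpu.shape, patch_gpu.device)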

4.6 Display and monitors: the window to your diagnosis

Panel types in clinic language
- TN: very fast, but weaker color and viewing angles.
- IPS: excellent color and viewing angles. This is the safest default for pathology review.
- VA: strong contrast, but colors can shift at angles.
- OLED: superb contrast and deep blacks. Needs care to reduce the risk of image retention during long static sessions. Independent long-term tests continue to monitor for burn-in with static content. [8]

A note about “LED”
LED is the backlight technology used in most LCDs. It is not a panel type like TN, IPS, or VA.

Practical specs to ask for
- Size and resolution: at least 24 inches and 1080p or higher. Many labs prefer 27 to 32 inches for comfortable viewing. [7]
- Brightness: aim for 300 to 350 cd/m² or higher, then adjust to a comfortable level in a controlled room. [7]
- Color and calibration: ask IT or Biomed to calibrate to sRGB using a colorimeter and to check a test pattern periodically. Formal guidance links proper calibration with improved diagnostic confidence. [7]

What can go wrong
- A cheap panel can shift colors when you move your head, which changes how nuclei or eosin look.
- Displays drift over time if they are never calibrated, which can reduce diagnostic confidence. [7]

4.7 Networking: just enough to talk to IT with confidence

Why it matters
Slides usually live on a server. Your workstation streams tiles while you pan and zoom.

Plain ask
- 1 Gb Ethernet works for basic use.
- 10 Gb Ethernet feels much better for shared WSI work, especially when several users pull large files; the quick calculation below shows why. 10GBASE-T over Cat6a cabling can run to 100 meters and is common in hospitals. [6]
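
The difference is easy to put into numbers. The Python sketch below estimates the best-case transfer time for a single 2 GB slide file at both speeds, ignoring protocol overhead, so real-world times will be somewhat longer.

    file_gb = 2.0  # an example compressed WSI file size

    for link_gbps in (1, 10):
        seconds = file_gb * 8 / link_gbps  # gigabytes -> gigabits, divided by line rate
        print(f"{link_gbps} Gb Ethernet: ~{seconds:.0f} s best case per {file_gb:.0f} GB slide")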

What can go wrong
Relying on Wi-Fi or aging cabling can make slides feel sticky. If performance is poor, ask IT about 10 Gb to your reading room and confirm the cabling type and switch port options. [6]

4.8 Power, cooling, and noise: the comfort and safety pieces

Power supply
Look for an 80 PLUS rated unit. Gold or better is a good balance between efficiency and heat. Titanium is top end for efficiency. These certifications specify minimum efficiency at set loads, which reduces waste heat. [21,22]
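
As a quick illustration of what those efficiency ratings mean in practice, the Python sketch below compares wall-power draw and waste heat for a workstation whose components pull 500 W, at two rounded efficiency levels; the exact certified percentages vary by load and input voltage, so treat these as ballpark figures.

    load_watts = 500  # assumed draw of the components themselves

    for label, efficiency in (("~90% (Gold-class)", 0.90), ("~94% (Titanium-class)", 0.94)):
        wall_watts = load_watts / efficiency
        waste_watts = wall_watts - load_watts
        print(f"{label}: ~{wall_watts:.0f} W from the wall, ~{waste_watts:.0f} W lost as heat")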

Cooling and dust
Quiet, well-placed fans keep parts healthy and prevent thermal slowdowns.

UPS (battery box)
Ask facilities whether your room is on a protected circuit and if a small uninterruptible power supply is advised.


References

  1. DICOM Standards Committee. DICOM for Whole Slide Imaging. 2025. Available from: https://www.dicomstandard.org/dicomweb/wsi
  2. Moore J, Allan C, Burel JM, et al. OME-NGFF: a next-generation file format for bioimaging. v0.4.0; 2024. Available from: https://ngff.openmicroscopy.org
  3. QuPath developers. System requirements and performance tips. QuPath documentation; 2025. Available from: https://qupath.readthedocs.io/en/stable/docs/intro/faq.html#system-requirements-and-performance
  4. NVM Express, Inc. Why does NVMe technology have lower latency than SATA or SAS? NVMe FAQ; 2025. Available from: https://nvmexpress.org/faq-items/why-does-nvme-technology-have-lower-latency-than-sata-or-sas/
  5. Backblaze, Inc. Drive Stats for Q3 2025. 2025. Available from: https://www.backblaze.com/blog
  6. Cisco Systems. 10GBASE-T cabling and switch guidelines. 2024. Available from: https://www.cisco.com/c/en/us/support/docs/switches/campus-lan-switches-802-11/118612-technote-10gbase-t-00.html
  7. Williams BJ, Brettle D, Aslam M, et al. Guidance for remote reporting of digital pathology slides during periods of exceptional service pressure. J Pathol Inform. 2020;11:12. Appendix B display requirements. doi:10.4103/jpi.jpi_23_20
  8. RTINGS.com. OLED Monitor Long-Term Burn-In Test. Ongoing results 2023 to 2025. Available from: https://www.rtings.com/monitor/tests/long-term/oled-burn-in
  9. Schroeder B, Pinheiro E, Weber WD. DRAM errors in the wild: a large-scale field study. SIGMETRICS. 2009;37(1):193-204. Available from: https://research.google/pubs/dram-errors-in-the-wild-a-large-scale-field-study/
  10. TensorFlow team. Install TensorFlow with pip. GPU section. 2025. Available from: https://www.tensorflow.org/install/pip
  11. PyTorch team. Start Locally. CUDA and ROCm install selector. 2025. Available from: https://docs.pytorch.org/get-started/locally/
  12. NVIDIA Developer Blog. Accelerating JPEG 2000 decoding for digital pathology and satellite images using nvJPEG2000. 2021. Available from: https://developer.nvidia.com/blog/accelerating-jpeg-2000-decode-for-digital-pathology-and-satellite-images-using-nvjpeg2000/
  13. NVIDIA. nvJPEG2000 User Guide. v0.3.0; 2021. Available from: https://docs.nvidia.com/cuda/nvjpeg2000/index.html
  14. MONAI Contributors. Whole-slide image readers (cuCIM and OpenSlide). MONAI docs v1.4.0; 2024. Available from: https://docs.monai.io/en/stable/data.html#whole-slide-image-readers
  15. Zonta E, et al. GNCnn: A QuPath extension for glomerulosclerosis and glomerulonephritis characterization based on deep learning. J Pathol Inform. 2024;15:100222. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11288439/
  16. AMD. ROCm installation for Linux: System requirements and supported GPUs. 2025. Available from: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/reference/system-requirements.html
  17. AMD. HIP SDK on Windows: component support and limitations. 2025. Available from: https://rocm.docs.amd.com/projects/install-on-windows/en/latest/index.html
  18. AMD. ROCm compatibility matrix. Framework support and OS lists. 2025. Available from: https://rocm.docs.amd.com/en/docs-6.2.2/compatibility/compatibility-matrix.html
  19. NVIDIA. RTX 6000 Ada Generation specifications. 2025. Available from: https://www.nvidia.com/en-us/products/workstations/rtx-6000/
  20. NVIDIA. RTX 5000 Ada Generation specifications. 2025. Available from: https://www.nvidia.com/en-us/design-visualization/rtx-5000-ada/
  21. CLEAResult. 80 PLUS Power Supply Licensing and Certification Policy. Rev. 2024-01-01. Available from: https://www.clearesult.com/shop/media/eighty_plus_policy/websites/7/80_PLUS_POWER_SUPPLY_LICENSING_AND_CERTIFICATION_POLICY_1-1-2024_3.pdf
  22. CLEAResult. What is the 80 PLUS certification program? Program details and efficiency levels. 2025. Available from: https://www.clearesult.com/80plus/index.php/program-details
  23. NVIDIA Newsroom. NVIDIA Blackwell GeForce RTX 50 Series opens new world of AI computer graphics. 2025 Jan 6. Available from: https://nvidianews.nvidia.com/news/nvidia-blackwell-geforce-rtx-50-series-opens-new-world-of-ai-computer-graphics
  24. NVIDIA. GeForce RTX 5090 graphics cards. Specs page. 2025. Available from: https://www.nvidia.com/en-us/geforce/graphics-cards/50-series/rtx-5090/
  25. NVIDIA. GeForce RTX 5080 graphics cards. Specs page. 2025. Available from: https://www.nvidia.com/en-me/geforce/graphics-cards/50-series/rtx-5080/
  26. NVIDIA. GeForce RTX 5070 family. Specs page. 2025. Available from: https://www.nvidia.com/pt-br/geforce/graphics-cards/50-series/rtx-5070-family/