

Innovation Delivered

A place to talk about preclinical in vivo imaging topics, trends, events, and more.

What is the value of “Out of date” review articles? Distortion of reality!

In the field of highly sophisticated preclinical imaging systems, we all know that it is important to publish articles, technical validations, and independent peer-reviewed performance evaluation papers on instrumentation. Eventually these performance evaluation, characterization, and comparison articles make their way into review articles.

The "review article" is one of the most useful tools available for individuals who need to research a certain topic in the rapidly expanding body of scientific literature. According to Huth [1] a "well-conceived review written after careful and critical assessment of the literature is a valuable document” and it spares time for researchers to keep abreast of all published information. A review article should provide a critical appraisal of the subject.

It is extremely difficult to compare the performance of two imaging systems from different vendors without a standardized methodology that is independent of the camera design. Such a methodology should be applicable to a wide range of camera models and geometries. Fortunately for principal investigators, there has been a NEMA standard for performance measurements of small animal positron emission tomographs since 2008 (NEMA Standards Publication NU 4-2008 [2]).

I have an engineering background myself, and I am always astonished by how creatively salespeople can distort reality (i.e. numbers) in their marketing materials. Back in 2007, when we started the design of our nanoScan small animal PET system at Mediso, I started an Excel sheet recording numerous specifications for every small animal PET system, commercial or academic. It currently lists about 40 preclinical systems including variants (most of them now obsolete or discontinued, such as the Siemens Inveon). As part of my position I closely follow the published performance evaluation and review articles.

Balancing a system design is a very delicate question: sensitivity and resolution do not walk hand in hand, and it is easy to get lost in the quagmire of different parameters. Ultimately the detector design, the basic parameters, and the image characteristics together define the image quality. Moreover, the image quality of a single measurement series says nothing about reproducibility, long-term imaging performance, usability, or feature set.

Review of a Review Article

My particular problem with instrumentation review articles is that they usually cover a limited, selected subset of parameters, which can subconsciously (or consciously; I will give the benefit of the doubt here) distort reality. My apologies to the authors, Kuntner & Stout, but their recent review article on preclinical PET imaging may serve as an example [3]. It is a really good article and lists various factors affecting the quantification accuracy of small animal PET systems. It is a recommended read!

Its first table shows the characteristics of preclinical PET scanners (follow the link to view the original table).

The article was published on 28 February 2014 and was originally received on 27 November 2013. It references Mediso's NanoPET/CT system based on an article from JNM 2011 [4]. However, the performance evaluation of our next-generation nanoScan PET was published online in JNM on August 29, 2013 [5]. Fortunately, Spinks and his colleagues published a new paper on the quantitative performance of the Albira PET with its largest axial FOV variant in February 2014 [6], so the Albira's characteristics will not be distorted: their 'flagship' variant is also listed. Because the researchers lacked access to projection data, the standard NEMA procedure could not be used for some of their measurements (e.g. sensitivity, scatter fraction, noise-equivalent counts).

Updated comparison table

So let's include the updated characteristics in our new table and take a closer look at the parameters.

 Characteristics of preclinical PET scanners based on publications

My problems with the original Table 1 in [3]:

  1. The 'ring diameter' was listed in the comparison table, which is quite irrelevant unless you want to disassemble the system. It is much more useful to list the bore diameter and the transaxial FOV: the bore diameter shows how wide an object you can fit into the system, while the transaxial FOV shows where you will actually collect data from!
  2. The resolution values listed are not comparable: some of them were measured according to the NEMA NU 4-2008 standard with SSRB+FBP (e.g. Inveon), and some with iterative reconstruction methods like OSEM (e.g. Genisys4). The preclinical PET NEMA standard allows only the filtered back projection reconstruction method for measuring resolution. More importantly, the results have to report the values in all directions: radial and tangential in the transverse slice, plus the axial resolution across transverse slices, at 5, 10, 15 and 25 mm radial distances from the center. Example from [5]:

    Currently, based on the published literature, the nanoScan PET subsystem from Mediso delivers the best resolution values in the NEMA NU 4-2008 measurements, even without using the sophisticated 3D Tera-Tomo reconstruction engine. Based on the original article, the reader may draw the false conclusion that the Genisys4 PET delivers the best resolution, while that is hardly the case: no FBP reconstruction values have been published for the Genisys4 so far.
  3. The 2D FBP reconstruction provides comparable information on the detector design, but not on system performance! Advanced 3D iterative reconstruction methods make it possible to incorporate many corrections, and they deliver better spatial resolution and image characteristics, if used properly. Let's call these resolution values obtained with 'advanced' reconstruction methods 'claimed by manufacturer' values.
  4. Please always pay attention to the energy window setting when comparing sensitivity values!
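The FBP-based resolution figures in point 2 boil down to measuring the FWHM of a reconstructed point-source profile. As a minimal sketch of how this is typically evaluated in the NEMA NU 4-2008 spirit (a parabolic fit for the peak, linear interpolation at half maximum) — this is not Mediso's or NEMA's own code, and the profile values below are invented:

```python
def fwhm_mm(profile, pixel_size_mm):
    """Estimate the full width at half maximum of a point-source response
    profile: the peak is taken from a parabolic fit through the maximum
    sample and its two neighbours, and the half-maximum crossings are
    found by linear interpolation between adjacent samples."""
    i = max(range(len(profile)), key=profile.__getitem__)
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    denom = y0 - 2.0 * y1 + y2
    offset = 0.5 * (y0 - y2) / denom if denom else 0.0  # sub-pixel peak shift
    peak = y1 - 0.25 * (y0 - y2) * offset               # interpolated maximum
    half = peak / 2.0
    left = right = None
    for k in range(i, 0, -1):              # walk left to the crossing
        if profile[k - 1] < half <= profile[k]:
            left = (k - 1) + (half - profile[k - 1]) / (profile[k] - profile[k - 1])
            break
    for k in range(i, len(profile) - 1):   # walk right to the crossing
        if profile[k + 1] < half <= profile[k]:
            right = k + (profile[k] - half) / (profile[k] - profile[k + 1])
            break
    return (right - left) * pixel_size_mm

# Invented triangular profile sampled at 0.5 mm pixels
print(fwhm_mm([0, 1, 3, 4, 3, 1, 0], 0.5))  # → 1.5
```

This is exactly why the reconstruction method matters: run the same point source through an iterative algorithm with resolution modelling and the profile narrows, yielding a smaller, non-comparable FWHM.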


This is a general remark for almost all review articles on preclinical PET systems, with the exception of the JNM article from Goertzen et al. [8].

If I were interested in acquiring a piece of capital equipment that will be used for at least 10 years, I would want to see more than the peak sensitivity value of the system. That sensitivity is usually valid only at one position: the center of the field of view, both axially and transaxially. In reality the typical imaged objects are mice, rats and other species, not point or line sources. The NEMA standard does contain a method for measuring and evaluating sensitivity for mouse- and rat-sized applications, covering the central 7 cm and 15 cm axial extents. The problem in practice is that these values are not listed in the articles for most systems, while they are really useful.
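The gap between peak and extent-averaged sensitivity is easy to make concrete. As a rough sketch (not the full NEMA procedure; the slice thickness and profile values below are invented), given an axial sensitivity profile obtained by stepping a point source along the scanner axis, the mouse- and rat-equivalent figures are simply averages over the central 7 cm and 15 cm:

```python
def central_average(profile_pct, slice_mm, extent_mm):
    """Average an axial sensitivity profile (% per axial position) over a
    centered extent, e.g. 70 mm ('mouse') or 150 mm ('rat').
    If the axial FOV is shorter than the extent, the whole profile is used."""
    n_slices = min(len(profile_pct), int(round(extent_mm / slice_mm)))
    lo = len(profile_pct) // 2 - n_slices // 2
    window = profile_pct[lo:lo + n_slices]
    return sum(window) / len(window)

# Invented 17-position profile (10 mm steps → 170 mm axial FOV), peaking at 9 %
profile = [1, 2, 3, 4, 5, 6, 7, 8, 9, 8, 7, 6, 5, 4, 3, 2, 1]
print(max(profile))                                 # peak sensitivity: 9
print(round(central_average(profile, 10, 70), 2))   # mouse (7 cm): 7.29
print(round(central_average(profile, 10, 150), 2))  # rat (15 cm): 5.27
```

Even with this toy triangular profile, the headline peak of 9 % shrinks to roughly 7.3 % over a mouse-length extent and 5.3 % over a rat-length one, which is the number that actually matters for whole-body imaging.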

 Characteristics of preclinical PET scanners: updated with sensitivity

In the literature, sensitivity values for a mouse-sized region are listed for only three small animal PET systems: the Albira, the Inveon, and the nanoScan. For a rat-sized object, a value can be found only for Mediso's system.

The Truth Lies in the Details.


  1. Edward J. Huth, How to Write and Publish Papers in the Medical Sciences (Williams & Wilkins, 1990).
  2. National Electrical Manufacturers Association. NEMA Standards Publication NU 4-2008: Performance Measurements of Small Animal Positron Emission Tomographs. Rosslyn, VA: National Electrical Manufacturers Association; 2008.
  3. Claudia Kuntner and David B. Stout, “Quantitative Preclinical PET Imaging: Opportunities and Challenges,” Biomedical Physics 2 (2014): 12, doi:10.3389/fphy.2014.00012. http://journal.frontiersin.org/Journal/10.3389/fphy.2014.00012/full
  4. Istvan Szanda et al., “National Electrical Manufacturers Association NU-4 Performance Evaluation of the PET Component of the NanoPET/CT Preclinical PET/CT Scanner,” Journal of Nuclear Medicine: Official Publication, Society of Nuclear Medicine 52, no. 11 (November 2011): 1741–47, doi:10.2967/jnumed.111.088260. http://jnm.snmjournals.org/content/52/11/1741.long
  5. Kálmán Nagy et al., “Performance Evaluation of the Small-Animal nanoScan PET/MRI System,” Journal of Nuclear Medicine, October 1, 2013, jnumed.112.119065, doi:10.2967/jnumed.112.119065. http://jnm.snmjournals.org/content/early/2013/08/26/jnumed.112.119065
  6. T. J. Spinks et al., “Quantitative PET and SPECT Performance Characteristics of the Albira Trimodal Pre-Clinical Tomograph,” Physics in Medicine and Biology 59, no. 3 (February 7, 2014): 715, doi:10.1088/0031-9155/59/3/715. http://iopscience.iop.org/0031-9155/59/3/715
  7. Qinan Bao et al., “Performance Evaluation of the Inveon Dedicated PET Preclinical Tomograph Based on the NEMA NU-4 Standards,” Journal of Nuclear Medicine 50, no. 3 (2009): 401–8. http://jnm.snmjournals.org/content/50/3/401.short
  8. Andrew L. Goertzen et al., “NEMA NU 4-2008 Comparison of Preclinical PET Imaging Systems,” Journal of Nuclear Medicine 53, no. 8 (2012): 1300–1309. http://jnm.snmjournals.org/content/53/8/1300.short
  9. Stephen Adler, Jurgen Seidel, and Peter Choyke, “NEMA and Non-NEMA Performance Evaluation of the Bioscan BioPET/CT Pre-Clinical Small Animal Scanner,” Society of Nuclear Medicine Annual Meeting Abstracts 53, no. Supplement 1 (May 1, 2012): 2402. http://jnumedmtg.snmjournals.org/cgi/content/meeting_abstract/53/1_MeetingAbstracts/2402
  10. Ken Herrmann et al., “Evaluation of the Genisys4, a Bench-Top Preclinical PET Scanner,” Journal of Nuclear Medicine, July 1, 2013, doi:10.2967/jnumed.112.114926. http://jnm.snmjournals.org/content/early/2013/04/29/jnumed.112.114926
  11. F. Sánchez et al., “Small Animal PET Scanner Based on Monolithic LYSO Crystals: Performance Evaluation,” Medical Physics 39, no. 2 (2012): 643, doi:10.1118/1.3673771. http://link.aip.org/link/MPHYA6/v39/i2/p643/s1&Agg=doi