Q&A with Larry York: APPL lab uses robotics, AI to advance plant science

Q: What does APPL enable that we can’t do now? 

A: The current state of the art in plant science for measuring plants can be as simple as growing plants for 30 days, cutting off the shoot, drying it and weighing it, yielding one measurement at one time point. APPL, by contrast, can provide hundreds of different measurements of living plants over that same growth period, yielding multiple variables simultaneously. Images are processed to produce numeric traits that can be interpreted, and some of these traits have direct manual analogs, such as height. Other measurements, like leaf temperature, are thought to relate to water use and so can serve as proxies, but they require extensive validation against ground-truth measurements. At APPL, we take great care to ensure measurements are properly validated.
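As a rough illustration of how an image becomes a numeric trait, the sketch below estimates plant height in pixels from a binary segmentation mask of a side-on image. This is a hypothetical, minimal example, not APPL's actual pipeline; a real workflow would first segment the plant from the background and convert pixels to physical units via calibration.

```python
import numpy as np

def plant_height_pixels(mask: np.ndarray) -> int:
    """Estimate plant height in pixels from a binary side-view mask
    (True where a pixel belongs to the plant)."""
    rows = np.flatnonzero(mask.any(axis=1))  # image rows containing plant pixels
    if rows.size == 0:
        return 0
    return int(rows[-1] - rows[0] + 1)       # span from topmost to bottommost row

# Toy 5x4 side-view mask: the plant occupies rows 1 through 3
mask = np.array([
    [0, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
], dtype=bool)
print(plant_height_pixels(mask))  # 3
```

Traits like projected area or volume follow the same pattern: summarize the segmented pixels (count them, fit a bounding shape) and then validate against manual ground-truth measurements.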

In addition to obvious but important traits such as plant height, area and volume, we can calculate less common measurements like plant shape and mass distribution. Using color imaging, we can estimate plant greenness as it relates to health and photosynthesis, and using hyperspectral imaging across 1,000 wavelengths, or false colors, we can compute even more detailed spectral indices related to plant growth, water content, nitrogen content and chlorophyll content. In the future, we expect to use our imagery to estimate additional chemical properties, such as lignin and cellulose. Using imagery of fluoresced light, we can estimate aspects of the plant's ability to turn sunlight into food.
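One widely used spectral index of the kind described above is NDVI (normalized difference vegetation index), computed from reflectance in the red and near-infrared bands. The band choices (670 nm and 800 nm) and array shapes below are illustrative assumptions, not details of APPL's instruments:

```python
import numpy as np

def ndvi(cube: np.ndarray, wavelengths: np.ndarray) -> np.ndarray:
    """Compute NDVI = (NIR - red) / (NIR + red) per pixel from a
    hyperspectral reflectance cube shaped (rows, cols, bands)."""
    red = cube[:, :, np.argmin(np.abs(wavelengths - 670.0))]  # band nearest 670 nm
    nir = cube[:, :, np.argmin(np.abs(wavelengths - 800.0))]  # band nearest 800 nm
    return (nir - red) / (nir + red + 1e-9)  # small epsilon avoids division by zero

# Toy one-pixel cube with two bands: healthy leaves reflect little red, much NIR
wl = np.array([670.0, 800.0])
cube = np.array([[[0.1, 0.5]]])
print(float(ndvi(cube, wl)[0, 0]))  # about 0.667
```

Indices for water, nitrogen or chlorophyll content follow the same normalized-difference or ratio pattern, just with different wavelength pairs chosen for sensitivity to those constituents.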

We are also working to streamline image and data analysis to provide near-real-time data and insights to our collaborators. This includes deploying a new analytics server that provides the computational power to quickly analyze thousands of images in parallel, with 256 processor cores, more than 1 TB of RAM and two powerful NVIDIA graphics cards. A single hyperspectral image can be 500 MB, so the ability to process data quickly is important.
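Analyzing thousands of images in parallel on a many-core server typically follows a simple map pattern: one worker process per batch of images. The sketch below shows the idea with Python's standard library; the `analyze` function is a hypothetical stand-in for real trait extraction, not APPL's software.

```python
from concurrent.futures import ProcessPoolExecutor

def analyze(image_id: int) -> tuple[int, float]:
    """Hypothetical per-image trait extraction (e.g. projected leaf area).
    Real work would load and process the image; this is a placeholder."""
    return image_id, float(image_id) * 0.5

if __name__ == "__main__":
    ids = range(8)  # in practice, a list of image paths
    # On a 256-core server, max_workers would be scaled up accordingly
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = dict(pool.map(analyze, ids))
    print(results[6])  # 3.0
```

Because each image is independent, throughput scales nearly linearly with cores until disk or memory bandwidth becomes the bottleneck, which matters when individual hyperspectral images run to hundreds of megabytes.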

At the same time, we are working to provide FAIR data to customers using new cloud computing capabilities outside APPL. In one project, we are working to automatically use APPL data in simulation models of leaf photosynthesis and in automatic genetic analysis and genomic prediction. That project is part of ORNL’s INTERSECT initiative, advancing autonomous experiments that leverage advanced computing, scientific instruments and facilities.

Q: What have been some interesting applications of APPL so far?

A: APPL was used to confirm the discovery of a gene that significantly increases biomass in poplar trees. APPL measured leaf area index from 3D laser scans, comparing poplars carrying the gene with controls. In another project, scientists were able to discern changes in leaf size and shape in plants colonized by a bacterium believed to confer heat stress tolerance. In poplar, genetic analysis has uncovered hundreds of genetic regions associated with traits measured in APPL over time, leading to a new way to think about the influence of plant genetics over the course of a plant's development. Additional work has explored how soil fungi colonize plant roots and influence their growth over time.

APPL has acquired more than 350,000 images across its multiple imaging modalities over the past couple of years. These images are being used by ORNL's John Lagergren, associate staff scientist, to develop AI tools such as vision transformers that automatically detect and quantify individual leaves. This will allow a better understanding of how plant function depends on leaf location and size, for example. Just as AI is transforming many aspects of society, its influence on plant science is expected to be immense.

This Oak Ridge National Laboratory news article "Q&A with Larry York: APPL lab uses robotics, AI to advance plant science" was originally found on https://www.ornl.gov/news
