Aerial imagery and light detection and ranging (lidar) are extensively used in forestry to map canopy cover, tree condition, and other variables. Sensors have traditionally been mounted on aircraft or UAVs, but many low-cost mobile mapping systems for forest inventories (e.g. using smartphones or tablets) have recently been developed. For example, the Apple iPad Pro and iPhone 12 Pro are equipped with both a camera and a lidar sensor. New projects evaluating these sensors for rapid and inexpensive data acquisition are emerging, and this project aims to contribute to that effort through a collaboration with Professor Kanazawa (UC Berkeley, EECS Department), an expert in computer vision, computer graphics, and machine learning. We plan to evaluate mobile mapping solutions by focusing on several key trees on the Berkeley campus that are part of the RCNR Tree Trail, a historic guided tour of the campus trees that includes several key species displaying numerous tree forms. Our goal is to create a “Digital Twin” of selected trees, and thus contribute to their description, while testing several cutting-edge machine learning algorithms that use point clouds from: 1) overlapping imagery, and 2) lidar. The undergraduate who works with us will get a chance to learn data science as well as ecology.
Work with Kelly and Taylor, along with a collaborator in EECS (Kanazawa), to test various algorithms for producing virtual trees from 1) imagery and 2) terrestrial lidar.
The process will involve: 1) collecting imagery and terrestrial lidar with iPhones for a selection of trees in the RCNR Tree Trail database; 2) testing a number of machine learning methods to reconstruct each tree; and 3) validating the products against existing field data from the Tree Trail database.
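To give a flavor of step 3, validation could compare simple structural metrics derived from a reconstructed point cloud against field measurements. The sketch below is purely illustrative, not the project's actual pipeline: the metric definitions, the synthetic point cloud, and the field-measured height are all assumptions made up for the example.

```python
import numpy as np

def tree_metrics(points):
    """Estimate basic structural metrics from an (N, 3) point cloud of a
    single tree, with x, y in metres and z = height above ground."""
    z = points[:, 2]
    height = z.max() - z.min()  # total tree height (m)
    # Crown spread: maximum horizontal extent of points in the upper half
    # of the tree (a simple illustrative definition, not a standard one).
    crown = points[:, :2][z > z.min() + 0.5 * height]
    spread = crown.max(axis=0) - crown.min(axis=0)
    return {"height_m": float(height), "crown_diameter_m": float(spread.max())}

# Synthetic stand-in for a reconstructed cloud: 5000 random points in a
# 6 m x 6 m x 18 m box (real clouds would come from imagery or lidar).
rng = np.random.default_rng(0)
cloud = rng.uniform([-3.0, -3.0, 0.0], [3.0, 3.0, 18.0], size=(5000, 3))
metrics = tree_metrics(cloud)

# Hypothetical field measurement from the Tree Trail database
# (placeholder value, not real data).
field_height_m = 18.0
height_error_m = abs(metrics["height_m"] - field_height_m)
```

In practice, validation would use metrics actually recorded in the Tree Trail database (e.g. height, diameter at breast height) and report the error between the digital twin and the field measurements.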
Ability to code in Python; some background in computer science; ability to capture images and data outdoors; interest in the natural world.