Professional cameras offer tremendous image quality, but they lag behind smartphones in the metadata they encode; notably, most cameras have no GPS. Fortunately, we carry phones with us and often take pictures on them as well as on our pro cameras.
Our photo libraries contain the information needed to infer where we were when untagged photos were taken, so why don't we act on that inference to improve the searchability of large photo libraries?
After my recent travels, I have found that searching by location often fails to surface photos from my camera, which results in me showing people inferior photos from my phone.
1. Go through the Apple Photos library.
2. Select a date range and a camera serial number (“I brought this with me”).
3. Place all such photos on a timeline alongside geotagged photos from a specific device, i.e. a phone marked “I had this with me”.
4. Based on the times and locations of surrounding photos, guess the location of each photo that lacks one. If on a flight, use the flight number to find the position along the flight path.
5. Export the location timeline for use in a Lightroom plugin or elsewhere.
The simplest location interpolation method is the naive approach: take the two known (timestamp, location) pairs immediately before and after the untagged photo's timestamp, compute how far through that time gap the untagged photo falls (e.g. 20% of the way between the tagged photos), and assume its position lies exactly 20% along the straight line between the two points.
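Sketched in Swift with CoreLocation types (and ignoring edge cases like longitude wraparound at the antimeridian), the naive solve is just a time-weighted average of the two anchor coordinates:

```swift
import CoreLocation

/// Naive time-based point-to-point interpolation: a time-weighted average
/// of the two nearest geotagged anchors. Ignores antimeridian wraparound,
/// which is fine for a sketch.
func naiveInterpolate(before: (time: Date, coord: CLLocationCoordinate2D),
                      after: (time: Date, coord: CLLocationCoordinate2D),
                      target: Date) -> CLLocationCoordinate2D {
    let gap = after.time.timeIntervalSince(before.time)
    // e.g. 0.2 when the untagged photo sits 20% of the way through the gap
    let t = gap > 0 ? target.timeIntervalSince(before.time) / gap : 0
    return CLLocationCoordinate2D(
        latitude: before.coord.latitude + t * (after.coord.latitude - before.coord.latitude),
        longitude: before.coord.longitude + t * (after.coord.longitude - before.coord.longitude))
}
```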
In a city, we can get the shortest (and most probable) real path between two locations. First we infer walking or driving from the average velocity between the two nearest known datapoints, then do the same time-based interpolation, but along the path returned by a maps service instead of the straight line joining the two known points. This can be far more accurate than the naive point-to-point solve.
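A sketch of the path-aware variant, assuming the routed path comes back from a service like MapKit's MKDirections as an ordered list of points. The 2.5 m/s speed threshold is a guess on my part, not a calibrated value:

```swift
import CoreLocation

/// Guess the transport mode from average speed between the two anchors.
/// The 2.5 m/s cutoff (roughly a brisk walk) is an assumption.
func inferredTransportMode(before: (time: Date, loc: CLLocation),
                           after: (time: Date, loc: CLLocation)) -> String {
    let seconds = max(after.time.timeIntervalSince(before.time), 1)
    let metresPerSecond = after.loc.distance(from: before.loc) / seconds
    return metresPerSecond < 2.5 ? "walking" : "driving"
}

/// Walk `fraction` (0...1) of the cumulative distance along a routed path,
/// instead of along the straight line between the endpoints.
func point(onPath path: [CLLocation], fraction: Double) -> CLLocationCoordinate2D? {
    guard path.count > 1 else { return path.first?.coordinate }
    let legs = (1..<path.count).map { path[$0].distance(from: path[$0 - 1]) }
    var remaining = fraction * legs.reduce(0, +)
    for (i, leg) in legs.enumerated() {
        if remaining <= leg, leg > 0 {
            let t = remaining / leg
            let a = path[i].coordinate, b = path[i + 1].coordinate
            return CLLocationCoordinate2D(
                latitude: a.latitude + t * (b.latitude - a.latitude),
                longitude: a.longitude + t * (b.longitude - a.longitude))
        }
        remaining -= leg
    }
    return path.last?.coordinate
}
```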
It is feasible to build an extensive dataset for training a "geoguessing" model for photos from a large collection of geotagged photos, plus their metadata and any other signals that could narrow the location down more precisely than the time-based point-to-point algorithm. An example input record would be: prior photo, next photo, timestamp, pixels, maps directions, and 35mm-equivalent focal length.
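As a sketch, one training record could look like the struct below. The field names are hypothetical, just mirroring the example inputs listed above:

```swift
import Foundation
import CoreLocation

/// Hypothetical training record for the geoguessing model. The label is the
/// known location of a geotagged photo; at inference time it is what we solve for.
struct GeoguessExample {
    let priorPhoto: URL                // nearest earlier geotagged photo
    let nextPhoto: URL                 // nearest later geotagged photo
    let pixels: URL                    // the photo being located
    let timestamp: Date
    let mapsDirections: String         // routed path between the two anchors
    let focalLength35mm: Double        // constrains the field of view
    let label: CLLocationCoordinate2D  // ground-truth location
}
```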
This technique is likely to work very well for famous landmarks like the Eiffel Tower or the Golden Gate Bridge, but it is also susceptible to false positives, e.g. mistaking a powerline tower for the Eiffel Tower. It is unclear whether this is the optimal approach. Its potential lies in the fact that if it successfully recognizes a landmark, the focal length and the viewing angle of the landmark would theoretically allow an extremely precise location solve. Some general solving ability may be possible along similar lines if, for instance, the model learns the geometry of buildings and can infer the untagged photo's position relative to views of the same building in the tagged photos.
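To illustrate the "extremely precise solve" idea with a back-of-the-envelope pinhole model: if the landmark's real height and the fraction of the frame it fills are known, the focal length gives the camera's distance from it. This is a rough sketch, not a calibrated photogrammetric solve:

```swift
/// Pinhole range estimate: distance ≈ realHeight * focalLength / imageHeight,
/// where imageHeight is the slice of the 24 mm full-frame sensor the landmark spans.
func estimatedDistanceMetres(realHeightMetres: Double,
                             focalLength35mm: Double,
                             frameFractionFilled: Double) -> Double {
    let sensorHeightMM = 24.0  // height of a full-frame ("35mm equiv.") sensor
    return realHeightMetres * focalLength35mm / (sensorHeightMM * frameFractionFilled)
}

// The ~300 m Eiffel Tower filling the frame at 50 mm puts the camera ~625 m away:
// estimatedDistanceMetres(realHeightMetres: 300, focalLength35mm: 50, frameFractionFilled: 1)
```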
One of the key problems in this app's practical use is erroneous data. If your friend sends you a photo tagged on the other side of town, suddenly your geotags will be way off. If you have already run geotagger, it may even self-interfere, interpolating locations from previously interpolated locations. To solve this, we first filter source photos by camera serial number (the user picks which device is their "location source"), and then exclude photos carrying a custom EXIF tag stating that their location is interpolated.
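Once the metadata is in hand, the trust filter itself is simple. `PhotoMeta` here is a stand-in for the real data model; the `auto_geotagger_tag_source` key is this app's own custom tag (see the ideal query further down):

```swift
import Foundation
import CoreLocation

/// Minimal metadata record for filtering; a sketch, not the real data model.
struct PhotoMeta {
    let serialNumber: String?
    let exif: [String: String]
    let location: CLLocationCoordinate2D?
    let time: Date
}

/// Keep only photos that are safe to use as location sources.
func locationSources(in photos: [PhotoMeta], trustedSerial: String) -> [PhotoMeta] {
    photos.filter {
        $0.location != nil
            && $0.serialNumber == trustedSerial              // only the user's chosen device
            && $0.exif["auto_geotagger_tag_source"] == nil   // never chain off our own interpolations
    }
}
```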
Sparse data could be addressed by integrating with a service like Google Maps to pull a richer record of the user's location history, or by allowing some level of manual input.
A GPX file is the industry standard for exchanging location data, so it is the natural export format.
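Emitting GPX is simple enough that a sketch fits here: one GPX 1.1 track point per solved photo location, with points assumed to be pre-sorted by time:

```swift
import Foundation

/// A solved (location, time) pair destined for the exported track.
struct TrackPoint { let lat: Double; let lon: Double; let time: Date }

/// Minimal GPX 1.1 track writer; a sketch, with no XML escaping needed
/// since all values are numeric or ISO 8601 dates.
func gpx(for points: [TrackPoint]) -> String {
    let iso = ISO8601DateFormatter()
    let trkpts = points.map {
        "      <trkpt lat=\"\($0.lat)\" lon=\"\($0.lon)\"><time>\(iso.string(from: $0.time))</time></trkpt>"
    }.joined(separator: "\n")
    return """
    <?xml version="1.0" encoding="UTF-8"?>
    <gpx version="1.1" creator="auto-geotagger" xmlns="http://www.topografix.com/GPX/1/1">
      <trk><trkseg>
    \(trkpts)
      </trkseg></trk>
    </gpx>
    """
}
```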
The ability to write geotags into Adobe XMP sidecar files would let this application benefit photographers who use software like Lightroom or Darktable to manage their photo libraries.
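XMP stores GPS in the EXIF GPSCoordinate string format ("DD,MM.mmmmN") rather than as signed decimal degrees, so the main work is the conversion; a sketch:

```swift
/// Convert signed decimal degrees to the XMP GPSCoordinate format used by
/// exif:GPSLatitude / exif:GPSLongitude in sidecar files.
func xmpGPSCoordinate(_ value: Double, positive: String, negative: String) -> String {
    let hemisphere = value >= 0 ? positive : negative
    let magnitude = abs(value)
    let degrees = Int(magnitude)
    let minutes = (magnitude - Double(degrees)) * 60
    return "\(degrees)," + String(format: "%.4f", minutes) + hemisphere
}

// xmpGPSCoordinate(48.8584, positive: "N", negative: "S")  ->  "48,51.5040N"
// xmpGPSCoordinate(2.2945,  positive: "E", negative: "W")  ->  "2,17.6700E"
```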
The prototype of this app has the following limitations that prevent me from shipping it to the public:
iCloud Photos library queries do not allow filtering by device or by arbitrary metadata. This prevents efficiently downloading only the necessary known-data photos for this application.
Ideally, the following filter would be used: "photos from device with serial number {...} and excluding exif field 'auto_geotagger_tag_source'".
Right now, the application downloads all photos within the date range of the untagged photos and does the remaining filtering client-side. For large batches or wide date ranges, this means long loading times and very high storage and memory usage. The application only needs to query metadata, but so far I haven't been able to implement that without loading the pixel content of the photos.
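For reference, assuming the prototype sits on Apple's PhotoKit, the date-range fetch looks like the sketch below. PHAsset exposes creationDate and location as queryable metadata, but the camera serial number lives in EXIF, which only comes back with the full image data, and that is exactly what forces the heavy downloads:

```swift
import Photos

/// Fetch image assets in a date range; this part is metadata-only.
/// Serial-number filtering still requires downloading each original
/// (e.g. via PHAssetResourceManager) just to read its EXIF.
func candidateAssets(from start: Date, to end: Date) -> PHFetchResult<PHAsset> {
    let options = PHFetchOptions()
    options.predicate = NSPredicate(format: "creationDate >= %@ AND creationDate <= %@",
                                    start as NSDate, end as NSDate)
    options.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: true)]
    return PHAsset.fetchAssets(with: .image, options: options)
}
```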
If you are aware of how to solve this, please do not hesitate to reach out!