The future opportunities of a Time Browser ecosystem are vast.
Imagine building an ‘event’ based on a group of users’ proximity: they were all present at a location, all recorded a talk given by someone, and later recorded their own perspectives on the break-out group dialog. Pictures of diagrams, 3D scans, notes and more come together to form a rich and connected ‘event space’. This event could join up with other events on the other side of the planet and stay connected. At any point later, anyone can refer to any part of the event, either as a single data point or as a wide shot of everything else that was going on.

Beyond the synchronised audio and video streams and the still pictures of people and diagrams, imagine further sensors being integrated to add new dimensions through which to explore our timeline. Further out still, picture powerful computer systems with powerful AI analysing our streams to show how the dialog unfolded before an invention or major breakthrough, and how external factors such as the weather, traffic conditions on the way to the office, or a myriad of other influences may have shaped the outcome. Audio and video can be analysed for the emotions of the speaker, and everything will be tagged in the uniform time stream.

Imagine the discourses of great philosophers being recorded like this: in the future, the system could help us understand much of the context of what they said and analyse it from multiple perspectives. Imagine school children being able to fully experience the moment Abraham Lincoln delivered the Gettysburg Address, or to move around inside Socrates’ dialogues.