The Time Browser Project

The Time Browser Project enlivens our record of the past, turning it into an immersive, flexible, interactive experience. It saves us from being overwhelmed by an ever-growing pile of dissociated data, gives us a more direct and compelling connection to our entire history of past events, and provides a deeper foundation from which to sculpt a better future.

With the Time Browser, the past does not fade, but becomes increasingly clear, as we develop ever more powerful ways to record the present and interact with the resulting record.

Specific Goals:

•  Record Events With Real-World Time-Codes that correspond to actual clock time, not just a count of time from the start of the document.
•  Annotate The Recordings, including transcription and comprehensive tagging, both automatic and manual.
•  Provide Powerful & Flexible Ways to Interact With The Resulting Timeline

 

Time

Information has, by definition, a myriad of potential dimensions: topics, tags, attributes, whatever ‘waypoints’ we use to refer to the information landscape. Taking inspiration from GPS, which gave the physical world a precisely navigable coordinate system that enhances our lives, the Time Browser takes ‘time’ as the defining landscape: records come together on a real-world timeline (rather than each separate piece of record carrying only an internal time-code that does not relate directly to the real-world timeline), and users can interact with the data along that timeline.
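As a rough illustration only, here is a minimal sketch of what a record anchored to the real-world timeline might look like; the TypeScript names and fields (TimeAnchoredRecord, startEpochMs and so on) are assumptions made for this sketch, not part of any Time Browser specification.

```typescript
// A minimal sketch of a record anchored to the real-world timeline.
// All names and fields here are hypothetical, for illustration only.

interface TimeAnchoredRecord {
  id: string;                    // unique identifier for this recording
  kind: "audio" | "video" | "image" | "note";
  startEpochMs: number;          // real-world start time, in ms since the Unix epoch (UTC)
  durationMs: number;            // length of the recording
  sampleRate?: number;           // samples per second, for audio/video media
  tags: string[];                // manual and automatic annotations
}

// Any offset inside the recording maps directly onto a real-world instant.
function realWorldTime(record: TimeAnchoredRecord, offsetMs: number): Date {
  return new Date(record.startEpochMs + offsetMs);
}
```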

 

Benefits

“Thus far we seem to be worse off than before – for we can enormously extend the record; yet even in its present bulk we can hardly consult it.”
Vannevar Bush 1945

This was true in 1945 and it is true today: It is easier to create information than to consult it.

The Time Browser aims to make these vast amounts of information usefully accessible by making the information more ‘interact-able’, starting with how it is recorded and captured. By ensuring that the information ‘maps’ onto the real-world timeline, that speech is turned into searchable text, and that meta-information is added, the stored record becomes powerfully interactive.

 

The Current State of the Art

Today we cannot even move to a specific time (say 2:30 on Monday the 11th) in a recording, because the recording ‘document’ carries no real-world timecode internally. Such a timecode is a prerequisite for many of the interactions we see in sci-fi movies, where the hero moves through time and documents in fluid, interactive ways.
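To make the missing capability concrete, here is a hedged sketch of the seek operation a real-world timecode would enable, assuming a recording that carries an accurate real-world start time (startEpochMs is an assumed field name, not an existing API):

```typescript
// Hypothetical sketch: jump to a real-world instant inside a recording whose
// start is anchored to the real-world clock.
function seekTo(
  record: { startEpochMs: number; durationMs: number },
  target: Date
): number | null {
  const offsetMs = target.getTime() - record.startEpochMs;
  if (offsetMs < 0 || offsetMs > record.durationMs) {
    return null; // the requested instant falls outside this recording
  }
  return offsetMs; // offset to hand to the audio/video player, in milliseconds
}

// e.g. jump to 14:30 on Monday the 11th (an illustrative date):
// const offset = seekTo(record, new Date("2018-06-11T14:30:00"));
```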

 

The Immediate Capability Improvement

“What did Emily say about London in our last meeting?” is a reasonable, and reasonably useful, starting point. Asking the system to display every reference Emily has made to London becomes possible, as does seeing what each reference was made in response to and what responses it received.
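A sketch of how such a query might be answered over time-anchored, transcribed segments; the segment shape, the field names and the findReferences helper are illustrative assumptions, not an actual Time Browser API:

```typescript
// Hypothetical transcript segment, anchored to real-world time.
interface TranscriptSegment {
  speaker: string;        // e.g. "Emily"
  text: string;           // transcribed speech
  startEpochMs: number;   // real-world start of the segment
  eventId: string;        // the meeting or event the segment belongs to
}

// "What did Emily say about London?" as a simple filter over segments.
function findReferences(
  segments: TranscriptSegment[],
  speaker: string,
  keyword: string
): TranscriptSegment[] {
  return segments
    .filter(s => s.speaker === speaker &&
                 s.text.toLowerCase().includes(keyword.toLowerCase()))
    .sort((a, b) => a.startEpochMs - b.startEpochMs);
}
```

Because each segment carries a real-world time, the segments immediately before and after a match (what the remark responded to, and what responses it drew) can be found simply by looking at neighbouring times within the same event.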

 

Future Capability Improvements

The future opportunities are truly exciting:

Imagine building an ‘event’ based on a group of users' proximity: they were all present at a location, all recorded a talk given by someone, and later recorded their own break-out group dialogues. Pictures of diagrams, 3D scans, notes and more come together to give a rich and connected ‘event space’. This event could join up with other events on the other side of the planet and stay connected. At any point later, anyone can refer to any part of the event, either as a single data point or as a wide shot of everything else that was going on.

Beyond the synchronised audio and video streams and the still pictures of people and diagrams, imagine further sensors being integrated to add further dimensions along which to explore our timeline. Even further out, picture powerful computer systems with powerful AI analysing our streams to show how the dialogue unfolded before an invention or major breakthrough, and how external factors such as the weather, traffic conditions on the way to the office, or a myriad other influences may have shaped the outcome. Audio and video can be analysed for the emotions of the speaker, and everything will be tagged in the uniform time stream.

Imagine a great philosopher's discourse being recorded like this, so that in the future the system could help us understand much of the context of what they said and analyse it from multiple perspectives. Imagine school children being able to fully experience the moment Abraham Lincoln delivered the Gettysburg Address, or to move around inside Socrates' dialogues.

 

Doug Engelbart

This work builds on the pioneering work of our mentor Doug Engelbart, who invented much of the modern computer environment guided by a clear vision: “By ‘augmenting human intellect’ we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems.” Douglas Engelbart 1962

 

Components 

There are two components to the Time Browser project:

  • The Time Browser Software Application for recording, processing (speech-to-text and other annotations/tagging) and interacting with the timeline. See the Minimal Actions for V1 & Interface Mockup for V1.
  • The Time Browser Document Specifications: protocols and standards for storing the timeline data, so that anyone can build Time Browser Applications, much as anyone can build web browsers and access the same web data, such as HTML. See the Initial Proposed Document Specifications (draft).
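To make the idea of a shared document format concrete, the sketch below shows what a timeline document could conceivably contain; the structure and field names are purely illustrative assumptions, not the proposed specification itself:

```typescript
// Purely illustrative sketch of a timeline document; not the proposed spec.
const exampleTimelineDocument = {
  format: "time-browser-timeline",    // hypothetical format identifier
  version: "0.1-draft",
  event: {
    id: "weekly-hangout-2018-06-10",
    title: "Weekly Google Hangout",
    startEpochMs: 1528642800000,      // real-world start time (UTC)
  },
  media: [
    {
      kind: "audio",
      uri: "recordings/hangout.wav",
      startEpochMs: 1528642805000,    // anchored to the real-world clock
      sampleRate: 48000,
    },
  ],
  annotations: [
    {
      kind: "transcript",
      startEpochMs: 1528642810000,
      speaker: "Emily",
      text: "Let's plan the London meetup.",
      tags: ["London", "meetup"],
    },
  ],
};
```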

 

Application & Document Standards

The diagram below illustrates the relationship between the Software Application, in the box at the bottom, and the Document Standards on top.

Reading from the top down: different media are added to the Time Browser data ‘soup’, where they are annotated (processed for meta-tagging, speech-to-text and more). The annotated data is then made available to the Time Browser Software Application(s) below, each with its own search/query engine and view engine, all competing for user attention much as web browsers do.
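The same flow can be sketched as a few interfaces, purely to illustrate the separation between capture, annotation and the competing applications; none of these names come from a published specification:

```typescript
// Illustrative sketch of the flow described above: capture, annotate, query.
// None of these interfaces come from a published specification.

interface CapturedMedia {
  uri: string;
  kind: "audio" | "video" | "image" | "note";
  startEpochMs: number;               // real-world anchor for the media
}

interface Annotation {
  startEpochMs: number;
  kind: string;                       // e.g. "transcript", "tag", "emotion"
  payload: unknown;
}

interface Annotator {
  // speech-to-text, meta-tagging and similar processors live here
  annotate(media: CapturedMedia): Promise<Annotation[]>;
}

interface TimeBrowserApplication {
  // each application supplies its own search/query and view engines
  query(fromEpochMs: number, toEpochMs: number): Promise<Annotation[]>;
}
```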

It is important to note that the project aims to be foundational, much as HTTP and HTML are for the web: a foundation for further development by anyone with an interest. It is neither an art project nor a for-profit, closed-IP app.

 

[Figure: flow of media through annotation into the Time Browser applications]


Proposed Stages

Initial Stage

The initial stage will be to build a system which records audio from a web browser and a smartphone, with atomic-clock-level accuracy for the creation time and a real-world synchronisation for every unit of time in the audio file, at the resolution of the audio file's sample rate.
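A small sketch of what real-world synchronisation at sample-rate resolution means in practice, assuming the first sample carries an accurately stamped real-world start time:

```typescript
// Hypothetical sketch: the real-world time of an individual sample, given an
// accurately stamped start time and the audio file's sample rate.
function sampleTimestampMs(
  startEpochMs: number,
  sampleIndex: number,
  sampleRate: number
): number {
  return startEpochMs + (sampleIndex / sampleRate) * 1000;
}

// e.g. at 44,100 Hz, sample 44,100 falls exactly one second after the start:
// sampleTimestampMs(startEpochMs, 44100, 44100) === startEpochMs + 1000
```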

The system will then transcribe the speech in the audio file to text, via a computer or a human transcription service (company), according to user preference.
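One way such a user preference could be accommodated is a pluggable transcription provider; this is a sketch under that assumption, not a committed design:

```typescript
// Illustrative sketch: transcription as a pluggable provider, so a user can
// prefer an automatic engine or a human transcription service.
interface TranscriptionProvider {
  transcribe(
    audioUri: string,
    startEpochMs: number
  ): Promise<{ text: string; startEpochMs: number }[]>;
}

function chooseProvider(
  preference: "computer" | "human",
  automatic: TranscriptionProvider,
  human: TranscriptionProvider
): TranscriptionProvider {
  return preference === "human" ? human : automatic;
}
```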

The system will also provide basic interactions with the resulting text and audio file.

Second Stage

The second stage will see the integration of multiple streams of audio and video, richer interactions, and basic AI capabilities.

Future Stages

Future stages are up for grabs…

 

Challenge

The field, and our project, must meet the challenge of ensuring that this continually increasing information volume provides continually increasing information value.

 

Questions to be Resolved

  • What is an event and how should events be bundled?
  • How should time zones be handled?
  • How should different curated records of events be shared and handled in larger streams?
  • Permissions of what can be shared with whom.
  • Extraction and re-integration of analysis.
  • Weaving of action items, decision statements and other human intentions.
  • How can people and other elements be represented visually in the most exciting and useful way?
  • How can such dialog be converted, within the context it was presented, into specific knowledge objects that align with the larger framework of the project to which those discussions pertain?

 

The Doug Engelbart Connection

The project builds on the work of Doug Engelbart, who was our friend, mentor and great inspiration. Doug Engelbart invented much of the personal computer environment as we know it today, and he did it with what he would call the ‘naive’ goal of augmenting our ability to solve urgent, complex problems collectively. His system was called NLS (oN-Line System), and an important component was the Journal, by which this project is greatly inspired.

We aim to launch the project on the 50th anniversary of Doug Engelbart’s ‘Mother of all demos’, the 9th of December 2018. Doug wrote about his system in ’75:

“I wanted an underlying operational process, for use by individuals and groups, that would help bring order into the time stream of the augmented knowledge workers.”
Doug Engelbart 1975

 

 

Request for Collaboration 

The project is open source and built on open standards, which will be evolved in concert with the industry and academia. We invite you to join us.

We have a weekly chat: Weekly Google Hangout (4pm UK time Sundays)

We are collaboratively designing the initial commands and pitches for funding here: docs.google.com/spreadsheets

The guidelines for the submission: https://www.newschallenge.org/faq

You can read about who we are on the Team Page.