Exploring Your Files in the Docs Repository

Overview

Once your files are uploaded, all recordings, transcripts, highlights, themes and analytics are available in the Docs Repository. This article explains what each section shows and how to use it.

Files

The Files section shows all uploaded files as thumbnails. Each thumbnail displays the file name and, for video and audio recordings, the duration. Click any file to open it in the Docs Repository.

File Summary Tab

Opening a file shows two tabs at the top: File Summary and Analytics. The File Summary tab is the default view. It contains the file preview and two sub-tabs on the right: File Summary and Transcripts.

For PDFs, Word documents and presentations, the File Summary sub-tab shows the Summary and Highlights panels. The Transcripts sub-tab is available for video and audio files only.

File Summary

The File Summary sub-tab shows an AI-generated overview of the file content. It includes a summary of the main topics covered, followed by structured notes on key points, decisions or discussion areas. Use this to get a quick understanding of a file before going into the full transcript or document content, particularly when reviewing files you did not create yourself.

Transcripts

Video and audio files only

The Transcripts sub-tab shows the full timestamped record of everything said during the recording. Each block of speech is attributed to a speaker with a play button that jumps to that exact point in the video. The right panel shows all highlights created for this file.

Three options in the toolbar give you additional controls.

  • Translate: Translates the transcript into more than 100 languages. Use this when participants spoke in a different language from your working language, or when sharing the transcript with a team that works in another language.
  • Speaker Label: Opens the Speakers List panel. Decode detects how many distinct speakers are present and lists each one with a sample audio clip so you can identify who is who. Assign each speaker a role (Moderator or Participant) and a display name. Speaker labels are used in the Analytics tab to calculate speaking time and attribute emotional signals correctly. Complete this step before reviewing Analytics to ensure the data is accurate.
  • Edit: Allows you to correct errors in the auto-generated transcript. Use this when words have been misheard or incorrectly transcribed in a way that affects the meaning of what was said.

To create a highlight, select any text in the transcript. A prompt appears where you can search for an existing tag or create a new one. Once a tag is applied, that moment becomes a highlight linked to the recording at that timestamp. The highlight appears in the right panel and in the Highlights section of the left navigation. Tags feed into the Themes section.

Analytics Tab

The Analytics tab shows quantitative data generated from the file. It is accessible for all file types, but the data available differs depending on whether the file is a video or audio recording or a document.

File Details

File Details shows the key metadata for the file. Use this to confirm you are reviewing the correct file and to check how much tagging has been done on it. 

For video and audio files:

  • File name: The name given to the file on upload.
  • Number of persons detected: How many participants were identified in the recording.
  • Number of tags: The total number of tags applied to the file.
  • Type: The file format, for example MP4 or MOV.
  • Uploaded by: The name of the team member who uploaded the file.
  • Last edited: The date the file was most recently edited.
  • Total duration: The full length of the recording.

For PDFs, Word documents and presentations:

  • File name: The name given to the file on upload.
  • Number of pages: Total number of pages in the document.
  • Number of tags: The total number of tags applied to the file.
  • Type: The file format, for example PDF or PPTX.
  • Uploaded by: The name of the team member who uploaded the file.
  • Last edited: The date the file was most recently edited.

Emotion AI Metrics

Video and audio files only 

Shows the Positive and Negative emotion percentages for the file as progress bars. Positive emotion reflects moments where the participant displayed positive facial expressions. Negative emotion reflects moments of frustration, confusion or displeasure. Use these percentages as a starting point before going into the Emotion Phase Metrics to understand when these emotions occurred during the recording.

Speaker Metrics

Video and audio files only 

A chart showing the proportion of speaking time attributed to each person in the recording. Each speaker is shown as a separate colour-coded segment. In a well-run session, participants typically speak more than the moderator. Use this chart to check the balance of speaking time. If the moderator's share is significantly higher than the participants', it may indicate the session was led too heavily, which can affect the quality of participant responses.

Speaker Phase Metrics

Video and audio files only

A timeline showing when each speaker was active during the recording. Each person has their own row, and their speaking moments are marked across the full duration of the file. You can view this at 1-minute, 2-minute or 5-minute intervals. Use this to understand the flow of conversation, identify stretches where only the moderator was speaking, or check whether a participant was silent for an extended period.

Emotion Phase Metrics

Video and audio files only

A timeline showing when Positive, Neutral and Negative emotions occurred throughout the recording. Each emotion type has its own row. Switch between 1-minute, 2-minute or 5-minute intervals depending on how granular you need the view to be. Use this alongside the transcript to connect emotional reactions to specific moments in the conversation. If negative emotion clusters in a particular part of the recording, go to the transcript at that timestamp to see what was being discussed.

Text Analytics

Video and audio files only 

A word cloud showing the most frequently used terms across the transcript. Words that appear larger were used more often. Use this to understand the language participants naturally reach for when discussing the topic. Terms that appear prominently across multiple files reflect the vocabulary your audience associates with the subject, which is useful when writing survey questions, product copy or discussion guides for follow-up research.

Highlights

The Highlights section collects every tagged moment from every file in the Docs Repository into a single view. Rather than going back into individual files to find a specific moment, you can see all tagged evidence in one place and navigate across files efficiently.

Each highlight card shows the following.

  • File name: Which file the highlight came from.
  • Speaker: Who was speaking at that moment. Shown for video and audio files only.
  • Timestamp: The exact point in the recording where the moment occurred. Shown for video and audio files only.
  • Document content or transcript excerpt: The text selected when the highlight was created.
  • Tags: The tags applied to the moment.
  • Detected emotion: Whether the emotion at that moment was Positive, Neutral or Negative, based on facial coding. Shown for video and audio files only.

Themes

The Themes section is where you organise and analyse the tags created across all files. It shows all tags grouped into tag groups, along with the emotion data associated with each group. This section works best when tagging has been applied consistently across all files.

Tag Groups

All tags appear under a Default group automatically. You can create additional groups by clicking New Group in the top right. Tag groups are useful for organising tags by topic, research question or any other category that is meaningful to your files. To move a tag into a different group, drag and drop it from one group to another.

Each tag row shows the tag name and the number of times it was applied across all files. This count gives you a quick sense of how frequently a specific topic came up.

Emotion Breakdown per Group

Hovering over a tag group name reveals the emotion breakdown for that group: the proportion of Positive, Negative and Neutral emotion across all moments tagged within it. This tells you not just how often a topic came up, but how participants felt when it did.

Analytics

Video and audio files only

The Analytics section brings together AI-generated insights from across all your files. It covers four areas.

AI Summary

The AI Summary generates an overview of all files in the Docs Repository. It covers the main topics discussed across all recordings, the overall emotional sentiment, and what the data suggests is working well and what needs attention. The summary also shows the total number of participants, sessions and combined duration. Use Show More to expand the full summary and Show Less to collapse it.

Four metrics appear alongside the summary.

Positive Emotion %

The percentage of moments across all files where participants displayed positive facial expressions. Use this alongside the Negative Emotion % to understand the overall emotional tone before going into specific themes.

Negative Emotion %

The percentage of moments across all files where participants displayed negative facial expressions such as confusion, frustration or displeasure. Use this alongside the Emotion Phase Metrics in the individual file Analytics tab to identify exactly when and where these reactions occurred.

Engagement Score

A measure of how actively participants engaged during the recordings, based on facial expressions. It reflects the intensity and variety of facial movement captured throughout. A high score indicates participants were visibly expressive and reactive. A low score may mean participants showed minimal facial expression, which can happen due to poor camera quality, insufficient lighting or sessions that did not prompt strong visible reactions. Shown as a percentage.

Signal Score

A composite quality rating out of 10 that tells you how reliable the emotional data from your files is overall. Click the expand icon to open a detailed breakdown across three factors: Facial Signal (quality of facial data captured), Voice Signal (clarity of audio) and Text Clarity (accuracy of the auto-generated transcript). Each factor is rated individually as Low, Fair, Good, Very Good or Excellent. A score of 6 to 7 is rated Good. A score below 4 is Low, which means the emotional data has significant reliability issues. If your Signal Score is low, treat emotion data with caution but continue to rely on transcripts and highlights for your findings.

Theme Analysis Panel

The Theme Analysis Panel shows AI-identified patterns from across all your files as individual theme cards. Decode analyses the transcripts and emotional data from every file and groups recurring patterns into named themes automatically. Each theme card shows a theme name, a one-line description of the pattern, the number of files it appeared in, and a bar indicating the strength of the emotional signal associated with it.

The file count on each card is the most important indicator when assessing a theme. A theme that appeared in only one file reflects a single instance. A theme that appeared across multiple files independently is a recurring pattern. Always check the file count before deciding how much weight to give a theme in your findings.

Use the tabs and filters at the top of the panel to narrow the themes shown.

  • All Themes: Shows every theme Decode identified across all files.
  • Pain Points: Shows only themes that reflect difficulty, frustration or problems.
  • Opportunities: Shows themes that suggest areas where the experience worked well or where there is potential for improvement.
  • Filter: Emotion. Narrows themes by the dominant emotion detected within them. Use this when you want to find specifically where participants expressed confusion, frustration or positive reactions.
  • Filter: File. Narrows themes to a single file. Use this when you want to understand one specific recording in depth.

Click the expand icon on any theme card to open the detailed view. The detail view shows the theme name, the dominant emotion detected, and supporting video clips with timestamps. Each clip shows a breakdown of the participant's Expression, Tone and Language at that moment.

Question-Wise Insights

If your files included structured questions, the Question-Wise Insights section shows how participants responded to each question across all files. Each question has its own card with an AI Summary of the collective responses. If no response was recorded for a question, the card will indicate the question was skipped, timed out or left unanswered.

Click the expand icon on any question card to open the detailed view. The left panel shows the full list of questions. Click any question on the left to see its results on the right. The right panel shows the AI Summary for that question at the top, followed by the full verbatim response from each participant. Read the summary first to understand the overall pattern, then read the individual responses to see the full range of what was said.

Top Opportunity Areas and Strategic Recommendations

At the bottom of the Analytics section, Decode generates two AI-powered outputs based on everything collected across all files.

Top Opportunity Areas

A list of the areas where feedback points to the greatest potential for improvement. Each opportunity area is drawn from patterns identified across themes, pain points and participant responses.

Strategic Recommendations

A list of specific, actionable suggestions for how the team could respond to the opportunity areas. These are generated by Decode based on the full body of evidence from the files and are intended to give the team a starting point for planning next steps.
