Efficiently generating video content annotations for creative insights using a semi-automated video annotation platform.

My Roles -

๐Ÿ” Market & User Research
๐Ÿ’ผ Product Lead, Roadmaps
๐ŸŽจ End-to-end Product Design

๐Ÿ” Market & User Research
๐Ÿ’ผ Product Lead, Roadmaps
๐ŸŽจ End-to-end Product Design

For -

Tezign -
A startup building AI-empowered creative platforms

With -

Designers, Marketing Team, Creation Team, Software Developers, AI Engineers

Duration -

3 Months
Feb.2021 - May. 2021
Launched in May. 2021

Quick Glance 👉

I interned as a Product Designer / Manager at Tezign, a content-tech startup creating next-gen creative platforms that empower content creation, optimization, and distribution.

I led the end-to-end design process of a semi-automated video annotation platform in an 8-member multi-functional team. This platform was developed to help content strategists and annotators generate product-specific, high-granularity video annotations in an effective and efficient way.

We successfully launched this platform within 3 months, resulting in an 80% reduction in video annotation time costs. Additionally, our platform played a pivotal role in validating the concept of insights-driven video marketing, garnering recognition from our top-tier clients, YSL China and Lancome China 🎉.

Challenge

Why did we need video content metadata, and why did we build the video annotation platform?

Context.

High-granularity video metadata needed for valuable video insights

While videos have become a popular and powerful marketing tool for leading brands, the connection between video content and its performance remains a black box. Our top-tier clients have shown a growing demand for a more detailed understanding of video performance. However, current video insights fail to meet their expectations due to the absence of high-granularity video content metadata.

Problem.

Current tools fail to meet the specialized requirements for marketing video annotation

Our research showed that current annotation platforms fall short of the two critical requirements outlined below:

Discover

How I discovered stakeholders' needs and expectations by facilitating cross-team workshops

Background Research.

Specialized format for marketing video annotations

Through cross-team collaboration with content strategists, marketers, and engineers, Tezign validated the feasibility of a specialized annotation format for marketing videos, aimed at generating scientific video insights. This format includes temporal-level segmentation and element-level structured tagging.
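As a rough illustration, the format pairs time-coded segments with structured tags per content "element." The field names and tag values below are my own assumptions for the sketch, not Tezign's actual schema:

```python
# Hypothetical sketch of the specialized annotation format:
# temporal-level segmentation plus element-level structured tagging.
# All field names and tag values here are illustrative assumptions.
annotation = {
    "video_id": "demo-001",
    "segments": [
        {
            "start_sec": 0.0,          # temporal-level segmentation
            "end_sec": 3.2,
            "tags": {                  # element-level structured tagging
                "visual": ["product-close-up"],
                "audio": ["background-music"],
                "text": ["brand-logo"],
            },
        },
        {
            "start_sec": 3.2,
            "end_sec": 8.5,
            "tags": {
                "visual": ["model-demo"],
                "audio": ["voice-over"],
                "text": [],
            },
        },
    ],
}

# Segments should tile the annotated span without gaps or overlaps.
for a, b in zip(annotation["segments"], annotation["segments"][1:]):
    assert a["end_sec"] == b["start_sec"]
print(len(annotation["segments"]))  # prints 2
```

Structuring tags per layer (visual, audio, text) is what later lets insights queries slice performance by individual content elements.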

Current hybrid workflow

Through background research, I learned about the hybrid workflow content strategists currently use to generate video annotations in this specialized format.

However, this hybrid workflow falls short in both efficiency and validity.

Two areas we could improve

From background research, I identified two areas we could improve:

Interviews w/
Stakeholders.

Understand the current hybrid workflow and the gap with users' expectations

Before diving into building a new platform, I questioned myself:
🤔 Why do users think a new annotation platform is necessary?
🤔 What is the gap between the current condition and users' desired outcome & user experience?

Therefore, I interviewed 3 content strategists and 4 video annotators, from which I identified unmet or unsatisfactory user needs.

💡 A disorganized collaboration workflow is a significant factor contributing to unsatisfactory annotation quality and reduced efficiency.

After analyzing the current collaboration workflow, I found that content strategists and annotators lack an efficient way to collaborate on sharing videos, updating tagging structures, reviewing annotations, and merging data.

WorkFlow Redesign #1.

Optimize the collaborative workflow to meet the desired annotation quality

I created the new workflow to streamline collaboration and thereby ensure the format and quality of annotations. First, it empowers content strategists to assign tasks and share videos and tagging structures directly through the platform. Second, real-time visibility into the ongoing annotation process lets them oversee and manage quality more effectively. Furthermore, following discussions with the team, we decided to add a "reviewer" role to assess annotations.

🤔 While this workflow gives us better quality control, the time cost of manual annotation is still a big issue. How can we reduce the time cost of video annotation?

Annotation Process Observation.

Dig deeper into annotators' behaviors & cognitive process for insights

To figure out how to improve the annotation efficiency and reduce manual labor, I conducted observations to understand how annotators currently work. Participants were asked to think aloud, providing insights into their cognitive processes to help identify design opportunities.

💡 The huge amount of cognitive effort and manual labor required for video annotation could be reduced or replaced with AI assistance!

AI scene segmentation, transcription, and AI tagging can greatly reduce the cognitive load and time cost of annotation.

WorkFlow Redesign #2.

A human-AI collaborative annotation workflow to reduce manual labor and time costs

Based on findings from the observation, I redesigned the annotation workflow to integrate AI assistance. I also facilitated discussions with the development team to choose appropriate technical solutions, e.g., smart screen segmentation, OCR, and object recognition, to bring this concept to life.
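To make the segmentation assist concrete, here is a toy sketch of scene-cut detection by frame differencing. The production system used more sophisticated models; this simplified version, its threshold, and the synthetic "frames" are my own illustrative assumptions:

```python
# Toy sketch of scene-cut detection by frame differencing -- one family of
# techniques behind AI-assisted segmentation. Threshold and data are
# illustrative assumptions, not the production implementation.

def detect_cuts(frames, threshold=60.0):
    """Return indices where a new scene likely starts.

    frames: list of equal-length grayscale pixel arrays (lists of 0-255 ints).
    """
    cuts = []
    for i in range(1, len(frames)):
        prev, curr = frames[i - 1], frames[i]
        # Mean absolute pixel difference between consecutive frames;
        # a large jump suggests a hard cut between scenes.
        diff = sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)
        if diff > threshold:
            cuts.append(i)
    return cuts

# Toy clip: three dark frames, then three bright frames -> one cut at index 3.
dark, bright = [10] * 16, [200] * 16
clip = [dark, dark, dark, bright, bright, bright]
print(detect_cuts(clip))  # prints [3]
```

Proposed cuts like these become pre-filled segment boundaries that annotators only need to confirm or adjust, rather than locate from scratch.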

ideate

How we envisioned the new annotation platform

Platform IA.

Organize the information architecture for the new video content analysis platform

I was responsible for developing the 0-to-1 video content analysis platform. After discussing with the product management team, we prioritized the core features for the minimum viable product: dataset management, tag management, and the annotation tool.

I organized the information architecture, structuring all the important features along with detailed information and interactions. I also mapped out the launch phases by facilitating discussions with the multi-functional team.

Wireframing -
Core User Flow.

Illustrate the main user flow for our MVP

I was responsible for overseeing the entire platform. To ensure a smooth user flow across features, I created wireframes showing the main user flow to be covered in our MVP phase.

Wireframing -
Annotation Tool.

Illustrate the layout of the annotation tool

In determining the layout of the annotation tool, I drew inspiration from various video editing tools, given the similarities in decoding videos into different "layers" (e.g., visual, audio, text, transcripts). After discussions with content strategists, we agreed to adopt a similar layout, which would lower the learning curve for users.

refine

How I iterated the annotation tool based on user tests

Considering that data and tag management already have well-established, widely used design solutions, I concentrated primarily on the user experience of the annotation tool. This area entails more intricate design demands and higher technical costs, and is pivotal to improving annotation quality and efficiency.

Thus, I will only talk about the iterations I made to the annotation tool in this section.

Segmentation.

Facilitate efficient and smooth segmentation

To add a temporal annotation to the video, the first step is to identify where to segment the video. Supporting smooth segmentation is vital to annotation quality and efficiency. Here are some key iterations I made through prototyping and usability testing:

Tag Selection.

Enhance clarity and efficiency in tag selection

After segmentation, annotators need to select a tag for that segment. I iterated the interaction based on the usability testing results and user feedback:

final design

Step 1. Import Data

Now: Content strategists can use the centralized platform to manage datasets and monitor the ongoing annotation process in real-time.

Before: Previously, content strategists had to share videos with annotators through shared folders, resulting in disorganized data and task management.

Step 2. AI Pre-Annotation

Now: AI automates segmentation using screen transitions and transcripts, helping annotators work precisely and efficiently. Additionally, ML models such as product, scene, and music recognition complete some annotations automatically.

Before: Annotators had to watch the video and manually locate the segments. The manual annotation process was labor-intensive and time-consuming.
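One way pre-annotation can combine its two signals (screen transitions and transcript pauses) is to merge their candidate cut times into a single timeline for annotators to confirm. This is a hedged sketch; the function name and tolerance value are my assumptions, not the platform's actual logic:

```python
# Illustrative sketch: merge candidate segment boundaries proposed by two
# AI signals (visual scene cuts and transcript pauses) into one timeline.
# Tolerance and names are assumptions for the sake of the example.

def merge_boundaries(visual_cuts, transcript_cuts, tolerance=0.5):
    """Union two lists of cut times (seconds), collapsing near-duplicates
    that fall within `tolerance` of an already-kept boundary."""
    merged = []
    for t in sorted(visual_cuts + transcript_cuts):
        if not merged or t - merged[-1] > tolerance:
            merged.append(t)
    return merged

visual = [0.0, 3.2, 8.5]        # e.g., from scene-change detection
speech = [3.4, 8.5, 12.0]       # e.g., from transcript pause detection
print(merge_boundaries(visual, speech))  # prints [0.0, 3.2, 8.5, 12.0]
```

Collapsing the near-duplicate pair (3.2 and 3.4) into one boundary spares annotators from reviewing two proposals for the same cut.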

Step 3. Manual Annotation

Now: Annotators can use AI segments and shortcuts to quickly locate the segments. The fixed tag list makes it easier to search and add tags.

Before: Locating segments and adding tags imposed a significant cognitive burden and demanded extensive manual effort.

impact

🎉 Huge improvement in annotation efficiency and quality

The new platform reduced the annotation time for a 30-second video from 90 minutes to 15 minutes. In addition, the error rate decreased significantly, from 15% to about 5%.

🎉 Video insights gained recognition from top-tier clients

We successfully used our own platform to generate 10,000+ annotations within two weeks of the MVP launch. Based on these, we pitched video insights reports to YSL China and Lancome China. The concept of video tagging for insights generation gained recognition from these top-tier clients!

reflection

What I learned.

🧠 Push myself to think from a higher level

As a young startup, Tezign provided me with significant flexibility to explore diverse areas of interest such as research, design, and product management. I was fortunate to work alongside supportive and encouraging colleagues. The crucial lesson I learned was to approach problems from a higher level, question why we are focusing on a specific issue, and clearly define the problem before devising potential solutions.

โšก๏ธ The beauty of building tools

I came to admire the beauty and fun of creating elegant, natural interactions for technical tools. My journey has taught me to do more than just observe users' behaviors; I delve into the intricacies of their cognitive processes during these interactions. In tool design, even subtle changes to user interactions can have a profound impact on the user experience.

Additional Thoughts.

๐Ÿง˜โ€โ™€๏ธ Art of data, science of content

I questioned myself: Can we truly decode unstructured, creative content with semantic depth? Can data genuinely guide us in creating exceptional creative content? I am still enthusiastically exploring how data and AI can comprehend creative content, extract insights, and facilitate content generation. While our primary assessment of video quality currently relies on business metrics such as ROI, click-through rate, and conversion rate, I anticipate that in the future we will have a more comprehensive approach to evaluating videos, ultimately enhancing video marketing strategies.