Adobe Enhances Smart Tags Capability for Marketers to Find the Most Relevant UGC Video


Hong Kong — December 27, 2018 — Adobe (Nasdaq: ADBE) has extended the existing Smart Tags capability for images in Adobe Experience Manager to user-generated content (UGC) videos. This new feature can automatically scan and identify objects in videos, helping marketers to search and filter videos easily without the trouble of sifting through pages of social media posts.  

UGC is an essential tool for reducing content marketing costs, improving the effectiveness of campaigns, and tackling the scale issues marketers face today. Leveraging UGC is key to alleviating the scaling challenges brands face in an increasingly content-hungry and personalized world, as UGC is not only cost-effective but also more authentic and better performing. According to Adweek, 64 percent of social media users seek out UGC before making a purchase, and UGC videos receive 10 times more views than branded videos.

Turning to AI to find the best UGC for the job

Adobe is tapping into computer vision to help automate the UGC curation efforts that were previously done by hand. Smart Tags, powered by Adobe Sensei, automatically scans images and identifies the key objects, object categories, and aesthetic properties to use as descriptive tags. This allows marketers to filter out image content with tags that do not match their search criteria.
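
As a rough illustration of how this kind of tag-based filtering works, the sketch below filters a set of image assets by their auto-generated tags. The ImageAsset structure, the tag names, and the filter_assets helper are hypothetical examples, not the Adobe Experience Manager API.

```python
# Minimal sketch (not the Adobe Sensei API): filtering image assets by auto-generated tags.
from dataclasses import dataclass, field

@dataclass
class ImageAsset:
    asset_id: str
    tags: set[str] = field(default_factory=set)  # objects, categories, aesthetic properties

def filter_assets(assets: list[ImageAsset], required_tags: set[str]) -> list[ImageAsset]:
    """Keep only assets whose auto-generated tags include every required tag."""
    return [a for a in assets if required_tags <= a.tags]

# Example: a marketer searching UGC images for outdoor running shots.
assets = [
    ImageAsset("img-001", {"person", "running", "outdoor", "well-lit"}),
    ImageAsset("img-002", {"food", "table", "indoor"}),
]
print([a.asset_id for a in filter_assets(assets, {"running", "outdoor"})])  # ['img-001']
```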

Yet while Smart Tags has been an effective tool for images, video is by far the most consumed media type on the web today. According to Cisco, video will account for 82 percent of all web traffic by 2021, and the number of videos posted on Instagram quadrupled last year. Videos are far larger than images and add a temporal dimension, making them more challenging to classify, filter, and curate.

Smart Tags for video in Adobe Experience Manager

The Video Auto Tag Adobe Sensei service produces two sets of tags for a video of up to 60 seconds in length. The first set corresponds to the objects, scenes, and attributes depicted in the video; the second corresponds to the actions depicted. These tags are used to improve search and retrieval of videos, allowing marketers to filter out content with tags that do not match their search criteria.
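
To illustrate the idea of the two tag sets, the following sketch models a tagged video as a simple record and checks it against a marketer's search tags. The VideoTagResult record, its field names, and the tag values are assumptions for illustration, not the actual service response format.

```python
# Illustrative sketch only: the two tag sets for a sub-60-second video, modeled as a record.
from dataclasses import dataclass

MAX_VIDEO_SECONDS = 60  # the service tags videos of up to 60 seconds

@dataclass
class VideoTagResult:
    video_id: str
    object_scene_attribute_tags: set[str]  # depicted objects, scenes, and attributes
    action_tags: set[str]                  # depicted actions

def matches_search(result: VideoTagResult, query_tags: set[str]) -> bool:
    """A video matches when every query tag appears in either tag set."""
    all_tags = result.object_scene_attribute_tags | result.action_tags
    return query_tags <= all_tags

result = VideoTagResult("ugc-042", {"beach", "sunset", "person"}, {"surfing"})
print(matches_search(result, {"beach", "surfing"}))  # True
```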

In addition to objects, scenes, and attributes, the service can recognize temporally varying events such as actions and activities in a video, including “drinking” and “jumping”. This is achieved by adapting the image auto-tagger to predict actions, training it on a curated set of “action-rich” videos with accompanying action labels derived from user metadata in an internal Adobe video dataset. The action auto-tagger is applied across multiple frames in the video, and the results are aggregated over time to produce the final action tag set for the video.
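
The apply-across-frames-then-aggregate idea can be sketched roughly as follows: per-frame action scores are averaged over the sampled frames, and actions scoring above a threshold become the video's action tags. The scoring values, sampling, and threshold here are illustrative assumptions, not Adobe's implementation.

```python
# A minimal sketch of aggregating frame-level action predictions into video-level tags.
from collections import defaultdict

def aggregate_action_tags(frame_scores: list[dict[str, float]],
                          threshold: float = 0.5) -> set[str]:
    """Average per-frame action scores over time; keep actions above the threshold."""
    totals: dict[str, float] = defaultdict(float)
    for scores in frame_scores:
        for action, score in scores.items():
            totals[action] += score
    n_frames = max(len(frame_scores), 1)
    return {action for action, total in totals.items() if total / n_frames >= threshold}

# Example: action scores from three sampled frames of a UGC clip.
frames = [
    {"jumping": 0.9, "drinking": 0.1},
    {"jumping": 0.8, "drinking": 0.2},
    {"jumping": 0.7, "drinking": 0.0},
]
print(aggregate_action_tags(frames))  # {'jumping'}
```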

About Adobe Experience Cloud

Adobe offers an end-to-end solution for content creation, marketing, advertising, analytics and commerce. Unlike legacy enterprise platforms with static, siloed customer profiles, Adobe Experience Cloud helps companies deliver consistent, continuous and compelling experiences across customer touch points and channels – all while accelerating business growth.

Adobe Experience Cloud manages more than 233 trillion data transactions annually and US$141 billion in online sales transactions annually. Industry analysts have named Adobe a clear leader in over 20 major reports focused on experience.

About Adobe

Adobe is changing the world through digital experiences. For more information, visit www.adobe.com/hk_en/.
