Posted:
From the moment YouTube Gaming launched in August, we’ve consistently seen a pair of requests from our community: “Where are the chat bots? Where are the stream overlays?” A number of developers were happy to oblige, and some great new tools have launched for YouTube streamers.

With those new tools has come some feedback on our APIs -- in particular, that there aren’t enough of them. So much is happening on YouTube live streams -- chatting, fan funding, sponsoring -- but there’s no good way to get the data out and into the types of apps that streamers want, like on-screen overlays, chat moderation bots and more.

Well well, what have we here? A whole bunch of new additions to the Live Streaming API, getting you access to all those great chat messages, fan funding alerts and new sponsor messages!

  • Fan Funding events, which occur when a user makes a one-time voluntary payment to support a creator.
  • Live Chat events, which let you read the content of a YouTube live chat in real time, as well as add new chat messages on behalf of the authenticated channel (see the sketch after this list).
  • Live Chat bans, which enable the automated application of chat “time-outs” and “bans.”
  • Sponsors, which provides access to the list of YouTube users sponsoring a channel. A sponsor provides recurring monetary support to a creator and receives special benefits.
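
To give a feel for the flow, here’s a rough sketch of polling a broadcast’s live chat with the google-api-python-client library. This is a minimal sketch, not production code: the OAuth credential setup is elided, and "LIVE_CHAT_ID" is a placeholder for the broadcast’s snippet.liveChatId.

    import time
    from googleapiclient.discovery import build

    # Assumes OAuth credentials have already been obtained (flow omitted).
    youtube = build("youtube", "v3", credentials=credentials)

    page_token = None
    while True:
        response = youtube.liveChatMessages().list(
            liveChatId="LIVE_CHAT_ID",  # placeholder: the broadcast's snippet.liveChatId
            part="snippet,authorDetails",
            pageToken=page_token,
        ).execute()
        for message in response["items"]:
            author = message["authorDetails"]["displayName"]
            text = message["snippet"]["displayMessage"]
            print("%s: %s" % (author, text))
        page_token = response.get("nextPageToken")
        # The response tells us how long to wait before polling again.
        time.sleep(response["pollingIntervalMillis"] / 1000.0)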

In addition, we’ve closed a small gap in the LiveBroadcasts API by adding the ability to retrieve and modify the LiveBroadcast object for a channel’s “Stream now” stream.

As part of the development process we gave early access to a few folks, and we’re excited to show off some great integrations that launch today:

  • Using our new Sponsorships feature? Discord will let you offer your sponsors access to private voice and text servers.
  • Add live chat, new sponsors and new fan funding announcements to an overlay with the latest beta of Gameshow.
  • Looking for some help with moderating and managing your live chat? Try out Nightbot, a chat bot that can perform a variety of moderating tasks specifically designed to create a more efficient and friendly environment for your community.
  • Show off your live chat with an overlay in XSplit Broadcaster using their new YouTube Live Chat plugin.

We’ve also spotted some libraries and sample code on GitHub that might help get you started, including this chat library in Go and this one in Python.

We hope these new APIs can bring whole new categories of tools to the creator community. We’re excited to see what you build!

Marc Chambers, Developer Relations, recently watched “ArmA 3| Episode 1|Pilot the CH53 E SS.”

Posted:
Video quality matters, and when HD or HFR playback isn’t smooth, we notice. Chrome noticed. YouTube noticed. So we got together to make YouTube video playback smoother in Chrome, and we call it Project Butter.


For some context, our brains fill in the motion between frames as long as each frame is onscreen for the same amount of time - this is called motion interpolation. In other words, a 30 frames per second video won’t appear smooth unless its frames are evenly spaced, one every 1/30th of a second. Smoothness is more complicated than just this - you can read more about it in this article by Michael Abrash at Valve.


Frame rates, display refresh rates and cadence
Your device’s screen redraws itself at a certain refresh rate. Videos present frames at a certain frame rate. These rates are often not the same. At YouTube we commonly see videos authored at 24, 25, 29.97, 30, 48, 50, 59.94, and 60 frames per second (fps), and these videos are viewed on displays with different refresh rates - the most common being 50Hz (Europe) and 60Hz (USA).


For a video to be smooth we need to figure out the best, most regular way to display its frames - the best cadence. The ideal cadence is the ratio of the display rate to the frame rate. For example, if we have a 60Hz display (a 1/60 second display interval) and a 30 fps clip, 60 / 30 == 2, which means each video frame should be displayed for two display intervals, for a total duration of 2 * 1/60 of a second.
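
As an illustration only - this is not Chrome’s actual implementation - here’s a small Python sketch of how per-frame display counts fall out of that ratio, including the non-integer case:

    import math

    def cadence(display_hz, fps, num_frames=8):
        """Number of display intervals each video frame should occupy.

        When display_hz / fps isn't an integer, rounding the running
        total spreads the remainder across frames (e.g. 24 fps on a
        60Hz display yields the classic 3:2 pattern).
        """
        ratio = display_hz / float(fps)
        counts, prev = [], 0
        for frame in range(1, num_frames + 1):
            total = int(math.floor(frame * ratio + 0.5))  # round half up
            counts.append(total - prev)
            prev = total
        return counts

    print(cadence(60, 30))  # [2, 2, 2, 2, 2, 2, 2, 2] - each frame shown twice
    print(cadence(60, 24))  # [3, 2, 3, 2, 3, 2, 3, 2] - the 3:2 cadence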


We played videos a bunch of different ways and scored them on smoothness.  


Smoothness score
Using off-the-shelf HDMI capture hardware and some special video clips, we computed a percentage score based on the number of times each video frame was displayed relative to a calculated optimal display count. The higher the score, the more frames aligned with the optimal display frequency. Below is a figure showing how Chrome 43 performed when playing a 30fps clip on a 60Hz display:


Smoothness: 68.49%, ~Dropped: 5 / 900 (0.555556%)


The y-axis is the number of times each frame was displayed, while the x-axis is the frame number. As mentioned previously, the calculated ideal display count for a 30fps clip on a 60Hz display is 2. So, in an ideal situation, the graph should be a flat horizontal line at 2, yet Chrome dropped many frames and displayed certain frames for as many as 4 display cycles! The smoothness score reflects this - only 68.49 percent of frames were displayed correctly. How could we track down what was going on?
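
To make the metric concrete, here’s one plausible reconstruction of the scoring in Python - our reading of the description above, not the actual measurement harness:

    def smoothness_score(display_counts, ideal):
        """Percentage of video frames shown for exactly `ideal` display
        intervals. A count of 0 means the frame was dropped; a count above
        the ideal means the frame lingered while others were skipped."""
        on_cadence = sum(1 for count in display_counts if count == ideal)
        return 100.0 * on_cadence / len(display_counts)

    # For the 900-frame, 30fps-on-60Hz run above (ideal count == 2), a
    # perfect playback scores 100%; Chrome 43 managed only 68.49%.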


Using some of the performance tracing tools built into Chrome, we identified timing issues inherent to the existing design for video rendering as the culprit. These issues resulted in both missed and irregular video frames on a regular basis.



There were two main problems in the interactions between Chrome’s compositor (responsible for drawing frames) and its media pipeline (responsible for generating frames):
  1. The compositor had no timely way of knowing when a video frame needed to be displayed. Frames were selected on the media pipeline thread, while the compositor would occasionally come looking for them on the compositor thread - and if the compositor thread was busy, it wouldn’t get the notification in time.
  2. Chrome’s media pipeline didn’t know when the compositor would be ready to draw its next new frame. This led to the media pipeline sometimes picking a frame that was too old by the time the compositor displayed it.


In Chrome 44, we re-architected the media and compositor pipelines to communicate carefully about the intent to generate and display frames. We also improved video frame selection by using the optimal display count information. With these changes, Chrome 44 significantly improved smoothness scores across all video frame rates and display refresh rates:
Smoothness: 99.33%, ~Dropped: 0 / 900 (0.000000%)


Smooth like butter. Read more in the public design document if you’re interested in further details.


Dale Curtis, Software Engineer, recently watched WARNING: SCARIEST GAME IN YEARS | Five Nights at Freddy's - Part 1
Richard Leider, Engineering Manager, recently watched Late Art Tutorial.
Renganathan Ramamoorthy, Product Manager, recently watched Video Game High School

Posted:

Video thumbnails are often the first thing viewers see when they look for something interesting to watch. A strong, vibrant, and relevant thumbnail draws attention, gives viewers a quick preview of the video’s content, and helps them find content more easily. Better thumbnails lead to more clicks and views for video creators.

Inspired by the remarkable recent advances of deep neural networks (DNNs) in computer vision, such as image and video classification, our team recently launched an improved automatic YouTube "thumbnailer" to help creators showcase their video content. Here is how it works.

The Thumbnailer Pipeline
While a video is being uploaded to YouTube, we first sample frames from the video at one frame per second. Each sampled frame is evaluated by a quality model and assigned a single quality score. The frames with the highest scores are selected, enhanced, and rendered as thumbnails in different sizes and aspect ratios. Of all the components, the quality model is the most critical, and it turned out to be the most challenging to develop. In the latest version of the thumbnailer algorithm, we used a DNN for the quality model. So, what is the quality model measuring, and how is the score calculated?

The main processing pipeline of the thumbnailer.
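
In code, the sampling-and-scoring stage might look roughly like the sketch below - a hypothetical illustration using OpenCV, where quality_model.score() stands in for the DNN described next; none of this is YouTube’s actual implementation:

    import cv2

    def top_frames(video_path, quality_model, k=3):
        """Sample ~one frame per second and return the k highest-scoring ones."""
        cap = cv2.VideoCapture(video_path)
        fps = int(round(cap.get(cv2.CAP_PROP_FPS) or 30))
        scored, index = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % fps == 0:  # one sampled frame per second of video
                scored.append((quality_model.score(frame), frame))
            index += 1
        cap.release()
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [frame for _, frame in scored[:k]]  # then enhance and render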

(Training) The Quality Model
Unlike the task of identifying whether a video contains your favorite animal, judging the visual quality of a video frame can be very subjective - people often have very different opinions and preferences when selecting frames as video thumbnails. One of the main challenges we faced was collecting a large set of well-annotated training examples to feed into our neural network. Fortunately, in addition to having algorithmically generated thumbnails, many YouTube videos also come with carefully designed custom thumbnails uploaded by their creators. Those thumbnails are typically well framed, in focus, and centered on a specific subject (e.g. the main character in the video). We consider these custom thumbnails from popular videos to be positive (high-quality) examples, and randomly selected video frames to be negative (low-quality) examples. Some examples of the training images are shown below.

Example training images.
The visual quality model essentially solves a binary classification problem: given a frame, is it of high quality or not? We trained a DNN on this set with an architecture similar to the Inception network in GoogLeNet, which achieved top performance in the ImageNet 2014 competition.
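
As a rough sketch of that setup - using today’s Keras API rather than the original training stack - the binary classifier boils down to an Inception-style backbone with a single sigmoid output:

    import tensorflow as tf

    # Inception-style backbone (a stand-in for the GoogLeNet-like network).
    backbone = tf.keras.applications.InceptionV3(
        include_top=False, pooling="avg", input_shape=(299, 299, 3))

    model = tf.keras.Sequential([
        backbone,
        tf.keras.layers.Dense(1, activation="sigmoid"),  # high quality or not
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Positives: creator-uploaded custom thumbnails from popular videos.
    # Negatives: randomly sampled video frames.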

Results
Compared to the previous automatically generated thumbnails, the DNN-powered model selects frames of much better quality. In a human evaluation, thumbnails produced by the new model were preferred over those from the previous thumbnailer in more than 65% of side-by-side ratings. Here are some examples of how the new quality model performs on YouTube videos:

Example frames with low and high quality score from the DNN quality model, from video “Grand Canyon Rock Squirrel”.
Thumbnails generated by old vs. new thumbnailer algorithm.

We recently launched this new thumbnailer across YouTube, which means creators can start to choose from higher quality thumbnails generated by our new thumbnailer. Next time you see an awesome YouTube thumbnail, don’t hesitate to give it a thumbs up. ;)

Posted:
Want to get all of your YouTube data in bulk? Are you hitting the quota limits while accessing analytics data one request at a time? Do you want to be able to break down reports by more dimensions? What about accessing assets and revenue data?
With the new YouTube Bulk Reports API, your authorized application can retrieve bulk data reports in the form of CSV files that contain YouTube Analytics data for a channel or content owner. Once a reporting job is activated, reports are generated daily, and each report contains data for a unique 24-hour period.

While the existing YouTube Analytics API supports targeted, real-time queries of much of the same data, the YouTube Bulk Reports API is designed for applications that retrieve and import large data sets, then use their own tools to filter, sort, and mine that data.

As of now, the API supports video, playlist, ad performance, estimated earnings, and asset reports.

How to start developing


  • Choose your reports:
    • Video reports provide statistics for all user activity related to a channel's videos or a content owner's videos. For example, these metrics include the number of views or ratings that videos received. Some video reports for content owners also include earnings and ad performance metrics.
    • Playlist reports provide statistics that are specifically related to video views that occur in the context of a playlist.
    • Ad performance reports provide impression-based metrics for ads that ran during video playbacks. These metrics account for each ad impression, and each video playback can yield multiple impressions.
    • Estimated earnings reports provide the total earnings for videos from Google-sold advertising sources as well as from non-advertising sources. These reports also contain some ad performance metrics.
    • Asset reports provide user activity metrics related to videos that are linked to a content owner's assets. For its data to be included in the report, a video must have been uploaded by the content owner and then claimed as a match of an asset in the YouTube Content ID system.

  • Schedule reports:
  1. Get an OAuth token (authentication credentials)
  2. Call the reportTypes.list method to retrieve a list of the available report types
  3. Create a new reporting job by calling jobs.create and passing the desired report type (and, in the future, a query)

  • Retrieve reports:
  1. Get an OAuth token (authentication credentials)
  2. Call the jobs.list method to retrieve the list of available reporting jobs and note the ID of the job you want.
  3. Call the reports.list method with the jobId filter parameter set to the ID found in the previous step to retrieve the list of downloadable reports that job has created.
  4. Check each report’s last modified date to determine whether it has been updated since you last retrieved it.
  5. Fetch the report from the URL obtained in step 3. (A Python sketch of the scheduling and retrieval flow follows the list below.)

  • Use our sample code and tools:
    • Client libraries for many different programming languages can help you implement the YouTube Reporting API as well as many other Google APIs.
    • Don't write code from scratch! Our Java, PHP, and Python code samples will help you get started.
    • The APIs Explorer lets you try out sample calls before writing any code.
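
Putting the scheduling and retrieval steps together, a minimal Python sketch with the google-api-python-client library might look like this (the OAuth flow is omitted and the job name is arbitrary):

    from googleapiclient.discovery import build

    # Assumes OAuth credentials have already been obtained (flow omitted).
    reporting = build("youtubereporting", "v1", credentials=credentials)

    # 1. Discover the available report types.
    report_types = reporting.reportTypes().list().execute()

    # 2. Schedule a daily reporting job for one of them.
    job = reporting.jobs().create(body={
        "reportTypeId": report_types["reportTypes"][0]["id"],
        "name": "My daily channel report",  # arbitrary name
    }).execute()

    # 3. Later: list the reports the job has generated and grab their URLs.
    reports = reporting.jobs().reports().list(jobId=job["id"]).execute()
    for report in reports.get("reports", []):
        print(report["startTime"], report["downloadUrl"])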


Cheers,


Posted:

2005: YouTube is born

Me at the Zoo is the first video uploaded to YouTube

2006: Google buys YouTube

One year after YouTube launches, videos play in the FLV container with the H.263 codec at a maximum resolution of 240p. We scale videos up to 640x360, but you can still click a button to play at original size.

2007: YouTube goes mobile

YouTube is one of the original applications on the iPhone. Because the iPhone doesn't support Flash, we re-encode every single YouTube video into H.264 in the MP4 container. YouTube videos get a resolution bump to 360p.

2008: YouTube kicks it up to HD

With upload sizes and download speeds growing, videos jump in size up to 720p HD. Lower resolution files get higher quality by squeezing Main Profile H.264 into FLVs.

2009: YouTube enters the third dimension

YouTube supports 3D videos, 1080p and live streaming.

2010: YouTube's on TV

The biggest screen in your house now gets YouTube courtesy of Flash Lite and ActionScript 2. 2010 also sees the first playbacks with HTML5 <video> thanks to VP8, an open source video codec. We bump up the maximum resolution to 4K, known as "Original" at the time.

2011: YouTube slices bread (and videos) to battle buffering

We launch Sliced Bread, codename for a project that enables adaptive bitrate in the Flash player by requesting videos a little piece at a time. Users see higher quality videos more often and buffering less often.

2012: YouTube live streaming hits prime time

We scale up our live streaming infrastructure to support the 2012 Summer Olympics, with over 1,200 events. In October, over 8 million people watch live as Felix Baumgartner jumps from the stratosphere.

2013: YouTube's first taste of VP9

We start our first experiments with VP9 in Chrome, which brings higher quality video at less bandwidth. Adaptive bitrate streaming in the HTML5 and Flash players moves to the DASH standard using both FMP4 and MKV video containers.

2014: Silky smooth 60fps comes to YouTube

High frame rate isn't just for games anymore: YouTube now supports videos that play at up to 60fps. Gangnam Style becomes the first YouTube video to break the MAX_INT barrier with more than 2^32 / 2 - 1 (2,147,483,647) views.

2015: YouTube adds spherical video (look behind you!)

You can now upload videos that wrap 360 degrees around the viewer. Even 4K videos can play up to 60fps. HTML5 becomes the default YouTube web player.

Richard Leider, Engineering Manager, recently watched David Bowie - Oh You Pretty Things
Jonathan Levine, Product Manager, recently watched Candide Thovex - One of those days 2

Posted:
UPDATE 08/03/15: Starting today, the API v2 comments, captions, and video flagging services are turned down.
------------------------------------------------------------------------------------------------------------------------------------------------------
UPDATE 06/03/15: Starting today, most YouTube Data API v2 calls will receive 410 Gone HTTP responses.
------------------------------------------------------------------------------------------------------------------------------------------------------
UPDATE 05/06/15: Starting today, YouTube Data API v2 video feeds will only return the support video.
------------------------------------------------------------------------------------------------------------------------------------------------------
UPDATE: With the launch of video abuse reporting and video search for developers, the Data API v3 supports every feature scheduled to be migrated from the soon-to-be-turned-down Data API v2.
------------------------------------------------------------------------------------------------------------------------------------------------------

With the recent additions of comments, captions, and RSS push notifications, the Data API v3 supports almost every feature scheduled to be migrated from the soon-to-be-turned-down Data API v2. The only remaining feature to be migrated is video flagging, which will launch in the coming days. The new API brings in many features from the latest version of YouTube, making sure your users are getting the best YouTube experience on any screen.

For a quick trip down memory lane: in March 2014, we announced that the Data API v2 would be retired on April 20, 2015, and would be shut down soon thereafter. To help with your migration, we launched the migration guide in September 2014, and we have also been giving you regular notices on v3 feature updates.

Retirement plan
If you’re still using the Data API v2, today we’ll start showing a video at the top of your users’ video feeds that will notify them of how they might be affected. Apart from that, your apps will work as usual.
In early May, Data API v2 video calls will start returning only the warning video introduced on April 20. Users will not be able to view other videos on apps that use the v2 API video calls. See youtube.com/devicesupport for affected devices.

By late May, v2 API calls, except for comments and captions, will receive 410 Gone HTTP responses. You can test your application’s reaction to this response by pointing the application at eol.gdata.youtube.com instead of gdata.youtube.com. While you should migrate your app as soon as possible, the comments and captions features will keep working in the Data API v2 until the end of July 2015 to avoid any outages.
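
For example, a quick illustrative check from Python - the feed path and query below are just an example v2 call; substitute whatever requests your app actually makes:

    import requests

    # gdata.youtube.com would serve this v2 search feed; the eol host
    # simulates the turn-down behavior described above.
    resp = requests.get("http://eol.gdata.youtube.com/feeds/api/videos",
                        params={"q": "cats", "v": "2"})
    print(resp.status_code)  # expect: 410 (Gone)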

How you can migrate
Check out the frequently asked questions and migration guide for the most up-to-date instructions on how to update specific features to use the Data API v3. The guide now lists all of the Data API v2 functionality that is being deprecated and won't be offered in the Data API v3. It also includes updated instructions for a few newly migrated features, like comments, captions, and video flagging.

- Ibrahim Ulukaya and the YouTube for Developers team

Posted:
YouTube Sentiment Analysis Demo
Cindy 3 hours ago
I wish my app could manage YouTube comments.

Ibrahim 2 hours ago
Then it's your day today. With the new YouTube Data API (v3) you can now have comments in your app. Just register your application to use the v3 API and then check out the documentation for the Comments and CommentThreads resources and their methods.

Andy 2 hours ago
+Cindy R u still on v2? U know the v2 API is being deprecated on April 20, and you’ve updated to v3 right?

Andy 1 hour ago
+Ibrahim I can haz client libraries, too?

Ibrahim 30 minutes ago
Yes, there are client libraries for many different programming languages, and there are already Java, PHP, and Python code samples.

Matt 20 minutes ago
My brother had a python and he used to feed it mice. Pretty gross!

Cindy 10 minutes ago
Thanks, +Ibrahim. This is very cool. The APIs Explorer lets you try out sample calls before writing any code, too.

Ibrahim 5 minutes ago
Check out this interactive demo that uses the new comment retrieval feature and the Google Cloud Prediction API. The demo gauges audience sentiment for any video by retrieving the video's comments and feeding them to the Cloud Prediction API for sentiment analysis.
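
For the curious, retrieving a video's comment threads with the Python client library looks roughly like this - a minimal sketch, with "VIDEO_ID" and the API key as placeholders:

    from googleapiclient.discovery import build

    youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")

    threads = youtube.commentThreads().list(
        part="snippet",
        videoId="VIDEO_ID",
        textFormat="plainText",
    ).execute()

    for thread in threads["items"]:
        top = thread["snippet"]["topLevelComment"]["snippet"]
        print("%s: %s" % (top["authorDisplayName"], top["textDisplay"]))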