What is Video Bitrate, and How Does it Affect Video Quality?
https://www.vdocipher.com/blog/2020/09/video-quality-bitrate-pixels/
Tue, 25 Jun 2024 00:43:45 +0000

Some of the most common questions I get asked on video quality are-

  • What is the size of a video for a certain pixel quality? (Say 1 hour 1080p video)
  • What does kbps mean? How much video bandwidth will 1 hour 1000 kbps video consume?
  • I want my videos in 1080/720/480/360p like YouTube. How does VdoCipher enable that?
  • How many video quality options do I need to ensure smooth playback across the world?
  • Why does VdoCipher provide video bitrate (kbps) as quality options and not pixels (p)?

Here are some key definitions and concepts to help everybody better understand video quality parameters.

What is Video Quality?

Video quality is how good a video and its individual frames look on your screen. It is essentially the level of accuracy and detail in the images that a video displays.

Video quality is influenced by different parameters such as:

Video Pixels: Video pixels define the resolution of a video, telling us how many pixels the video has horizontally and vertically. For example, a Full HD video has a resolution of 1920×1080, which indicates the pixels present. Higher pixel counts generally mean a clearer and sharper image, as more detail is present.

Video Bitrate: Bitrate is the amount of data processed in a unit of time, typically measured in kilobits per second (Kbps) or megabits per second (Mbps). A higher bitrate usually translates to higher video quality, as more data is used to represent each second of video. However, it also means larger file sizes and may require more bandwidth for streaming.

Frame Rate: Measured in frames per second (fps), it determines how many individual frames are displayed per second. A higher video frame rate generally results in smoother motion in the video.

Compression: Video compression reduces file size, which can be crucial for storage and streaming. However, excessive compression can degrade video quality, causing artifacts and loss of detail.

Color Depth: Color Depth is the number of bits used to represent the color of each pixel. Higher color depth allows for a broader range of colors and more detailed color gradation.
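
The arithmetic behind color depth is simple; this illustrative Python snippet (not from the article) computes how many distinct colors a given depth can represent:

```python
# Sketch: number of distinct colors for a given color depth,
# where "depth" is the total bits per pixel across all channels.
def color_count(bits_per_pixel):
    return 2 ** bits_per_pixel

print(color_count(24))  # 16777216 ("true color": 8 bits per RGB channel)
print(color_count(30))  # 1073741824 (10 bits per channel)
```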

Aspect Ratio: The ratio of width to height (e.g., 16:9). It doesn’t directly affect quality but choosing the wrong aspect ratio can result in a stretched or compressed image.

Codec: Video Codecs are used to compress and decompress video files. Different codecs may handle colors, motion, and detail differently, impacting the final quality.

Noise: Unwanted random variations in brightness or color information in images, which can degrade video quality.

Balancing these parameters is crucial to achieving optimal video quality while managing file sizes and streaming requirements.

Video quality and the parameters that affect it

What are Pixels (p) – 1080 / 720 / 480 / 360 / 240 p

Pixels as a quality index essentially mean the height of the video in pixels. Thus, a video quality of 1080p means the height of the video is 1080 pixels. The next question is: what is the width of the video? There is no single technically correct answer, but by common convention, online video uses a 16:9 aspect ratio. Thus the width of the video becomes 1080 × 16/9 = 1920 pixels.

So, by common convention, 1080p means 1080 pixels of height and 1920 pixels of width. Similarly, 480p means 480 pixels of height and 480 × 16/9 ≈ 853 pixels of width. However, this may vary if the aspect ratio of the video is different (such as 4:3). So, until you know the video's aspect ratio, you can't determine the exact number of pixels in a video.
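
The width arithmetic above can be sketched in a few lines of Python (an illustrative helper, not VdoCipher code):

```python
# Sketch: compute a video's width in pixels from its pixel height
# and aspect ratio, as described above.
def video_width(height_px, aspect_w=16, aspect_h=9):
    """Return the width in pixels for a given height and aspect ratio."""
    return round(height_px * aspect_w / aspect_h)

print(video_width(1080))        # 16:9 -> 1920
print(video_width(480))         # 16:9 -> 853
print(video_width(480, 4, 3))   # 4:3  -> 640
```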

Does pixel count directly correspond to a size? What will be the size of a 1-hour 1080p video?

Surprisingly, there is no fixed answer. If pixel quality is fixed, then the number of pixels in a video is fixed: a 1080p video has 1920×1080 = 2,073,600 pixels per frame. But how much data is captured in the video is not determined by the pixel count alone. Thus, YouTube, Vimeo, and VdoCipher can produce different file sizes for videos of the same length and the same pixel quality. It is even possible that a 1-hour 1080p video from VdoCipher is smaller than a 720p video from Vimeo.

So, the next obvious question is: what exactly determines the video size? It is the video bitrate (generally expressed in kbps, e.g. 1500 kbps). Please read the next section to understand bitrate as a quality index.



What is Video Bitrate (kbps) – 1500/ 1000/ 600 /300 kbps

Video bitrate is the amount of video data transferred per unit of time. A high bitrate is one of the most vital factors in the quality of a video: a good-looking video combines a high bitrate with a high resolution and frame rate. I hope that answers what bitrate means in video. For a particular video, you can use constant bitrate or variable bitrate; find out more about CBR vs VBR in the article linked.

How is video bitrate linked to the size of the video?

Bitrate is generally expressed in kbps, which means kilobits of data per second. So the size of a 1-hour 1500 kbps video is 1500 × 60 × 60 = 5,400,000 kilobits = 5,400,000/8000 MB = 675 MB per hour of video data.

Similarly, a 1-hour 1000 kbps video will be 450 MB, and a 600 kbps video 270 MB.
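
The size arithmetic above can be expressed as a small Python helper (illustrative only):

```python
# Sketch of the size arithmetic above: kilobits per second -> megabytes.
# Dividing by 8000 converts kilobits to megabytes (8 bits per byte,
# 1000 kB per MB, using decimal units).
def video_size_mb(bitrate_kbps, duration_hours):
    seconds = duration_hours * 3600
    return bitrate_kbps * seconds / 8000

print(video_size_mb(1500, 1))  # 675.0 MB
print(video_size_mb(1000, 1))  # 450.0 MB
print(video_size_mb(600, 1))   # 270.0 MB
```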

What is the Relation between video pixels(p) and video bitrate (kbps)?

There is no precise technical relation between pixels and bitrate. For the same streaming provider, the higher the pixel count, the higher the bitrate, and vice versa; however, as already mentioned, different service providers can offer different pixel counts even at the same bitrate. Pixels define the resolution of the video, while bitrate is the average data size of the video file per second of playback. There can be high-resolution videos with low bitrates and low-resolution videos with very high bitrates. This imbalance comes from the compression mathematics used to represent the picture with the least file size; encoders can force the bitrate down to an arbitrarily small value at the cost of quality.

What bitrate/pixels does VdoCipher use for HD streaming?

VdoCipher uses a wide range of bitrates for 1080p HD. It depends on the size of the customer's upload and the type of content: media or e-learning (and within e-learning, whether it is screen capture, animation, classroom recording, etc.). The 1080p bitrate is capped at 2000 kbps for video course content; for certain high-motion movies it is kept in the 2500 to 5000 kbps range; and for low-motion lectures it can be as low as 600 kbps. You can read more about SD vs HD in this blog.

How does Video Bitrate affect Video quality?

Video quality is directly related to the video bitrate, generally, a higher video bitrate would mean higher quality, and a lower bitrate would mean lower quality. 

Although it might not always be the case, an unreasonably high bitrate can cause buffering issues or glitches, as the user's device and data bandwidth might not be able to process the file. This is why it is important to find the optimum bitrate for your videos.

Even at the same resolution, your optimum bitrate might vary depending on the frame rate required for the video. For video lectures at about 30 fps it can go up to 2000 kbps, while content with a higher frame rate such as 60 fps or more can go up to 5000 kbps.
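
One common way to estimate an optimum bitrate is a bits-per-pixel rule of thumb: multiply resolution by frame rate by a per-pixel budget. This is a hedged sketch, not a VdoCipher formula; the bpp values below are assumptions chosen to roughly match the figures above:

```python
# Hedged rule of thumb: pick a target "bits per pixel per frame" (bpp)
# for the content type, then
#   bitrate_kbps = width * height * fps * bpp / 1000
# The bpp values used below are illustrative assumptions.
def target_bitrate_kbps(width, height, fps, bpp):
    return width * height * fps * bpp / 1000

# Low-motion 1080p30 lecture at ~0.03 bpp -> roughly 1870 kbps,
# in the ballpark of the 2000 kbps figure above.
print(round(target_bitrate_kbps(1920, 1080, 30, 0.03)))  # 1866
# High-motion 1080p60 at ~0.04 bpp -> roughly 5000 kbps.
print(round(target_bitrate_kbps(1920, 1080, 60, 0.04)))  # 4977
```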

Other factors that affect Video Quality

Apart from video bitrate, there are other parameters that affect the quality of the video. Generally, there is a direct correlation between the size and the quality of the video. Video frame rate and video resolution also have a good impact on video quality. 

  • Video frame rate is the number of distinct frames or images used to play the video. It is measured in fps (frames per second), which defines how many frames appear in each second of video. A 30 fps video shows 30 different frames per second; as the frame rate increases, motion appears smoother and more detail can be shown. Sports, animation, and gaming videos generally use a higher frame rate.
  • Video resolution is the number of pixels in each frame of the video. A video with more pixels has the potential for better quality, and this is the most common term used to define video quality. Most video platforms let you choose between different resolutions. The most commonly used resolutions are 480p, 720p, 1080p, and 4K.

Why does YouTube use pixels as a quality parameter, while VdoCipher uses video bitrate?

There are two primary reasons –

  1. Video bitrate directly corresponds to size, and hence to bandwidth consumption and cost. A 1000 kbps video consumes double the bandwidth of a 500 kbps video. Pixels have no such direct correlation.
  2. VdoCipher can provide higher pixel quality even at low bitrates. In many cases, VdoCipher can provide 1080p or 720p HD even in the 500–900 kbps range, so there is no need for a lower pixel parameter.


How many video quality options do I need to ensure smooth playback across the world?

From VdoCipher's experience:

  1. For movies/serials with a lot of motion – 3 or at most 4 qualities. We typically use 4000/5000 kbps, 2000 kbps, 800 kbps, and 600 kbps.
  2. For educational content – 3 in most cases: 2000, 800, 400 kbps, or sometimes 1500, 800, 300 kbps.

VdoCipher has customers across all 6 continents. (Sorry, we don't have a customer in Antarctica yet :D) The bitrate and quality optimizations are made keeping in mind the slower connections of Asian and African users. Over time, they have worked well across all geographies, ensuring a great viewing experience.
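
The ladders described above can be sketched as a simple lookup table (the kbps values come from the text; the structure is illustrative, not VdoCipher's actual configuration):

```python
# Sketch of the quality ladders described above as a lookup table.
QUALITY_LADDERS_KBPS = {
    "movies_high_motion": [4000, 2000, 800, 600],  # 3-4 renditions
    "educational": [2000, 800, 400],               # 3 renditions
}

def ladder_for(content_type):
    """Return the bitrate ladder (kbps, highest first) for a content type."""
    return QUALITY_LADDERS_KBPS[content_type]

print(ladder_for("educational"))  # [2000, 800, 400]
```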

Video Bitrate and Data usage for HD (720p), FHD (1080p) and UHD (2160p) video streaming

Video bitrate for HD, FHD (Full HD), and UHD (Ultra HD) streaming depends on various factors, including the codec (e.g., H.264, H.265, VP9), frame rate, and compression efficiency. However, the figures below are often cited for video streaming services.

Resolution | Typical Bitrate Range | Data Usage at an Average Bitrate
1280×720 (HD) – 720p | 2 Mbps to 4 Mbps | 1.35 GB/hr @ 3 Mbps
1920×1080 (FHD) – 1080p | 4 Mbps to 8 Mbps | 2.25 GB/hr @ 5 Mbps
3840×2160 (UHD) – 2160p/4K | 15 Mbps to 68 Mbps | 11.25 GB/hr @ 25 Mbps

Note: The above figures can vary. For example, H.265 (HEVC) is more efficient than H.264. It can deliver similar video quality at about half the bitrate. Moreover, the actual bitrate can change dynamically based on your network conditions if you use adaptive bitrate streaming.
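
The data-usage column can be reproduced with a one-line conversion (illustrative, using decimal units as streaming services typically do):

```python
# Sketch of the data-usage column above: megabits per second -> GB/hr.
# 1 GB = 8000 megabits in decimal units (8 bits per byte, 1000 MB per GB).
def gb_per_hour(mbps):
    return mbps * 3600 / 8000

print(gb_per_hour(3))   # 1.35  (720p @ 3 Mbps)
print(gb_per_hour(5))   # 2.25  (1080p @ 5 Mbps)
print(gb_per_hour(25))  # 11.25 (4K @ 25 Mbps)
```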

Video Enhancement: Uplifting the Viewing Experience

Video quality plays a vital role in the viewing experience. But what do you do when the raw footage isn't up to par? This is where video enhancement comes to the rescue.

Video enhancement means improving and optimizing the quality of video through various means, from making colors pop to smoothing out shaky footage. Enhancement techniques can turn subpar footage into professional-grade content.

Here you'll find some steps on how you can use video enhancement to improve video quality, ultimately uplifting your users' viewing experience.

Video Enhancement Techniques

Several different video enhancement techniques can be employed to enhance video quality:

Upscaling:

Upscaling involves increasing the resolution of a video, making it appear clearer and more detailed. Although it cannot add detail that was not in the original footage, it makes the video more compatible with higher-resolution displays: the main difference you'll notice before and after upscaling is how a low-resolution video looks on a high-resolution display.

You can increase your video's resolution using software like Adobe Premiere Pro or PowerDirector, which have built-in upscaling features. Remember, while upscaling won't add more detail than was present in the original footage, it will improve compatibility with higher-resolution displays.

Noise Reduction: 

Digital noise, such as grain or specks that distort the video, can often occur during video recording, particularly in low-light situations. Noise reduction tools can minimize this noise, leading to a smoother, cleaner video. 

In AI-based approaches, video is processed frame by frame using three frames (previous, current, and next) as input; an enhanced frame is produced via inference with a pre-trained neural network model.

You can clean up digital noise such as grain or specks with noise reduction features found in video editing software. For instance, in Adobe Premiere Pro you can find this under the 'Effects' tab, labeled 'Denoise'.

Stabilization:

Shaky footage can often be an issue with handheld recordings. Stabilization helps to smooth out these shakes, creating a more pleasing and professional-looking video.

Make shaky footage steadier with stabilization options in your editing software. In iMovie, for example, this option is found under the ‘Stabilization’ tab after you’ve selected a clip in your timeline.

You can also use Adobe Premiere Pro for stabilization: the Warp Stabilizer effect smooths out unwanted camera shake in just a few clicks, with precision fine-tuning so you can get exactly the look and feel you want.

Color Correction: 

This involves adjusting the colors in your video to make them appear more natural or to achieve a specific visual aesthetic. Color correction can make your videos more visually pleasing and engaging.

Adjust the colors in your video for a more natural or aesthetically pleasing look. Tools like Final Cut Pro and PowerDirector have robust color correction features.

Video Enhancement Tools and Software

There are numerous video enhancement software options available, from professional-grade software like Adobe Premiere Pro and Final Cut Pro, to more user-friendly options like iMovie or PowerDirector. These tools offer a range of enhancement features, allowing you to adjust various aspects of your video to improve its overall quality.

Impact of Video Enhancement

Improving video quality through enhancement can greatly impact viewer engagement. A high-quality video keeps the audience’s attention, reducing bounce rates and improving overall satisfaction. In an era where viewer expectations are higher than ever, delivering high-quality video content is crucial.

By addressing these facets of video enhancement, you’ll be better equipped to optimize your video content, delivering a viewing experience that resonates with audiences and meets today’s high standards of video quality.

How To Enhance Video Quality?

Enhancing video quality involves several techniques that can improve the visual experience for viewers. Here are some effective methods:

1. Increase Bitrate

  • Definition: Bitrate refers to the amount of data processed per unit of time in a video file.
  • Implementation: Higher bitrates can lead to better quality because more data is available to create each frame. However, it also results in larger file sizes.
  • Recommendation: Aim for a bitrate that balances quality and file size. For example, a 1080p video typically requires a bitrate between 8,000 and 12,000 kbps.

2. Use High-Resolution Source Files

  • Definition: Resolution is the number of pixels in each dimension that a video displays.
  • Implementation: Start with high-resolution footage, such as 4K or at least 1080p. Higher resolutions provide more detail and clarity.
  • Recommendation: Always capture and edit in the highest resolution possible, then downscale if necessary.

3. Optimize Encoding Settings

  • Definition: Encoding is the process of converting video files into a digital format.
  • Implementation: Use modern codecs like H.264 or H.265, which offer high quality at lower bitrates.
  • Recommendation: Adjust settings like profile, level, and compression rate for optimal quality. Tools like HandBrake can help optimize these settings.
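
As a hedged illustration of such encoding settings (not from the article), here is how an H.264 encode with an explicit bitrate cap might be assembled for ffmpeg. The flags shown are standard ffmpeg options, but the file names and values are placeholders to adjust for your content:

```python
# Hedged sketch: assemble an ffmpeg command line for an H.264 encode
# with a capped bitrate, suitable for streaming.
def h264_encode_cmd(src, dst, bitrate_kbps=2000, height=720):
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",                   # H.264 encoder
        "-b:v", f"{bitrate_kbps}k",          # target video bitrate
        "-maxrate", f"{bitrate_kbps}k",      # cap bitrate peaks
        "-bufsize", f"{2 * bitrate_kbps}k",  # rate-control buffer
        "-vf", f"scale=-2:{height}",         # resize, preserve aspect ratio
        "-c:a", "aac", "-b:a", "128k",       # AAC audio
        dst,
    ]

# Pass the list to subprocess.run(cmd, check=True) to execute.
print(" ".join(h264_encode_cmd("lecture.mp4", "lecture_720p.mp4")))
```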

4. Apply Filters and Enhancements

  • Definition: Filters and enhancements can improve video clarity, brightness, and color accuracy.
  • Implementation: Use video editing software to apply sharpening, noise reduction, color correction, and contrast enhancement.
  • Recommendation: Tools like Adobe Premiere Pro, Final Cut Pro, and DaVinci Resolve offer advanced features for video enhancement.

5. Ensure Good Lighting and Equipment

  • Definition: Proper lighting and quality equipment significantly affect video quality.
  • Implementation: Use adequate lighting to avoid shadows and grainy footage. High-quality cameras and lenses capture better details and colors.
  • Recommendation: Invest in good lighting setups and use cameras capable of recording at high resolutions and bitrates.

6. Improve Internet Bandwidth

  • Definition: For streaming videos, internet bandwidth affects the quality viewers experience.
  • Implementation: Ensure a stable and high-speed internet connection to stream videos at higher resolutions without buffering.
  • Recommendation: For HD streaming, a minimum upload speed of 5 Mbps is recommended, while 4K streaming may require 25 Mbps or higher.

How To Change Video Quality?

Changing video quality allows viewers to adjust the resolution and bitrate based on their internet speed and device capabilities. Here’s how to do it:

1. In Video Players

  • Implementation: Most video players (like YouTube, VLC, etc.) offer quality settings within their interface.
  • Steps:
    1. Click on the settings icon (usually a gear symbol).
    2. Select the ‘Quality’ option.
    3. Choose the desired resolution (e.g., 144p, 360p, 720p, 1080p).
  • Recommendation: Allow automatic quality adjustment based on the viewer’s internet speed for the best experience.

2. Through Video Editing Software

  • Implementation: Use editing software to export videos at different quality settings.
  • Steps:
    1. Import the video into the software (e.g., Adobe Premiere Pro, Final Cut Pro).
    2. Choose ‘Export’ and select the desired resolution and bitrate.
    3. Save the new file with the adjusted quality.
  • Recommendation: Create multiple versions of your video to cater to different audience needs.

3. Using Online Converters

  • Implementation: Desktop tools like HandBrake and online converters like Clipchamp and Online-Convert allow for easy quality adjustments.
  • Steps:
    1. Upload the video to the converter.
    2. Select the desired output resolution and bitrate.
    3. Download the converted video.
  • Recommendation: Ensure the chosen converter maintains the video’s original aspect ratio and quality as much as possible.

By following these guidelines, you can significantly enhance and manage video quality, providing a better viewing experience for your audience.

Video Quality FAQ Summary 

What is the video bitrate for 1080p?

There is no precise technical relation between pixels and bitrate. For the same streaming provider, the higher the pixels, the higher the bitrate, and vice versa; different service providers can offer different pixel counts even at the same bitrate. If pixel quality is fixed, the number of pixels is fixed: a 1080p video has 1920×1080 = 2,073,600 pixels per frame. But the resulting data size is not determined directly by the pixel count. Thus, YouTube, Vimeo, and VdoCipher can provide different bitrates/sizes for videos of the same length and the same pixel quality.

What is the relation between pixels(p) and bitrate (kbps)?

There is no precise technical correlation between pixels and bitrate. For the same streaming provider, the higher the pixels, the higher the bitrate, and vice versa; different video hosting providers can offer different pixel counts even at the same bitrate. Pixels define the resolution of the video, while bitrate is the average data size of the video file per second of playback. There can be high-resolution videos with low bitrates and low-resolution videos with very high bitrates. This imbalance comes from the compression mathematics used to represent the picture with the least file size; encoders can force the bitrate down to an arbitrarily small value at the cost of quality. You should choose the pixel resolution based on the content of the video and the target display, then choose a bitrate based on the limitations of the transmission medium, such as internet speed.

Does video bitrate affect quality ?

Yes. Video bitrate is directly correlated with video quality: the higher the bitrate, the higher the video quality. But bitrate is not the only parameter affecting visual quality; pixel count also plays a role. Bitrate is generally expressed in kbps, meaning kilobits of data per second. So the size of a 1-hour 1500 kbps video is 1500 × 60 × 60 = 5,400,000 kilobits = 5,400,000/8000 MB = 675 MB per hour of video data. Similarly, a 1-hour 1000 kbps video will be 450 MB, and a 600 kbps video 270 MB.

How is bitrate linked to the size of the video?

Video bitrate is generally expressed in kbps, meaning kilobits of data per second. So the size of a 1-hour 1500 kbps video is 1500 × 60 × 60 = 5,400,000 kilobits = 5,400,000/8000 MB = 675 MB per hour of video data. Similarly, a 1-hour 1000 kbps video will be 450 MB, and a 600 kbps video 270 MB.

Apple FairPlay DRM: Video Protection on iOS & Safari in 2024
https://www.vdocipher.com/blog/fairplay-drm-ios-safari-html5/
Mon, 03 Jun 2024 05:08:33 +0000

FairPlay DRM is the trusted studio-approved DRM for secure playback in iOS apps, iOS Safari, and macOS Safari. In this post, we present a complete guide for implementing Apple FairPlay DRM. FairPlay DRM protects videos from download and also stops screen capture of videos. The second half of the article explains the technology behind FairPlay DRM.

The content owner/distributor has to obtain the required license from Apple to use this. As your streaming partner, we provide the encryption and licensing service to use your FairPlay keys. The complete integration setup is handled directly by VdoCipher; you only need to apply for a license and get the keys.

What is FairPlay DRM?

FairPlay is Apple's DRM technology, used exclusively by Apple to stream content securely on iOS apps, iOS Safari, macOS Safari, and tvOS.

FairPlay Streaming (FPS) securely delivers encrypted content through HTTP Live Streaming (HLS) using the CBCS encryption scheme.

Apple Fairplay DRM prevents video download as well as ensures screen recording protection.

Apple Fairplay DRM Compatibility

Fairplay DRM is compatible with the following devices:

  • macOS Safari
  • iOS Safari (iOS 11.2 or later)
  • iOS apps: native apps are supported; web-view apps are not supported.

Difference between default VdoCipher encryption security and FairPlay DRM encryption: VdoCipher provides default encryption security for iOS and Safari to prevent downloads. Apple FairPlay DRM is approved by studios and has the additional advantage of preventing screen capture. VdoCipher helps customers apply to Apple for a FairPlay license and then integrates it for your videos without any extra steps needed on your side.

Features of Apple FairPlay DRM

Apple FairPlay DRM (Digital Rights Management) provides a robust framework designed to secure multimedia content across Apple devices. This system includes several key features that ensure high-security standards suitable for premium content, such as early-window Hollywood movies. Here’s an overview of the fundamental features of Apple FairPlay DRM and how they enhance content security:

Hardware DRM Support

Apple devices such as macOS, iOS, watchOS, and tvOS come equipped with hardware-level security, making them highly secure environments for deploying FairPlay DRM (FPS). Unlike other DRM systems like Widevine, which can also operate on Apple devices through browsers or SDKs, FPS offers native hardware DRM support. This hardware integration is crucial for securing premium content, as it provides a higher security level that is not readily available when using solutions like Widevine CDM SDK for iOS or Chrome’s Widevine on macOS.

Apple AirPlay Compatibility

One of the standout features of FPS DRM is its native support for Apple AirPlay, the technology that enables wireless streaming of content from Apple devices to Apple TV. This seamless compatibility allows FPS content to be easily played through AirPlay without requiring additional programming. Moreover, when FPS content is streamed to an Apple TV via AirPlay, the key delivery and decryption processes occur directly on the Apple TV. This maintains the same high security level as if the content were being played directly from the source device, such as an iPhone.

Download and Offline Playback

Starting from iOS 10, Apple has enhanced the FPS capabilities to include support for downloading and offline playback. This feature is particularly beneficial for users who wish to access content without an internet connection. The relevant APIs provided by Apple’s operating system facilitate the handling of downloading HLS content along with managing offline licenses. This ensures that even when content is accessed offline, it remains protected under the stringent security measures enforced by FPS DRM.

Supported Ecosystem of Apple FairPlay Streaming

To effectively deploy Apple FairPlay Streaming (FPS), it is essential to understand the supported platforms and versions that can leverage this DRM technology. Here is a detailed table outlining the compatibility across different Apple devices and operating systems:

Platform | Supported Version and Requirements
PC | macOS 10.10 or later: Safari browser
Mobile | iOS 9.0 or later: iOS native app; iOS 11.2 or later: iOS Safari browser; iPadOS 13.1 or later
Watch | watchOS 7 or later
OTT | Apple TV: tvOS 10.0 or later

Support Formats for Apple FairPlay

Apple FairPlay supports a variety of streaming formats and protocols to ensure broad compatibility and high-quality streaming experiences. Below is a table that summarizes the supported formats and protocols under the FairPlay DRM:

Type | Supported Formats/Protocols
Streaming | HLS, CMAF
Video Container | TS, fMP4
Video Codec | AVC (H.264), HEVC (H.265)
Audio Codec | AAC, MP3

How To Request Apple FairPlay DRM Production License?

IMPORTANT: Below are the key steps, but it is recommended to email us at support@vdocipher.com, and we will guide you through the procedure to apply to Apple for the license.

  1. Please go to the Apple FairPlay page.
  2. Click on the link to Request Deployment Package. You need to have a developer account before this.
  3. If you are an organization you should use the organization account for this purpose. Companies outside the USA need to obtain a DUNS number in order to create an organization account.
  4. After proceeding further, you should see a form to request the deployment package.
  5. Enter your company and content details. Please take our help (support@vdocipher.com) to ensure that Apple doesn't reject your use case, as it does in many cases.
  6. If asked, you can enter our name “VdoCipher” in “Streaming Distribution Partner Name”
  7. Confirm that you already have a “Keyserver module” setup and tested. You now need the “deployment package” for production.

Note that FairPlay DRM is only allowed for entities who own the content or have distribution rights to it. Apple only provides a FairPlay license when the video content is premium, i.e., can only be accessed after payment.

FairPlay Streaming: Key Components

Apple FairPlay Streaming (FPS) is an essential technology for securing digital rights management (DRM) on Apple devices. To implement FPS effectively, several critical components must be integrated within your content delivery architecture. Here’s a breakdown of these components and how they interact to protect your streaming content:

Key Server and Key Security Module (KSM)

At the heart of the FairPlay implementation is the Key Server, which is responsible for managing the encryption and decryption keys. These keys are crucial for the secure delivery of DRM-protected content. Content service providers have the option to integrate a Key Security Module (KSM) directly on their key server. Apple provides a KSM sample that can be referred to during implementation, facilitating the validation and secure transmission of key request data from clients.

Client Application

The client application is designed to run on various Apple operating systems, including iOS, tvOS, watchOS, and macOS. This FPS client app communicates with the key server to request the necessary decryption keys for accessing protected content. Developers can utilize Apple’s provided sample code to build their own FPS client applications or opt for a comprehensive FPS SDK from a DRM solution provider. This flexibility allows content service providers to tailor the client app to meet specific operational needs or user experiences.

FPS Content and Encryption

For content that utilizes FPS, it is essential to encrypt each HLS (HTTP Live Streaming) segment using the SAMPLE-AES encryption method. The standard encryption protocol used here is AES-128 CBCS. To manage this, content providers can employ tools like the Shaka Packager, which facilitates FPS packaging by adding the necessary KEY tag to the m3u8 playlist. This tag includes all pertinent information required for decrypting the HLS content, ensuring that the encryption aligns with Apple’s stringent security standards.
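
As a hedged illustration, the EXT-X-KEY tag that FPS packaging adds to the m3u8 playlist typically has the following shape. The `skd://` URI scheme and the `com.apple.streamingkeydelivery` KEYFORMAT are Apple's documented conventions; the asset ID here is a placeholder:

```python
# Hedged illustration: build the EXT-X-KEY tag a FairPlay packager
# writes into the m3u8 playlist. The asset ID is a placeholder.
def fairplay_key_tag(asset_id):
    return (
        '#EXT-X-KEY:METHOD=SAMPLE-AES,'
        f'URI="skd://{asset_id}",'
        'KEYFORMAT="com.apple.streamingkeydelivery",'
        'KEYFORMATVERSIONS="1"'
    )

print(fairplay_key_tag("example-asset-id"))
```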

How To Use Fairplay DRM Deployment Package?

You should have received an FPS_deployment package file from Apple. Open the zip file. You should find a PDF document titled: “FPSCertificateCreationGuide.pdf”.

This PDF describes the process of creating an RSA key pair and then getting the public key signed by Apple. The process also generates an ASK (Application Secret Key), a 32-character alphanumeric string associated with your FairPlay DRM. Once the process is complete, you can share your private key, challenge password, signed certificate, and the ASK.

Checklist before proceeding:

  1. Make sure you understand the overall process.
  2. Make sure your hardware and OS are stable and have power backup, so that the machine does not shut down unexpectedly. You cannot recreate the keys if anything goes wrong, so prepare for such events.
  3. In case of any issue, we are always there to help. If you need help with the key generation and signing process, we can offer guidance through a remote desktop session or Skype.
  4. Understand that it is your responsibility for the safe-keeping of generated keys.

How do we use the above keys?

The Apple FairPlay DRM is a multi-component system. It also requires us to maintain the media keys in our database.

– When the player loads, it requests the signed public certificate.
– The FairPlay DRM device uses the certificate to create a license request.
– The license servers can read the “license request” using the “private key” and corresponding challenge password.
– The ASK is used to create the license containing the content keys.

How do we store your keys?

– We have dedicated license servers and a licensing database separate from the rest of our infrastructure. The license database is heavily access-controlled.
– We save your encrypted private key for FairPlay DRM in Google Cloud Storage or AWS S3.
– Private keys and challenge passwords are only accessible from license servers.
– The challenge password and ASK are stored in a MySQL database, encrypted by a session key held in the license server application.
– The signed certificate is kept in a separate S3 bucket and is publicly readable from a CDN. The FairPlay DRM in the player will load this certificate on your website or mobile app.
– We have set up encrypted backups every 6 hours.

Safe-keeping

Although we take extreme care of your keys, we do not allow retrieval of the keys in the future. We expect you to safe-keep all your keys. You should maintain backups of the keys and ensure that they remain accessible only to authorised persons. As a checklist, here is a list of things to keep for FairPlay DRM.

  1. The private key (file)
  2. Challenge password (string)
  3. ASK (string)
  4. Signed certificate (file)

It is recommended not to trust your memory; keep all the files and associated passwords safely stored in digital format.

The steps for generating and signing keys

Step 1. Generating a key pair with private key (.pem) and signing request (.csr) files.

i) When asked to enter a challenge password, first write down the password somewhere safe.
ii) Copy it from there once.
iii) When asked for verification, type the password without pasting.

Note that when typing in the terminal, you should not see anything on the screen. That is how the terminal hides passwords.
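As a rough sketch of Step 1 using the OpenSSL CLI — the key size, digest, subject fields, and the passphrase `MyChallenge` below are placeholders; follow the exact parameters given in Apple's FPSCertificateCreationGuide.pdf for the real run:

```shell
# Generate an RSA private key, encrypted with a passphrase
# (this passphrase plays the role of the challenge password).
openssl genrsa -aes256 -passout pass:MyChallenge -out privatekey.pem 2048

# Create a certificate signing request (.csr) from the private key.
# The subject fields here are example values only.
openssl req -new -sha256 -key privatekey.pem -passin pass:MyChallenge \
  -out certreq.csr -subj "/CN=FPS Example/O=Example Org/C=US"

# Sanity check: the CSR should verify against its own signature.
openssl req -in certreq.csr -noout -verify
```

The resulting `certreq.csr` is what gets uploaded to Apple for signing; `privatekey.pem` and the passphrase are among the items to safe-keep.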

Step 2. Signing the key requires an active Apple developer program membership.

  • Follow the exact process as described in the PDF document provided by Apple.
  • You should receive the ASK, which you will need to type in. Make sure you have copied it to a safe place before typing it.
  • After proceeding, it should ask you to download the certificate file. (.cer)
  • The document should ask you to save the certificate in Keychain. This step is only for safekeeping. It does not affect any functionality.
Screenshot when FairPlay DRM ASK is created

Screenshot where FairPlay DRM signed certificate is downloaded

The process is now complete. In the end, you should have the following files safe:

  1. Private key file (.pem)
  2. Challenge password for the private key
  3. ASK
  4. Certificate file (*.cer)

Send your Apple Fairplay DRM keys to VdoCipher:

1. To share the above keys with us, use our email info [at] vdocipher.com. Do NOT use any other email address or CC another address on the email. This process ensures that the files and passwords remain within our systems.
2. You should delete the email from your email servers after receiving confirmation from us.

How To Publish Videos On Your Site/app with Fairplay DRM & VdoCipher

Once you have shared the keys with VdoCipher, we will integrate them with streaming for your account on the backend. You don't need to make any modifications to integrate VdoCipher. With our standard APIs or plugins, you can integrate our streaming player and enjoy secure embeds on your site or app.

What is the Technology Architecture behind Apple FairPlay DRM?

The security of the content stream lies in the way encrypted content is transferred over the internet in a highly secure manner with a black-boxed key exchange mechanism.

FairPlay DRM files are encrypted using the AES algorithm on mp4 container files. The security of any encryption technology lies in the openness or closedness of its key exchange mechanism. For FairPlay DRM, the decryption key is itself kept in encrypted form inside a closed-box environment. This closed box is highly secure because Apple controls the entire device and browser environment (Mac & iPhone). For the same reason, this DRM can't work on Android or Chrome, where Apple cannot implement such a hidden-box environment.

Here are some details of DRM + Streaming infrastructure with VdoCipher

  • Video Ingestion – You can upload videos through the dashboard, or using our upload APIs.
  • Video Transcoding –
    • Encoding videos to multiple sizes for different devices and network speeds.
    • Encrypting the video (CENC).
    • Video file packaging and key generation from the DRM license server
  • APIs or plugins for video management
  • Encrypted video files are streamed through Amazon CloudFront and Google Cloud Platform CDN edge locations to ensure fast video streaming
  • Secure Online Video playback
    • The embed code generates dynamic URLs (an HTTP POST request including the client secret key returns a unique OTP)
    • The unique OTP is then sent by the DRM server
    • The encrypted video file is decrypted in the browser/device's trusted environment. The video is rendered via the video player, which can switch across streams of different bitrates.
  • Multi-DRM: Content creators wishing to stream across all devices and software need a multi-DRM strategy. At VdoCipher we provide Widevine for Chrome and FairPlay for Apple devices, with Flash as a fallback. This multi-DRM solution ensures that content providers can fully rely on VdoCipher for distributing content on all devices.
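For illustration, the OTP step above is an authenticated HTTP POST to the VdoCipher API, which returns an OTP and playbackInfo pair that the player embed consumes. The shape below is a sketch based on VdoCipher's public API documentation; treat the endpoint path, header name, and `ttl` value as assumptions to verify against the current docs:

```
POST /api/videos/{videoId}/otp HTTP/1.1
Host: dev.vdocipher.com
Authorization: Apisecret <your-api-secret>
Content-Type: application/json

{"ttl": 300}
```

A successful response is a JSON object containing `otp` and `playbackInfo`, which are then passed to the player embed for playback.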

Key Features And Benefits of Apple Fairplay DRM

These are some of the most important features of FairPlay DRM and their benefits.

Hardware DRM support

This feature is similar to Widevine's L1 security: protection is enforced at the hardware level. It covers all client environments that are compatible with FairPlay DRM. Through this, you can ensure that screen recording is completely blocked.

Content Key Expiration

FairPlay DRM allows you to create expiring content keys, which let you permit playback for a limited period of time. A good example of this is online video rentals. You can also fix the number of simultaneous video playbacks for a single user account, restricting concurrent users similar to what Netflix does.

Offline Playback

FairPlay DRM supports the download and offline playback of videos through a native app. Apple provides the relevant APIs to handle downloading videos and managing the HLS content through offline licenses.

How Does FairPlay Streaming Work?

Let's look at how the various elements in FairPlay work with each other to stream encrypted content.

Apple FairPlay DRM Streaming Communication Sequence

  1. The user interacts with the video player on the content provider's app.
  2. The application then notifies AVFoundation about the video playback.
  3. AVFoundation downloads the HLS playlist for streaming.
  4. AVFoundation then checks the KEY tag in the HLS playlist to determine whether the video file is encrypted.
  5. After the confirmation, AVFoundation requests the encryption key from the AVFoundation app delegate.
  6. The app delegate in turn asks for Server Playback Context (SPC) data from AVFoundation.
  7. Upon receiving the SPC, the app delegate sends the SPC to the key server.
  8. After interpreting the SPC data through the KSM module, the key server retrieves the key from the key database.
  9. The key server then sends the key to the AVFoundation delegate in the form of a Content Key Context (CKC).
  10. The AVFoundation delegate then pushes the CKC data to AVFoundation.
  11. AVFoundation then uses the key to decrypt and securely stream the content.

FAQs on Apple FairPlay DRM iOS

By now you must have a fair idea of Apple FairPlay DRM on iOS and Safari video security. It is a must-have for video protection on Apple devices. However, if you still have any doubts and want to know more, here we have answered some frequently asked questions. These will give you more understanding of Apple FairPlay DRM on iOS:

Does Apple still use FairPlay?

Yes. Apple uses FairPlay DRM to secure its music content, and movie platforms also use FairPlay DRM to secure videos on Mac and iOS.

Does FairPlay DRM support Safari?

Yes, Apple FairPlay DRM supports highly secure playback in Mac Safari, iOS Safari, and iOS apps.

How can I get Fairplay License from Apple?

Please contact support@vdocipher.com for a detailed guideline from VdoCipher on applying and integrating Apple Fairplay DRM.

Does FairPlay DRM prevent video downloads?

Yes, FairPlay DRM prevents illegal video downloads because of its strong encryption.

Does FairPlay DRM prevent screen capture?

Yes, FairPlay DRM also blocks screen capture in Safari and iOS apps.

Is FairPlay DRM free?

Apple FairPlay DRM integration is typically handled by DRM companies like VdoCipher to ensure the highest security on iOS and Mac.

How to secure videos from piracy in an iOS app?

The highest security in an iOS app is ensured by integrating FairPlay DRM. VdoCipher provides integration for FairPlay DRM.

How to secure videos from piracy in iOS?

The highest security on iOS is ensured by integrating FairPlay DRM. VdoCipher provides integration for FairPlay DRM.


We’ve also written a blog on how to stream videos on iOS using AVPlayer, do check it out to know more about video streaming in iOS.

The post Apple FairPlay DRM: Video Protection on iOS & Safari in 2024 appeared first on VdoCipher Blog.

]]>
Flutter Video Streaming with Adaptive and Secure Playback https://www.vdocipher.com/blog/flutter-video-streaming/ Fri, 26 Apr 2024 16:55:43 +0000 https://www.vdocipher.com/blog/?p=16028 With the growth and acceptance of Flutter as a cross-platform development tool, complex demands like setting up video streaming solutions are also on the rise. Google has already taken care of the default plugin for video playback but it missed essential features for a smooth experience. To stream video with a Flutter plugin, you’ll need […]

The post Flutter Video Streaming with Adaptive and Secure Playback appeared first on VdoCipher Blog.

]]>
With the growth and acceptance of Flutter as a cross-platform development tool, complex demands like setting up video streaming solutions are also on the rise. Google provides a default plugin for video playback, but it misses essential features for a smooth experience. To stream video with the VdoCipher Flutter plugin, you integrate it into your Flutter project, ensuring secure and DRM-protected video delivery. The key benefit over basic video plugins is not only security but also features like dynamic watermarking, offline playback, advanced analytics, a global CDN, and multi-device compatibility. We will also discuss Flutter video streaming integration in easy steps, but let us start with an overview of Flutter.

What is Flutter?

Flutter is an open-source UI software development kit created by Google. It’s used for developing cross-platform applications from a single codebase, meaning you can create apps for Android, iOS, Linux, Mac, Windows, Google Fuchsia, and the web from the same source code. Flutter enables developers to deliver high-performance, natively compiled applications with a rich set of pre-designed widgets and tools that make it easier to build visually attractive and smoothly interactive user interfaces.

VdoCipher empowers course creators, event organizers and broadcasters with expert live video streaming, ensuring smooth playback globally.

Key aspects of Flutter include:

  • Dart programming language: Flutter uses Dart, which is optimized for fast apps on any platform.
  • Widgets: Everything in Flutter is a widget, from a simple text to complex layouts. Widgets describe what their view should look like given their current configuration and state.
  • Hot Reload: This feature allows developers to see the effects of their changes almost instantly, without losing the current application state. It significantly speeds up the development process.
  • Rich animation libraries: These make it easy to add smooth and complex animations to your app, enhancing the user experience.

Why is Flutter gaining popularity?

Flutter is gaining popularity and being used by developers worldwide for several reasons:

  • Cross-platform development: Flutter allows for code reusability across multiple platforms, which saves significant development time and resources.
  • Performance: Applications built with Flutter are compiled to native code, which helps achieve performance that is comparable to native applications.
  • Productivity: With features like Hot Reload, developers can make changes to the codebase and see the results instantly, which greatly improves the development workflow.
  • UI Flexibility and Customization: Flutter’s widget-based architecture enables the creation of complex and custom UI designs, making it easier to bring creative ideas to life without being limited by the framework.
  • Growing community and support: Being an open-source project, Flutter has a rapidly growing community and a wealth of resources, including documentation, tutorials, and third-party packages, which provide additional functionality and make development easier.
  • Google’s backing: Flutter benefits from strong support from Google, ensuring continuous updates, improvements, and the addition of new features.

Steps required for Live Streaming in Flutter

From video capture to broadcasting, live streaming in a Flutter application involves a series of steps. If you are looking to integrate live streaming directly into your Flutter app via embedding, a third-party provider like VdoCipher is the way to go. Otherwise, here's a simplified breakdown of the process.

  • Capture – The live video and audio are captured using a streaming device’s camera and microphone. Flutter has a ‘camera’ package for this purpose. It has tools to get the list of available cameras, display a preview from a specific camera, and record. Doc – https://docs.flutter.dev/cookbook/plugins/picture-using-camera
  • Encode – The captured raw video and audio data is encoded into a format suitable for transmission over the internet. It compresses the media size to reduce bandwidth requirements and facilitate easy transmission. Packages like flutter_ffmpeg can be used for encoding media into various formats.
  • Transmit – The encoded video and audio are sent to a streaming server or service. This server is responsible for receiving the live feed from your app. You might use packages like ‘flutter_rtmp_publisher’ to send the stream to an RTMP server or flutter_webrtc if you are using WebRTC for real-time streaming.
  • Transcoding – Once the stream reaches the server, it undergoes transcoding. This process involves decoding the incoming stream to a raw format and converting the stream into multiple formats, resolutions, and bitrates. This is essential for adaptive bitrate streaming, which allows the stream quality to dynamically adjust based on each viewer’s internet speed and device capabilities.
  • Distributing the stream – The transcoded streams are then packaged into different formats (like HLS or DASH) and distributed to viewers via content delivery networks (CDNs). This step is mostly handled by the streaming server or platform and doesn't require direct handling within the Flutter app.
  • Playback – Stream viewers will see the live video and hear the audio on their devices. You can use the ‘video_player’ plugin to play videos stored on the file system, as an asset, or from the internet. Doc link – https://docs.flutter.dev/cookbook/plugins/play-video

Live Streaming Protocols in Flutter

For live streaming in Flutter apps, the choice of streaming protocols depends on the application requirements like compatibility, latency, and scalability. The commonly used protocols are:

  • HLS (HTTP Live Streaming) – HLS streaming in Flutter works via plugins and packages such as ‘video_player’ or ‘flutter_hls_parser’.
  • DASH – DASH can be implemented in Flutter through various media player libraries supporting DASH streaming, ensuring compatibility with a range of devices, and providing adaptive streaming capabilities.
  • RTMP (Real-Time Messaging Protocol) – While native support for RTMP might not be extensively available in Flutter, third-party plugins like flutter_rtmp_publisher can be used to send streams to RTMP servers. For playback, packages that interface with native video players that support RTMP can be utilized.
  • WebRTC (Web Real-Time Communication) – Flutter strongly supports WebRTC via plugins like ‘flutter_webrtc’, to implement real-time, peer-to-peer streaming.
| Protocol | Key Features | Typical Use Cases |
|---|---|---|
| HLS | Adaptive bitrate streaming | Varied network conditions, general streaming |
| DASH | Adaptive bitrate streaming | Varied network conditions, general streaming |
| RTMP | Low latency streaming | Live auctions, interactive broadcasts |
| WebRTC | Very low latency, peer-to-peer connections | Live collaborative tools, video conferencing apps |

How to Stream Videos in Flutter Player?

Playing videos in a Flutter application involves using the video_player plugin, which provides a widget to display video content. The plugin supports both network and asset videos, giving you flexibility in how you incorporate video playback into your app.

Step 1: Add the video_player dependency

First, you need to add the video_player plugin to your pubspec.yaml file:

```yaml
dependencies:
  flutter:
    sdk: flutter
  video_player: ^latest_version
```

Replace ^latest_version with the latest version of the video_player plugin available on pub.dev.

Step 2: Import the package

Import the video_player package into your Dart file where you want to play the video:

```dart
import 'package:video_player/video_player.dart';
```

Step 3: Initialize the VideoPlayerController

Create a VideoPlayerController and initialize it with a video source. This can be a network URL or a local asset. For this example, we’ll use a network video:

```dart
late VideoPlayerController _controller;

@override
void initState() {
  super.initState();
  _controller = VideoPlayerController.network(
    'https://www.example.com/video.mp4', // Replace with your video URL or asset path
  )..initialize().then((_) {
      setState(() {}); // Ensure the first frame is shown after the video is initialized
    });
}
```

Step 4: Display the video

Use the VideoPlayer widget to display the video controlled by your VideoPlayerController. You can also add controls with the VideoProgressIndicator widget:

```dart
@override
Widget build(BuildContext context) {
  return Scaffold(
    body: Center(
      child: _controller.value.isInitialized
          ? AspectRatio(
              aspectRatio: _controller.value.aspectRatio,
              child: VideoPlayer(_controller),
            )
          : CircularProgressIndicator(), // Show loading spinner until the video is initialized
    ),
    floatingActionButton: FloatingActionButton(
      onPressed: () {
        setState(() {
          if (_controller.value.isPlaying) {
            _controller.pause();
          } else {
            _controller.play();
          }
        });
      },
      child: Icon(
        _controller.value.isPlaying ? Icons.pause : Icons.play_arrow,
      ),
    ),
  );
}
```

Step 5: Dispose the controller

It’s important to dispose of the VideoPlayerController when it’s no longer needed to free up resources:

```dart
@override
void dispose() {
  // Dispose the controller first to free resources, then let the framework clean up.
  _controller.dispose();
  super.dispose();
}
```

This basic setup allows you to play videos in your Flutter application. You can customize the UI and controls further based on your app’s requirements. Remember to check the documentation for the video_player plugin on pub.dev for more advanced features and updates.

Considerations for Smooth and Resource Efficient Video Streaming in Flutter

For video streaming in Flutter, especially when dealing with a large user base or large video files, there are additional considerations to ensure smooth playback and efficient resource usage. Streaming video efficiently requires careful handling of video data, possibly adapting to different network conditions, and ensuring that your app can handle video data streams without causing performance issues. Here are steps and considerations for setting up video streaming in Flutter:

  • Choosing the Right Video Streaming Format – HLS (HTTP Live Streaming) and DASH (Dynamic Adaptive Streaming over HTTP) are the most common formats for video streaming. These formats allow for adaptive bitrate streaming, which adjusts the video quality in real-time based on the user’s internet speed, ensuring smoother playback under varying network conditions.
  • Use a Video Player that Supports Streaming – Ensure that the video player package you choose supports the streaming format you plan to use. The video_player plugin can handle network videos, but for advanced streaming features like adaptive bitrate streaming (HLS or DASH), you might need a more specialized plugin. For Flutter, plugins like chewie (which is a wrapper around video_player) or flutter_video_player (not to be confused with video_player) might offer additional functionality or ease of use for streaming scenarios.
  • Implementing Adaptive Bitrate Streaming – If your video content is available in multiple qualities, implement adaptive bitrate streaming to dynamically adjust the video quality based on the current network speed.
  • Pre-buffering and Caching Strategies – Implement pre-buffering to start loading the video a few seconds before playback begins. This can help avoid initial buffering delays. Consider caching parts of the video as they are streamed. Caching can reduce data usage for videos that are watched multiple times and improve playback start times. Be mindful of device storage limitations.
  • Handling Network Fluctuations – Monitor network state changes and adjust the video quality accordingly. You may also need to implement custom logic to pause, buffer, or alert the user depending on the network conditions.
  • Testing Across Devices and Network Conditions – Test your streaming implementation across a range of devices with different capabilities and screen sizes. Simulate various network conditions (e.g., 3G, 4G, WiFi, and low signal areas) to ensure your app provides a consistent and smooth video playback experience.
  • Legal and DRM Considerations – If you’re streaming copyrighted content, ensure you have the rights to do so. Additionally, consider implementing Digital Rights Management (DRM) to protect the content. Flutter plugins like flutter_video_encrypt can help with video encryption, but DRM often requires more complex solutions like VdoCipher.
  • Using a Video Streaming Service – For complex streaming needs, consider using a third-party video streaming service like VdoCipher. These services can handle video encoding, DRM, dynamic watermarking, adaptive streaming, and provide a content delivery network (CDN) to ensure fast and reliable video delivery worldwide.

Top Video Streaming Flutter Players

In the Flutter ecosystem, there are several video plugins, each offering unique features:

  • Vdocipher_flutter: The vdocipher_flutter plugin supports video streaming in Flutter applications, providing a way to serve content with Hollywood-grade DRM security to prevent video piracy. It enables video playback functionality by leveraging native libraries based on the platform, supporting Android, iOS, and web.
  • video_player: The official Flutter plugin for video playback, supporting both network and asset videos but without DRM protection.
  • chewie: Provides a wrapper around video_player for a customizable video player experience, including fullscreen and playback controls, without built-in DRM.
  • flutter_video_info: Extracts video metadata but does not handle playback. Useful for managing video files within an app.
  • better_player: An advanced video player based on video_player, offering extended functionalities like HLS, DASH, and more customization options, though DRM support is limited compared to VdoCipher.

Each plugin caters to different requirements, from simple playback to complex video management needs, with VdoCipher standing out for its DRM and security features.

| Feature | VdoCipher | video_player | chewie | better_player |
|---|---|---|---|---|
| DRM Protection | Yes | No | No | Limited |
| Encrypted Streaming | Yes | No | No | No |
| Dynamic Watermarking | Yes | No | No | No |
| Offline Playback | Yes | Yes | Yes | Yes |
| Customizable Player | Yes | Limited | Yes | Yes |
| Analytics | Advanced | No | No | Limited |
| Live Streaming Support | Yes | No | No | Yes |
| Global CDN | Yes | No | No | No |
| Multi-Device Compatibility | Yes | Yes | Yes | Yes |

How to Stream Videos in Flutter using VdoCipher

To stream video with the VdoCipher Flutter plugin, you’ll integrate it into your Flutter project, ensuring secure and DRM-protected video delivery. The plugin offers features like offline playback, player customization, and encrypted streaming. The key benefit over basic video plugins is its focus on security, making it ideal for content creators needing to protect their videos from piracy. For implementation, you’ll use the VdoPlayer widget, initializing it with your video’s OTP and playback info. By choosing VdoCipher over more basic video plugins, you benefit from enhanced security measures, support for DRM, and a tailored solution for protected content distribution.

Usage

To use VdoCipher in Flutter, first add the dependency to pubspec.yaml. Initialize VdoPlayerController with a video ID, OTP, and playback info obtained from the VdoCipher API. Then, use VdoPlayer widget for display. This plugin offers DRM protection, ensuring content security beyond what basic video plugins provide. It’s ideal for applications requiring stringent content protection, offering features like offline playback and player customization.

```dart
class PlayerView extends StatefulWidget {
  PlayerView({super.key});

  @override
  State<PlayerView> createState() => _PlayerViewState();
}

class _PlayerViewState extends State<PlayerView> {
  VdoPlayerController? _controller;

  @override
  Widget build(BuildContext context) {
    EmbedInfo embedInfo = EmbedInfo.streaming(otp: "YOUR_OTP", playbackInfo: "YOUR_PLAYBACK_INFO");
    return VdoPlayer(
      embedInfo: embedInfo,
      aspectRatio: 16 / 9,
      onError: (error) {},
      onFullscreenChange: (isFullscreen) {},
      onPlayerCreated: _onPlayerCreated,
    );
  }

  _onPlayerCreated(VdoPlayerController? controller) {
    setState(() {
      _controller = controller;
    });
    _controller?.addListener(() {});
  }
}
```

For detailed implementation, refer to the plugin's official guidelines.

FAQs

What is Flutter video streaming?

Flutter video streaming involves using Flutter plugins to play video content directly from the internet without downloading the entire file first.

Can I use DRM with Flutter for video streaming?

Yes, using plugins like VdoCipher, you can implement DRM (Digital Rights Management) to protect your video content in Flutter applications.

Is it possible to customize video players in Flutter?

Absolutely. Many Flutter video plugins offer customizable video players, allowing you to adjust controls, appearances, and behaviors to fit your app’s design.

How do I handle video streaming in poor network conditions?

Consider plugins that support adaptive bitrate streaming, which adjusts video quality based on the user’s current network speed to ensure smooth playback.

References

  • Flutter Video Plugin – link
  • VdoCipher Flutter Plugin – pub.dev
  • Flutter Wikipedia – link
  • Flutter Chewie Plugin – link

The post Flutter Video Streaming with Adaptive and Secure Playback appeared first on VdoCipher Blog.

]]>
HLS DRM, HLS Streaming & HLS Encryption for Content Security https://www.vdocipher.com/blog/2017/08/hls-streaming-hls-encryption-secure-hls-drm/ https://www.vdocipher.com/blog/2017/08/hls-streaming-hls-encryption-secure-hls-drm/#respond Wed, 24 Apr 2024 09:53:47 +0000 https://www.vdocipher.com/blog/?p=2111 HTTP Live Streaming (HLS streaming), developed by Apple, was designed to replace the Flash player on iPhones. HLS is adaptive to network conditions, making it a favored protocol among streaming services. It automatically adjusts to different screen sizes and the bandwidth available on a user’s network, which enhances viewing experiences across various devices. Supported by […]

The post HLS DRM, HLS Streaming & HLS Encryption for Content Security appeared first on VdoCipher Blog.

]]>
HTTP Live Streaming (HLS streaming), developed by Apple, was designed to replace the Flash player on iPhones. HLS is adaptive to network conditions, making it a favored protocol among streaming services. It automatically adjusts to different screen sizes and the bandwidth available on a user’s network, which enhances viewing experiences across various devices. Supported by HTML5 video players, HLS enables streaming at the optimal bitrate for a user’s connection without interrupting playback. This feature is crucial for video content, as it allows seamless scaling of video quality.

What is HLS Streaming?

HLS streaming (HTTP Live Streaming) is a video streaming protocol used to deliver video content across desktop and mobile devices. HLS was developed by Apple, whose devices form the biggest use case for the streaming protocol. Beyond Apple, there is wide support for HLS streaming across Android devices and browsers. Indeed, HLS can be used as a streaming protocol for all major browsers, including Chrome and Firefox.

In HLS encryption, the video files are encrypted using the secure AES-128 algorithm. AES is a publicly available security algorithm that the NSA uses for encrypting its top-secret classified information.
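Conceptually, HLS AES-128 encrypts each media segment with AES-128 in CBC mode, using a key (and IV) that the player fetches separately from the segments. A minimal sketch of that operation with the openssl CLI, using a dummy "segment" and a random key and IV (all file names are placeholders, not a real packager workflow):

```shell
# Create a dummy segment and a random 128-bit key and IV.
printf 'fake transport stream payload' > segment0.ts
KEY=$(openssl rand -hex 16)
IV=$(openssl rand -hex 16)

# Encrypt the segment the way an HLS packager would (AES-128-CBC).
openssl aes-128-cbc -e -in segment0.ts -out segment0.enc -K "$KEY" -iv "$IV"

# A player holding the key can decrypt it back to the original bytes.
openssl aes-128-cbc -d -in segment0.enc -out segment0.dec -K "$KEY" -iv "$IV"
```

In real HLS AES-128, the playlist's EXT-X-KEY tag tells the player where to fetch the key, and each segment's IV is either listed in the playlist or derived from the segment's sequence number.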

HLS streaming and HLS encryption can be used both for live streaming and for video-on-demand (VOD) streaming. Because video streaming is over HTTPS, there is no need for a dedicated streaming server, unlike RTMP, which requires its own streaming server.

The HLS streaming protocol is not blocked by firewalls, unlike the RTMP streaming protocol.

How & Why Apple Developed HLS Streaming?

Until about 2010, Flash was the most popular video streaming application. It was supported by all desktop browsers. Because Flash utilized the same runtime across all browsers, video streamers did not have to create separate workflows for different devices. DRM and encryption were also supported by Flash.

Flash was, however, plagued by security issues. Video playback on Flash was processor-intensive, which caused phones to overheat and mobile batteries to drain very fast. For these reasons, Apple did not support Flash on the iPhone and iPad, instead including support for native HTML5 video playback.

Apple created its own specifications for video streaming, which could be used by both live streaming platforms and pre-recorded video streaming platforms. Android OS followed suit by blocking Flash playback from browsers on Android. From the introduction of the smartphone to the emergence of MPEG-DASH around 2015, Apple's HLS streaming has been the most widely used protocol.

Because of Apple's continued support for the protocol, encoding for an HLS player is an integral element of any video streaming provider's workflow.

VdoCipher empowers course creators, film makers and trainers with multi-DRM protected video streaming, ensuring piracy protection and smooth playback globally.

How does HLS streaming work?

In plain vanilla HTML5 video streaming, only a single video file is available for streaming. The download of the complete video file is initiated every time the stream is played. Even if a viewer watches only 2 minutes of a 30-minute video, the full video would be downloaded, causing data wastage at both the server and the user end.

Streaming protocols remove this inefficiency in video streaming. Streaming protocols such as HLS effectively break down a video file into multiple chunks when streaming, and these video files are downloaded over HTTP in succession. HLS streaming uses the same workflow for both live and for on-demand content. The core idea in multi-bitrate streaming is that multiple renditions of each video, of varying resolution, are encoded. High-resolution videos are delivered to large screen devices having high network bandwidth, whereas lower resolution videos are encoded for mobile phones. Encoding for low resolutions also ensures continuous video streaming when the network connection speed drops.
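The rendition ladder described above is published to the player as a set of variant streams in a master playlist. A minimal, illustrative master playlist might look like the following (the bandwidths, resolutions, and paths are hypothetical examples, not actual VdoCipher output):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=1280x720
720p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1920x1080
1080p/index.m3u8
```

The player reads the BANDWIDTH attribute of each entry and picks the rendition that fits the viewer's current connection.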

Progressive streaming using HLS AES-128 Protocol

When the user decides to change video resolution, or when the network bandwidth changes, video streams can be manually (or automatically) switched. HLS video streams are encoded using the H.264 standard, which can be played across all devices. Each of the video copies is broken into multiple chunks having the .ts (transport stream) extension.

There is a main index file, called the manifest file (.m3u8 format), associated with the video stream. The main manifest file contains links to the specific manifest files associated with each unique video stream. Each of these specific manifest files in turn directs the video stream to the correct URL for playback when streams are switched, ensuring that stream switching is seamless. This process of a manifest file referring to the video stream is the same for both live and on-demand video streaming; the only difference for live video is that the video files are being encoded in real time.

Streaming over HTTP has many advantages over using a separate streaming server. For example, firewalls that may block the ports used by RTMP are unlikely to affect video streaming over HTTP, and no additional costs are incurred for running a dedicated streaming server.

Mobile Video Streaming Using HLS Protocol

When users upload a video to a server, it undergoes several phases of processing. Initially, the video is encoded in various resolutions and then segmented into containers, where each segment is indexed in the M3U8 format. This index file is crucial as it is hosted on a server and accessed by mobile applications to retrieve video chunks.

Server Components
Key elements of the server include the encoder and segmenter. The encoder receives the input stream during video upload and encodes it into different formats such as H.264 + MP3 and MPEG-2, creating multiple output streams. These streams are then passed to the segmenter, which divides them into video chunks and generates corresponding index files. Each stream has its distinct index file.

M3U8 File Format
The M3U8 file format is essential for indexing multimedia files. It contains pointers to the locations of video files saved with a .ts extension. These index files are generated by the segmenter and also specify the duration of video chunks, typically set to 10 seconds. They enable dynamic switching between video streams depending on the user’s network bandwidth. The client software autonomously decides the optimal times to switch streams based on network conditions.
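An illustrative media playlist for a single rendition, using hypothetical segment names and the typical 10-second chunk duration mentioned above, might look like this:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
#EXTINF:10.0,
segment2.ts
#EXT-X-ENDLIST
```

Each #EXTINF entry gives the duration of the .ts chunk that follows it, and #EXT-X-ENDLIST marks the end of the stream for on-demand content.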

Video Streaming through HLS protocol

Mobile Application Interaction
Mobile applications retrieve the M3U8 index file from the server, which directs them to the required video streams. The application downloads these streams in a sequential manner, and playback begins once enough segments are buffered. As one index file is exhausted, the application proceeds to scan the next until the ‘endlist’ tag is reached.
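As a sketch of that sequential scan, the following simplified parser (a hypothetical example, not a production HLS client) collects segment URIs from a media playlist until the endlist tag is reached:

```python
def parse_media_playlist(m3u8_text):
    """Collect segment URIs from a media playlist, stopping at EXT-X-ENDLIST."""
    segments = []
    for line in m3u8_text.splitlines():
        line = line.strip()
        if line == "#EXT-X-ENDLIST":
            break  # 'endlist' tag reached: no more segments to fetch
        if line and not line.startswith("#"):
            segments.append(line)  # non-tag lines are segment URIs
    return segments

playlist = """#EXTM3U
#EXT-X-TARGETDURATION:10
#EXTINF:10.0,
seg0.ts
#EXTINF:10.0,
seg1.ts
#EXT-X-ENDLIST"""

print(parse_media_playlist(playlist))  # → ['seg0.ts', 'seg1.ts']
```

A real player would then download each URI in order and hand the buffered segments to the native media player.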

System Implementation
The development of the mobile application is geared towards enabling users to share videos seamlessly. Users upload videos, which are then encoded and segmented by the HLS server into streamable video slices saved in .ts format. Index files in M3U8 format are generated and uploaded to a storage database. When a user wishes to watch a video, the application sends a request to the server to retrieve the video from storage, and the video plays smoothly via the device's native media player API.

In conclusion, HLS protocol facilitates the streaming of high-quality videos that adapt to varying network conditions. By managing video segments through a manifest file, the mobile application ensures that users can access the best possible video quality based on their current network environment, providing a robust and uninterrupted streaming experience.

What is HLS Encryption? Is HLS Encryption effectively secure against piracy?

HLS AES-128 encryption refers to video streams using the HLS streaming protocol in which the video files are encrypted with the AES-128 algorithm. The key exchange happens over the secure HTTPS protocol. If done in a rudimentary way, the decryption key can be seen from the network console by accessing the manifest file. In such a poor implementation, plugins can automatically find the key and decrypt the HLS encrypted stream, rendering the video security ineffective.
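For illustration, a rudimentary encrypted playlist advertises the key location directly through an EXT-X-KEY tag (the URI below is hypothetical). Anyone who can read the manifest can fetch the key from that URI, which is exactly the weakness described above:

```
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-KEY:METHOD=AES-128,URI="https://example.com/keys/video.key"
#EXTINF:10.0,
segment0.ts
```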

Basic HLS Encryption where the key is in the manifest file

There are however methods to strengthen the HLS Encrypted stream. The challenge is to make sure that the key is not exposed directly. These are the options for additional security in HLS Encryption:

  • Not including URL to decryption key in Manifest File

Implementations of this vary widely and are quite involved. This method of protecting HLS content may also cause compatibility issues on some devices. If done properly, however, it is a major improvement in video security.

  • Using authenticated cookies for HLS Encryption streaming

In this method, the browsers of authorized users store authentication cookies. These cookies carry a digital signature to ensure that they are not tampered with, so that only the authorized user (and not some external plugin) can fetch content. The following workflow is used to configure authentication cookies for HLS encryption:

  1. Trusted signers, who have permission to create authentication cookies, are configured at the edge location (content delivery network)
  2. An application is developed to send Set-Cookie headers to authorized viewers
  3. Authorized users store name-value pairs in the cookie
  4. When a user requests protected content, the browser adds the name-value pairs in the cookie header to the request
  5. The video CDN uses the public key to verify the digital signature in the name-value pair
  6. If the authentication cookie is verified, the CDN checks the cookie's policy statement. The policy statement determines whether the access request is valid. For example, it could include the beginning and end time of cookie validity.
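To illustrate the signing idea only, here is a minimal sketch in Python. Real CDNs such as CloudFront use RSA key pairs and their own policy format; the HMAC, secret, and field names below are simplified assumptions:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"trusted-signer-secret"  # hypothetical key held by the trusted signer

def make_cookie(policy: dict) -> dict:
    """Build name-value pairs: the policy plus a digital signature over it."""
    payload = json.dumps(policy, sort_keys=True).encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"Policy": base64.b64encode(payload).decode(), "Signature": signature}

def verify_cookie(cookie: dict) -> bool:
    """CDN side: recompute the signature and reject tampered cookies."""
    payload = base64.b64decode(cookie["Policy"])
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cookie["Signature"])

cookie = make_cookie({"resource": "/videos/*", "expires": 1700000000})
print(verify_cookie(cookie))   # True for an untampered cookie
cookie["Signature"] = "0" * 64
print(verify_cookie(cookie))   # False once tampered with
```

The CDN-side check corresponds to steps 5 and 6 above: verify the signature first, then evaluate the policy statement carried in the cookie.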

Advanced HLS Encryption, using authentication cookies/ signed URLs
For further information on authentication cookies for content protection, you can have a look at Amazon Cloudfront’s documentation.

  • Signed URLs can be generated for authorized users

The following workflow is used for configuring signed URLs for HLS encryption:

  1. In the CDN, trusted signers are created who have permission to create signed URLs
  2. An application is developed to create signed URLs for protected content
  3. When a user requests protected content via a signed URL, the application verifies that they are authorized to access it
  4. If verified, the application creates a signed URL and sends it to the requesting user
  5. When content is accessed through the signed URL, the CDN verifies that the URL has not been tampered with, by using the public key to verify the URL's digital signature
  6. If the signed URL is verified, the CDN checks the signed URL's policy statement. The policy statement determines whether the access request is valid. For example, it could include the beginning and end times of the URL's validity. To protect content, this validity period should be short: as little as a few minutes is optimal. For this you can create dynamic URLs that change every few minutes.
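The signed URL workflow can be sketched in the same spirit; here an HMAC stands in for the CDN's actual public-key signature, and all paths, parameter names, and values are hypothetical:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"trusted-signer-secret"  # hypothetical signing key

def sign_url(path, valid_for=300, now=None):
    """Create a short-lived signed URL: expiry timestamp plus signature."""
    expires = (now if now is not None else int(time.time())) + valid_for
    sig = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?{urlencode({'Expires': expires, 'Signature': sig})}"

def check_url(path, expires, signature, now=None):
    """CDN side: reject if the URL is tampered with or past its validity window."""
    if (now if now is not None else int(time.time())) > expires:
        return False
    expected = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

url = sign_url("/videos/lesson1.m3u8", valid_for=300, now=1000)
query = dict(p.split("=") for p in url.split("?")[1].split("&"))
print(check_url("/videos/lesson1.m3u8", int(query["Expires"]), query["Signature"], now=1100))  # True
print(check_url("/videos/lesson1.m3u8", int(query["Expires"]), query["Signature"], now=2000))  # False: expired
```

Keeping `valid_for` small is what makes these URLs "dynamic": a leaked URL stops working within minutes.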

For further information on signed URLs for content protection, you can have a look at Amazon Cloudfront’s documentation.

All three of these methods make the video stream considerably more immune to direct download through plugins. However, they can still be broken with freely available code and technical hacks.

Technical overview of HLS Streaming

HLS streaming or HTTP live streaming is a video streaming protocol to stream audio and video across all major browsers and devices. Here’s a brief overview of how HLS Streaming works.

Source and Encoding: Video content can be either live or recorded and is first encoded into all the relevant formats and qualities. The video is also compressed to ensure it can be streamed easily, as raw files are usually quite large.

Segmentation and Multiple Formats: After encoding, the video is split into segments of about 10 seconds each. Segmentation makes it easier to switch between video qualities, which is done dynamically based on the user's internet speed.

Creation of M3U8 Playlist: An M3U8 playlist file is created, which contains information about the video segments of all the different qualities. It guides the player to pick the right segment based on internet speed.

Delivery and Adaptation: All the video chunks and the M3U8 playlist are saved on an HTTP server. When someone streams the video, the playlist is downloaded first, followed by the video chunks in sequence. As the internet speed changes, the player switches to higher or lower-quality chunks accordingly, ensuring a smooth viewing experience with minimal buffering.
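The adaptation step amounts to choosing the best rendition that fits the measured bandwidth. A simplified sketch (the bitrate ladder and headroom factor are illustrative assumptions, not how any particular player implements it):

```python
def pick_rendition(ladder_kbps, measured_kbps, headroom=0.8):
    """Choose the highest bitrate that fits within a fraction of measured bandwidth."""
    affordable = [b for b in ladder_kbps if b <= measured_kbps * headroom]
    return max(affordable) if affordable else min(ladder_kbps)  # fall back to lowest

ladder = [400, 800, 1400, 2800]  # kbps renditions, illustrative values
print(pick_rendition(ladder, 2000))  # → 1400 (2800 exceeds 2000 * 0.8)
print(pick_rendition(ladder, 300))   # → 400 (lowest rendition as fallback)
```

The headroom factor leaves margin for bandwidth fluctuations so the buffer does not run dry the moment throughput dips.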

Key Benefits of HLS Streaming

Here are the key benefits of HLS (HTTP Live Streaming):

Compatibility: HLS is compatible with almost all browsers and devices. Initially, HLS was limited to Apple devices, but it is now supported by a much broader range of browsers.


Adaptive Streaming: HLS ensures a smooth viewing experience by scaling the video quality up or down based on the user's internet speed, so that there is no buffering at lower speeds.

Live and On-Demand: HLS works for both live and recorded streaming. This makes it pretty versatile for streaming different types of content. 

Security: HLS uses AES-128 encryption to encrypt the video chunks, protecting videos from unauthorized access.

Scalability: HLS scales well for delivering live and recorded content across global CDNs. These CDNs distribute the streaming load among various servers, efficiently handling sudden increases in viewers, such as unexpectedly large live audiences, and ensuring a stable streaming experience.

Compare HLS with other streaming protocols

Comparing HLS (HTTP Live Streaming) to other video streaming protocols:

MPEG-DASH:

MPEG-DASH is similar to HLS in providing adaptive streaming, but it is more flexible with codecs and containers. HLS is more widely supported, especially on Apple devices, while MPEG-DASH is gaining popularity due to its open-standard nature.

RTMP (Real-Time Messaging Protocol): 

RTMP is older and great for low-latency streaming, like live broadcasts. However, it doesn’t support adaptive streaming and is less compatible with modern devices. HLS, while having slightly higher latency, offers better device compatibility and adaptive streaming.

Microsoft Smooth Streaming: 

This is Microsoft’s version of adaptive streaming. It works well with Microsoft devices and software. However, HLS has wider support across various platforms and devices compared to Microsoft Smooth Streaming.

HDS (HTTP Dynamic Streaming): 

HDS is Adobe’s streaming protocol. It’s similar to HLS in adaptive streaming but is less common. HLS has broader support and is more widely used than HDS.

How is DRM level security for HLS Encryption possible?

DRM requires that the key exchange and licensing mechanism be highly secure and always out of reach of external tools and hackers. A DRM technology also has additional elements: it delivers a license file that specifies the usage rights of the viewer, i.e. the conditions under which video playback is allowed.
Enforcement of these usage rights ensures that the signed key used for decryption can only be used for playback on the viewer's device. The key would simply fail to decrypt the video stream if the video file were copied to any other device.

DRM adds complex layers of workflow for license management. This workflow includes:

Specifying highly detailed usage rights such as

  • Limiting video playback on a device to only a fixed number of times
  • Video access can expire after a period of days if the subscription is not renewed
  • Limiting the device or screen on which the video can be played. For example, usage rights can be used to prevent users from casting their video playback to an external device such as a smart TV.

The license database is also bound to the user's device, which means that, if shared, the license and decryption key become useless.

Licenses are also signed with a digital signature, which means they cannot be tampered with, either in transit over HTTP or when stored locally on the device.

Implementing DRM along with HLS streaming entails considerable modification of the HLS Encryption infrastructure. At VdoCipher, we have been able to do that and provide a full-fledged proprietary + HLS DRM. We cannot technically say that we are streaming an HLS encrypted stream, as it is highly modified. We use a combination of other technologies on different platforms and are able to roll out a cross-device, cross-browser compatible DRM.

VdoCipher HLS DRM Infrastructure Details

  1. Upload of Videos (all common formats are supported)
    Content can be uploaded through the Dashboard or APIs. Uploads from desktop, FTP, Dropbox, Box, URL, and server are all supported.
  2. Encryption & Transcoding for DRM streaming
    Videos are converted into encrypted files, and multiple qualities & versions for ensuring delivery of quality content at all devices, browsers, and all connection speeds. The encrypted content is stored at our AWS S3 servers and raw videos are never exposed. We have set up our custom EC2 instances for the encoding pipeline, and the resultant files are hosted securely on AWS S3 servers.
  3. Encrypted Video Streaming (Modified HLS Encryption & Streaming)
    As discussed above, the high-security key and license exchange mechanism supports the transfer of encrypted video data, ensuring HLS DRM level security. Dynamic URLs ensure that each playback is authenticated and that the URL cannot be extracted outside the website or app for pirated playback. We use multiple top-tier CDNs (CloudFront, Akamai, Google CDN, Verizon) to ensure smooth delivery of content across the globe.
  4. Decryption in Video Player & Watermarking
    There is private communication between our API and the client website. This ensures that it is not possible for hackers to decrypt our streams. The One Time encryption that we use is theoretically and practically hack-proof. The website embedding the video content requests a One-Time Password from the VdoCipher web server using the API. This OTP request is made only after the user is authenticated. The VdoCipher API returns the OTP, which is used to render the embed code. This embed code is valid for a single playback session only. Along with the key, a usage policy is specified, ensuring that only a logged-in and authenticated user is allowed to play back the encrypted video. The video would simply fail to play if an external plugin or downloader is used to try to access the video file. We make timely modifications to our licensing and authentication mechanism to keep security up to date.
  5. Watermarking
    Video licensing and playback are combined to generate customisable viewer-specific watermarks. The watermark can show the IP address, email ID, and user ID in customisable colour and transparency to identify a playback session by the viewer.
  6. Result – Progressive High Secure Streaming
    Through this 6-step video hosting, encryption, and streaming process, VdoCipher, as a video hosting software, is able to provide progressive, high-security video streaming with forward buffering. This is unlike RTMP, which does not maintain any buffer and can be quite erratic as a result.

You can find out more about DRM Solution here.

Demo Free Trial for HLS DRM Streaming

You can signup for a free full version trial at VdoCipher.

Online businesses also often require features over and beyond video security. VdoCipher fulfills all major requirements for enterprise video hosting. The complete set of features that VdoCipher offers for enterprise video hosting may be found here.

Also, do read our blog on react native video playback.

FAQs

Is HLS DRM protected?

HLS is not protected with DRM by default. Traditionally, HLS streams could only be protected using Apple FairPlay DRM but with new updates, HLS can be protected by Google Widevine DRM in addition to Apple FairPlay DRM.

Is HLS unicast or multicast?

Traditionally, HLS was designed by Apple for QuickTime, Safari, and iOS devices. It did not originally support multicast, but with new updates HLS has emerged as the de facto format for live streaming and video on demand (VOD).

Is HLS better than RTMP?

It depends on the use case, but HLS has notable advantages, which include embedded closed captions, good advertising standards support, synchronized playback of multiple streams, and DRM support. RTMP has the advantages of low latency, flexibility, efficient bandwidth usage, and dynamic content delivery.

The post HLS DRM, HLS Streaming & HLS Encryption for Content Security appeared first on VdoCipher Blog.

]]>
https://www.vdocipher.com/blog/2017/08/hls-streaming-hls-encryption-secure-hls-drm/feed/ 0
Dynamic Watermark Demo: Add User Identifier Text to Videos- User ID, Email ID, Phone No. https://www.vdocipher.com/blog/2014/12/add-text-to-videos-with-watermark/ https://www.vdocipher.com/blog/2014/12/add-text-to-videos-with-watermark/#comments Mon, 08 Jan 2024 01:00:29 +0000 http://www.vdocipher.com/blog/?p=205 Dynamic watermarking means showing user-identifiable data over a video in a moving and non-intrusive manner to ensure the highest protection from screen capture and optimize the viewing experience. Videos hosted through VdoCipher cannot be illegally downloaded through any tools/extensions/downloaders. Screen capture block with 100% surety is possible only in mobile apps and Safari browsers. For […]

The post Dynamic Watermark Demo: Add User Identifier Text to Videos- User ID, Email ID, Phone No. appeared first on VdoCipher Blog.

]]>
Dynamic watermarking means showing user-identifiable data over a video in a moving and non-intrusive manner to ensure the highest protection from screen capture and optimize the viewing experience. Videos hosted through VdoCipher cannot be illegally downloaded through any tools/extensions/downloaders. Screen capture block with 100% surety is possible only in mobile apps and Safari browsers. For Chrome, Firefox, and other browsers, there does however remain the risk of piracy from screen capture. User-based information shown as moving dynamic watermark effectively discourages users from pirating video content using screen capture and goes a long way towards helping users protect their premium content.

The sample video below contains a dynamic watermark displaying the User name, User IP, and User email. The below video is displayed using our WordPress plugin and the same can be configured using APIs or Moodle plugin as well.

The dynamic watermark can be customized for movement, color, size, transparency and frequency. You can try the watermark feature on your website by signing up for a Free 30 Day Trial on our home page.

Dynamic Watermark Demo

Features of Dynamic Watermark by VdoCipher

  1. Add user details like user id, email id, phone number, ip address as an overlay over your videos
  2. Add time stamp, and fixed text (e.g company name)
  3. Customise the size, color, transparency, and frequency of the moving watermark. You can make it very light and change the frequency so that it is not always visible, to ensure an optimum viewing experience. You can tune the frequency in such a manner that removing the watermark is difficult while the user experience is maintained. To show a watermark at a particular position for 5 seconds and then hide it for 20 seconds, use the parameters 'interval':5000 and 'skip':20000 (1 second = 1000 milliseconds). Other parameters are explained in the tutorial steps below.
  4. If you are using a static/fixed text watermark, it must be placed at the top left of the player; it cannot reside on other parts.
  5. Image watermark is currently not possible with VdoCipher, but you can use your company/brand name as a watermark.
  6. Quick 5-minute integration using the WordPress plugin, Moodle plugin, or API. Iframe integration can show the IP address and fixed text as a watermark, but it cannot show user ID, email ID, etc., since it is not a backend integration.

How to Add Dynamic Watermark to your VdoCipher Videos

To generate a watermark or to add text to videos you essentially need a JSON string describing how and what you will overlay on your protected videos. In this blog, we will be detailing how to integrate dynamic or static watermarks to add text to videos.

Step 1 is to create the watermark code.
Step 2 is to add the watermark to the video. This is done by adding the watermark code to the WordPress plugin settings (for WordPress users), passing it as part of the OTP API call (for VdoCipher API users), or adding it to the Moodle plugin settings.

Step 1: Create a Watermark Code

We are assuming that you have uploaded your video to your VdoCipher account. You would need to pass a JSON string as annotation code. The JSON string would contain all the information about the watermark. A JSON string is a universal form of representing structured data in a way that machines can understand.

Here is a sample JSON string that adds a moving (dynamic) watermark and a static watermark.

[
{'type':'rtext', 'text':'moving text', 'alpha':'0.8', 'color':'0xFF0000', 'size':'15', 'interval':'5000', 'skip':'20000'},
{'type':'text', 'text':'static text', 'alpha':'0.5', 'x':'10', 'y':'100', 'color':'0xFF0000', 'size':'15'}
]

Technically, this is an array of JSON objects, where each object describes a single annotation item.

Each of these items will be described by its parameters. Every item requires a type parameter that defines the type of watermark. The type of watermark can be either a moving text or a static text. The rest of the parameters depend on the type.

Following is a short description of how each parameter affects the display of text.

Moving text

The following code will display a dynamic watermark showing the name, IP, and email address in a single line. The text color will be red (#FF0000), the opacity 0.8, and the font size 15. The watermark is configured to stay in one position for 5 seconds (5000 ms), then hide for 20 seconds (20000 ms), and then appear again at a new position for 5 seconds.

[{
'type':'rtext',
'text':'{name}, {ip}, {email}',
'alpha':'0.8',
'color':'0xFF0000',
'size':'15',
'interval':'5000',
'skip':'20000'
}]
Type of text – Moving watermark

Set type parameter as rtext for Dynamic watermark

'type':'rtext',
Set the text to be shown
'text' : 'Enter whatever text you like to be displayed',

You can add user identifiable information, such as user name, user email and user IP.

  • ‘text’: ‘{name}’,
  • ‘text’: ‘{email}’,
  • ‘text’: ‘{ip}’,
'text':'Name: {name}, email: {email}, IP: {ip}'

To display the name, email and IP separately, and not in a single line, you can simply create 3 watermark objects, as follows:

[{'type':'rtext','text':'{name}','alpha':'0.8', 'color':'0xFF0000', 'size':'15', 'interval':'5000', 'skip':'2000'},
{'type':'rtext','text':'{ip}','alpha':'0.8', 'color':'0xFF0000', 'size':'15', 'interval':'5000', 'skip':'2000'},
{'type':'rtext','text':'{email}','alpha':'0.8', 'color':'0xFF0000', 'size':'15', 'interval':'5000', 'skip':'2000'}
]
Specify text opacity

This is the opacity of the text. For full opacity keep alpha value 1.

'alpha':'0.8',
Specify text color

This is the hex value of the watermark text color. You can pick your choice of color and its corresponding hex value from the following page on W3schools.

'color':'0xFF0000',
Specify the font size

This is the font size

'size':'15',
Specify the interval over which watermark changes position

The value is the interval in milliseconds when the text changes position

'interval':'5000',
Skip feature for watermark

It is possible to have watermark skip for some time between two overlays. Here is a sample code for it –

'skip':'2000'
Time stamp for watermark. (Only for WordPress)
[[{'type':'text', 'text':'Time: {date.h:i:s A}', 'alpha':'0.30' , 'x':'12', 'y':'130', 'color':'0xFF0000', 'size':'13'}]]
Add Custom Variables as Watermark

The following blog details how you can add text to videos or custom variables as watermark to your videos: Custom Variables as Watermark

Some important things to keep in mind about Watermark
  • Note that both the name and the value of these parameters should be in quotes. This rule applies to both text as well as numbers.
  • Each parameter is to be separated by a comma. There should not be a comma after the last parameter for the dynamic watermark video settings.

Static text

[{
'type' : 'text',   //This defines the type of annotation item to static watermark
'text' : 'the text you like to be displayed',
'x' : '10',  //the distance from the left border of video.
'y': '50',  //the distance from the top border of video.
'alpha': '0.8', //the opacity of the rendered text, 0 is invisible, 1 is full opaque
'color':'0xFF0000',    //the color of the text specified as hexadecimal or uint
'size':'15' //Height of the text, in pixels.
}]

Step 2: Add Watermark Code to Video Request using API or plugin

If you are using our WordPress or Moodle plugin you can simply add the watermark JSON in the plugin settings page. If you are integrating VdoCipher to your custom-built site, you would need to pass the JSON object as part of the OTP request.

The HTTP POST data containing watermark JSON object has to be sent as Content-Type: application/json. The JSON Object is to be sent as value to the key annotate. The header for the OTP request should include the Authorization using API Secret Key. A sample OTP request including watermark information is as follows.

curl -X POST \
 https://dev.vdocipher.com/api/videos/1234567890/otp \
 -H 'Accept: application/json' \
 -H 'Authorization: Apisecret a1b2c3d4e5' \
 -H 'Content-Type: application/json' \
 -d '{
 "annotate":"[{'\''type'\'':'\''rtext'\'', '\''text'\'':'\'' {name}'\'', '\''alpha'\'':'\''0.60'\'', '\''color'\'':'\''0xFF0000'\'','\''size'\'':'\''15'\'','\''interval'\'':'\''5000'\''}]"
}'

The sample videoID is 1234567890 and the API Secret Key is a1b2c3d4e5. This sample request only passes the annotation code as a parameter.
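For reference, the same request body and headers can be assembled programmatically before being sent with any HTTP client (sending the request itself is omitted here; the video ID, secret, and watermark values are the same sample placeholders as above):

```python
import json

video_id = "1234567890"    # sample videoID from above
api_secret = "a1b2c3d4e5"  # sample API Secret Key from above

# The annotate value is the watermark JSON, passed as a string
annotate = ("[{'type':'rtext', 'text':' {name}', 'alpha':'0.60', "
            "'color':'0xFF0000','size':'15','interval':'5000'}]")

body = json.dumps({"annotate": annotate})
headers = {
    "Accept": "application/json",
    "Authorization": f"Apisecret {api_secret}",
    "Content-Type": "application/json",
}
url = f"https://dev.vdocipher.com/api/videos/{video_id}/otp"
print(url)
```

POSTing `body` with these `headers` to `url` reproduces the curl call shown above.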

This blog: Protect Videos on WordPress provides more details on securing videos using WordPress.

Still having problems adding text to videos or with the dynamic watermark code? Send us the code you are using and the output you wish to see at support@vdocipher.com


The post Dynamic Watermark Demo: Add User Identifier Text to Videos- User ID, Email ID, Phone No. appeared first on VdoCipher Blog.

]]>
https://www.vdocipher.com/blog/2014/12/add-text-to-videos-with-watermark/feed/ 23
Media3 ExoPlayer Tutorial: How to Stream Videos Securely on Android? https://www.vdocipher.com/blog/exoplayer/ Tue, 02 Jan 2024 07:48:12 +0000 https://www.vdocipher.com/blog/?p=12004 Streaming videos securely on Android can be a bit challenging, but it can be easily done with Exoplayer! When it comes to streaming videos on Android, Exoplayer can be your go-to media player. It is even used by Google apps such as YouTube and Google TV. Exoplayer allows a lot of customization which enables its […]

The post Media3 ExoPlayer Tutorial: How to Stream Videos Securely on Android? appeared first on VdoCipher Blog.

]]>
Streaming videos securely on Android can be a bit challenging, but it can be easily done with Exoplayer!

When it comes to streaming videos on Android, ExoPlayer can be your go-to media player. It is even used by Google apps such as YouTube and Google TV. ExoPlayer allows a lot of customization, which enables its adoption for various use cases. Its media format support is also very wide, including adaptive streaming formats such as HLS, DASH, and Smooth Streaming. With its support for Widevine, you can ensure that your content remains safe.

What is an ExoPlayer?

ExoPlayer is an open-source media player for Android maintained by Google. It is not part of the Android framework and is distributed separately from the Android SDK. With ExoPlayer, you can easily take advantage of new features as they become available by updating your app.

ExoPlayer is the best alternative to Android's built-in MediaPlayer API, which is used to control the playback of audio/video files and streams. It supports features such as Dynamic Adaptive Streaming over HTTP (DASH), HTTP Live Streaming (HLS), Smooth Streaming, and Common Encryption. It can play audio and video either streamed online directly from a server or offline (locally) after downloading. It can be easily customized and extended: features like captions, speed control, forward, and rewind can be added to the player. ExoPlayer also provides encryption of media playback (both online and offline) for secure streaming. It does not work on devices below API level 16.

Let’s see what Exoplayer has to offer and why or when we should use it over the built-in MediaPlayer API.

What are the Advantages of Using Exoplayer?

ExoPlayer has a number of advantages over Android’s built-in MediaPlayer:

  • There are fewer device-specific issues and less variation in behavior across different Android versions and devices with ExoPlayer.
  • You can update the player along with your application. Since ExoPlayer is a library included in your application, you can choose which version to use and easily update to a newer version.
  • You can customize and extend it to meet your needs. A lot of ExoPlayer components can be replaced with custom implementations since it was designed specifically with this in mind.
  • ExoPlayer has in-built support for playlists.
  • The Exoplayer supports a variety of formats in addition to DASH and SmoothStreaming. Additionally, it supports advanced HLS features such as handling #EXT-X-DISCONTINUITY tags and the ability to seamlessly merge, concatenate, and loop media streams.
  • On Android 4.4 (API level 19) and higher, it supports Widevine common encryption, although actual Widevine support varies from device to device and is usually only reliable starting from Android 5. On some older Android 5 and 6 devices, Widevine support can also be revoked by security updates.
  • It is possible to integrate with a number of additional libraries quickly by using official extensions. For example, by using the Interactive Media Ads SDK, you can easily monetize your content with the IMA extension.

Explore More ✅

Stream Your Content Securely On Android With VdoCipher

VdoCipher helps provide end-to-end solutions for video, right from hosting, encoding, and encryption to the player. On top of it, you get APIs to manage videos, players, and more.

How To Implement Exoplayer in Android with examples?

We will create a simple exoplayer application to play a video using MediaItem.

The steps to implement Exoplayer are as follows:

  1. Add ExoPlayer dependencies in your app-level build.gradle:
    implementation 'com.google.android.exoplayer:exoplayer-core:2.18.0'
    implementation 'com.google.android.exoplayer:exoplayer-dash:2.18.0'
    implementation 'com.google.android.exoplayer:exoplayer-hls:2.18.0'
    implementation 'com.google.android.exoplayer:exoplayer-ui:2.18.0'
  2. Add SimpleExoPlayerView to the layout file. A SimpleExoPlayerView can be included in the layout for an Activity belonging to a video application as follows:
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

<com.google.android.exoplayer2.ui.SimpleExoPlayerView 
    android:id="@+id/exoPlayerView"
    android:layout_width="match_parent"
    android:layout_height="match_parent"/>

</FrameLayout>
  3. Create and load the player. In your Activity, create an instance of ExoPlayer and attach it to the SimpleExoPlayerView. A MediaItem represents media that can be added to the player at preparation time. After the player is prepared, call setPlayWhenReady(true) to start playback as soon as the media is ready.
SimpleExoPlayerView playerView = findViewById(R.id.exoPlayerView);
ExoPlayer player = new ExoPlayer.Builder(context).build();
// Bind the player to the view.
playerView.setPlayer(player);
// Create and add media item
MediaItem mediaItem = MediaItem.fromUri(video_url);
player.addMediaItem(mediaItem);
// Prepare exoplayer
player.prepare();
// Play media when it is ready
player.setPlayWhenReady(true);
  4. Handle the player controls. Methods on the player can be called to control playback. Some of these methods are:
  • play and pause: Used to play and pause the video
  • seekTo: Seeks to a position specified in milliseconds in the current MediaItem
  • playWhenReady: Whether playback should proceed when ready
  • hasNextMediaItem, hasPreviousMediaItem, seekToPreviousMediaItem, seekToNextMediaItem: Allows navigating through the playlist
  • setPlaybackParameters: Attempts to set the playback parameters. Playback parameter changes may cause the player to buffer. Player.Listener.onPlaybackParametersChanged(PlaybackParameters) is called whenever the currently active playback parameters change.
  5. Release the player. Use the ExoPlayer.release method to release the player when it is no longer required:
if (exoPlayer != null) {
    exoPlayer.release();
}
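In an Activity, the release is typically tied to the lifecycle. Below is a minimal sketch (the `player` field name is a placeholder) that releases the player in onStop():

```java
@Override
protected void onStop() {
    super.onStop();
    if (player != null) {
        player.release(); // frees codecs, buffers, and network resources
        player = null;    // avoid accidental use after release
    }
}
```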

How to Customize Exoplayer?

ExoPlayer comes with many customizations available such as UI adjusting to match your app, deciding caching mechanism for data loaded from the network, customizing server interaction to intercept HTTP requests and responses, customizing error handling policy, enabling asynchronous buffer queueing, and many more. In this section, we will look at how we can customize UI with ExoPlayer.

Customizing ExoPlayer’s UI components

ExoPlayer V2 includes several out-of-the-box UI components for customization, most notably:

  • SimpleExoPlayerView is a high level view for SimpleExoPlayer media playbacks. It displays video, subtitles and album art during playback, and displays playback controls using a PlaybackControlView.
  • PlaybackControlView is a view for controlling ExoPlayer instances. It displays standard playback controls including a play/pause button, fast-forward and rewind buttons, and a seek bar.

Use of these views is optional. You are free to implement your own UI components at the cost of some extra work.

A SimpleExoPlayerView can be customized by setting attributes (or calling corresponding methods), overriding the view’s layout file or by specifying a custom view layout file, as mentioned below.

Setting attributes for SimpleExoPlayerView

A SimpleExoPlayerView can be included in the layout for an Activity belonging to a video application as follows

<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

<com.google.android.exoplayer2.ui.SimpleExoPlayerView 
    android:id="@+id/player"
    android:layout_width="match_parent"
    android:layout_height="match_parent"/>

</FrameLayout>

You can use the following attributes on SimpleExoPlayerView to customize it when used in a layout XML file:

  • use_artwork – Whether artwork is used if available in audio streams.
  • default_artwork – Default artwork to use if no artwork available in audio streams.
  • use_controller – Whether the playback controls can be shown.
  • hide_on_touch – Whether the playback controls are hidden by touch events.
  • auto_show – Whether the playback controls are automatically shown when playback starts, pauses, ends, or fails. If set to false, the playback controls can also be manually operated with showController() and hideController().
  • resize_mode – Controls how video and album art is resized within the view. Valid values are fit, fixed_width, fixed_height and fill.
  • surface_type – The type of surface view used for video playbacks. Valid values are surface_view, texture_view and none. Using none is recommended for audio only applications, since creating the surface can be expensive. Using surface_view is recommended for video applications.
  • shutter_background_color – The background color of the exo_shutter view.
  • player_layout_id – Specifies the id of the layout to be inflated.
  • controller_layout_id – Specifies the id of the layout resource to be inflated by the child PlaybackControlView.
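For illustration, several of these attributes can be combined in a layout as follows (the values chosen here are illustrative; the `app` namespace must be declared):

```xml
<com.google.android.exoplayer2.ui.SimpleExoPlayerView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:id="@+id/exoPlayerView"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:use_controller="true"
    app:hide_on_touch="true"
    app:auto_show="true"
    app:resize_mode="fit"
    app:surface_type="surface_view"/>
```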

Overriding the view’s layout file

To customize the layout of SimpleExoPlayerView throughout your app, or just for certain configurations, you can define exo_simple_player_view.xml layout files in your application res/layout* directories. These layouts will override the one provided by the ExoPlayer library, and will be inflated for use by SimpleExoPlayerView. The view identifies and binds its children by looking for the following ids:

  • exo_content_frame – A frame whose aspect ratio is resized based on the video or album art of the media being played, and the configured resize_mode. The video surface view is inflated into this frame as its first child. Type: AspectRatioFrameLayout
  • exo_shutter – A view that’s made visible when video should be hidden. This view is typically an opaque view that covers the video surface view, thereby obscuring it when visible. Type: View
  • exo_subtitles – Displays subtitles. Type: SubtitleView
  • exo_artwork – Displays album art. Type: ImageView
  • exo_controller_placeholder – A placeholder that’s replaced with the inflated PlaybackControlView. Ignored if an exo_controller view exists. Type: View
  • exo_controller – An already inflated PlaybackControlView. Allows use of a custom extension of PlaybackControlView. Note that attributes such as rewind_increment will not be automatically propagated through to this instance. If a view exists with this id, any exo_controller_placeholder view will be ignored. Type: PlaybackControlView
  • exo_overlay – A FrameLayout positioned on top of the player which the app can access via getOverlayFrameLayout(), provided for convenience. Type: FrameLayout

Any child views are optional, but where defined they must be of the expected type.

Specifying a custom layout file

Defining your own exo_simple_player_view.xml is useful for customizing the layout of SimpleExoPlayerView throughout your application. It’s also possible to customize the layout for a single instance in a layout file by setting the player_layout_id attribute on a SimpleExoPlayerView. The specified layout is then inflated instead of exo_simple_player_view.xml, but only for the instance on which the attribute is set.

For more customization options, check the official documentation on customizing ExoPlayer.

Changing Video Quality in ExoPlayer on Android

To change the video quality in ExoPlayer, developers can utilize TrackSelector and DefaultTrackSelector. They can create a DefaultTrackSelector, configure it with desired parameters like bitrate, and then pass it to the ExoPlayer instance during initialization​.
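As a minimal sketch (the bitrate and resolution caps below are illustrative), a DefaultTrackSelector can be constrained and passed to the builder like this:

```java
// Cap automatic track selection at ~720p and 2 Mbps (illustrative limits)
DefaultTrackSelector trackSelector = new DefaultTrackSelector(context);
trackSelector.setParameters(
        trackSelector.buildUponParameters()
                .setMaxVideoSize(1280, 720)
                .setMaxVideoBitrate(2_000_000)
                .build());

ExoPlayer player = new ExoPlayer.Builder(context)
        .setTrackSelector(trackSelector)
        .build();
```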

Changing ExoPlayer Aspect Ratio

Developers can set the aspect ratio of the ExoPlayer by creating a custom AspectRatioFrameLayout and wrapping it around the PlayerView. They can then use the setAspectRatio method on the AspectRatioFrameLayout to change the aspect ratio.

Implementing ExoPlayer Cache

Caching can be implemented in ExoPlayer by using the CacheDataSourceFactory which wraps around another DataSource.Factory instance. A SimpleCache instance can be used to manage the cache, and the LeastRecentlyUsedCacheEvictor can be used to evict old data from the cache to ensure it doesn’t grow too large​.
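A minimal sketch of this setup (in recent ExoPlayer 2.x versions the wrapper class is CacheDataSource.Factory; the 100 MB limit, directory name, and videoUrl are illustrative):

```java
// One application-wide cache instance; creating a second SimpleCache
// for the same directory is an error.
SimpleCache cache = new SimpleCache(
        new File(context.getCacheDir(), "media"),
        new LeastRecentlyUsedCacheEvictor(100 * 1024 * 1024), // evict LRU data beyond ~100 MB
        new StandaloneDatabaseProvider(context));

// Wrap the network data source so downloaded bytes are written to the cache
DataSource.Factory cacheFactory = new CacheDataSource.Factory()
        .setCache(cache)
        .setUpstreamDataSourceFactory(new DefaultHttpDataSource.Factory());

MediaSource mediaSource = new ProgressiveMediaSource.Factory(cacheFactory)
        .createMediaSource(MediaItem.fromUri(videoUrl));
```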

ExoPlayer Play Audio from URL

To play audio from a URL, developers can initialize a DefaultDataSourceFactory and a ProgressiveMediaSource (or other appropriate MediaSource depending on the audio format), and prepare the ExoPlayer instance with the MediaSource. The uri of the audio file needs to be passed to the MediaSource to start streaming and playing the audio.
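For example (audioUrl is a placeholder for your stream location, and player an already built ExoPlayer instance):

```java
DataSource.Factory dataSourceFactory = new DefaultDataSource.Factory(context);
ProgressiveMediaSource audioSource = new ProgressiveMediaSource.Factory(dataSourceFactory)
        .createMediaSource(MediaItem.fromUri(audioUrl)); // e.g. a direct .mp3 link

player.setMediaSource(audioSource);
player.prepare();
player.setPlayWhenReady(true); // start as soon as enough audio is buffered
```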

ExoPlayer Play Local File

Playing a local file can be achieved by creating a MediaItem or a RawResourceDataSource with the URI of the local file, and then preparing the ExoPlayer instance with a MediaSource created with that URI. Developers can use the res/raw folder to store and access local files, or use the assets directory if the file is stored there. They can use methods like RawResourceDataSource.buildRawResourceUri or MediaItem.fromUri to create a URI from the local file path
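Both options can be sketched as follows (R.raw.sample and filePath are placeholder names):

```java
// Option 1: a file bundled in res/raw
Uri rawUri = RawResourceDataSource.buildRawResourceUri(R.raw.sample);
player.setMediaItem(MediaItem.fromUri(rawUri));

// Option 2: an arbitrary file on device storage
player.setMediaItem(MediaItem.fromUri(Uri.fromFile(new File(filePath))));

player.prepare();
player.setPlayWhenReady(true);
```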

How To Play DRM Content On Exoplayer

So far we have gone through the advantages of using ExoPlayer and how to customize it to suit our needs. In this section, we will see how to use ExoPlayer to play DRM-protected content, which is one of its advantages over the built-in MediaPlayer API.

Before we start, let’s understand what digital rights management (DRM) is. DRM is a way to protect copyrights for digital media. It was developed to protect all kinds of digital material prepared for computers and other devices, including movies, TV series, games, music, and software. The restrictions DRM places on protected content prevent unauthorized copying and distribution over the internet.

ExoPlayer uses Android’s MediaDrm API to support DRM-protected playbacks.

Each supported DRM scheme has a minimum required Android version and a set of streaming formats for which it is supported; see the official ExoPlayer documentation for the full compatibility table. In addition, playback can be limited on older devices due to security updates; Android 7 and above is more reliable for Widevine playback.

While building a media source for ExoPlayer, you should specify the UUID of the DRM system and the license server URI. Using these properties, we build an instance of DefaultDrmSessionManager, which handles the DRM-related key and provisioning requests needed to enable media playback.

First, we need to create an instance of DrmSessionManager:

private DefaultDrmSessionManager buildDrmSessionManager(UUID uuid, String licenseUrl, String userAgent) {  
    HttpDataSource.Factory licenseDataSourceFactory = new DefaultHttpDataSource.Factory().setUserAgent(userAgent);  
    HttpMediaDrmCallback drmCallback = new HttpMediaDrmCallback(licenseUrl, true,  
            licenseDataSourceFactory);  
    return new DefaultDrmSessionManager.Builder()  
            .setUuidAndExoMediaDrmProvider(uuid, FrameworkMediaDrm.DEFAULT_PROVIDER)  
            .build(drmCallback);  
}

Now we have to build a media source with the license URL.

DRM License Url : https://proxy.uat.widevine.com/proxy?provider=widevine_test

private DashMediaSource buildDashMediaSource(Uri uri) {  
    String drmLicenseUrl = "https://proxy.uat.widevine.com/proxy?provider=widevine_test";  
    String userAgent = Util.getUserAgent(context, context.getApplicationContext().getPackageName());  
    UUID drmSchemeUuid = Util.getDrmUuid(C.WIDEVINE_UUID.toString());  

    DrmSessionManager drmSessionManager = buildDrmSessionManager(drmSchemeUuid, drmLicenseUrl, userAgent);  

    DataSource.Factory dataSourceFactory = new DefaultDataSource.Factory(context, new DefaultHttpDataSource.Factory().setUserAgent(userAgent));  
    return new DashMediaSource.Factory(dataSourceFactory)  
            .setDrmSessionManagerProvider(unusedMediaItem -> drmSessionManager)  
            .createMediaSource(  
                    new MediaItem.Builder()  
                            .setUri(uri)  
                            .setMimeType(MimeTypes.APPLICATION_MPD)  
                            .build()  
            );  
}

Now add the URL and it’s ready to be played.

DRM Url: https://storage.googleapis.com/wvmedia/cenc/h264/tears/tears.mpd

private void initializePlayer() {  
    String url = "https://storage.googleapis.com/wvmedia/cenc/h264/tears/tears.mpd";  

    MediaSource mediaSource = buildDashMediaSource(Uri.parse(url));  

    ExoTrackSelection.Factory videoTrackSelectionFactory = new AdaptiveTrackSelection.Factory();  
    TrackSelector trackSelector = new DefaultTrackSelector(context, videoTrackSelectionFactory);  
    trackSelector.setParameters(trackSelector.getParameters().buildUpon()  
            .setPreferredTextLanguage("en")  
            .build());  

    DefaultRenderersFactory renderersFactory = new DefaultRenderersFactory(context)  
            .forceEnableMediaCodecAsynchronousQueueing()  
            .setExtensionRendererMode(DefaultRenderersFactory.EXTENSION_RENDERER_MODE_OFF);  

    int maxBufferMs = DefaultLoadControl.DEFAULT_MAX_BUFFER_MS;  

    DefaultLoadControl loadControl = new DefaultLoadControl.Builder()  
            .setBufferDurationsMs(DefaultLoadControl.DEFAULT_MIN_BUFFER_MS,  
                    maxBufferMs,  
                    DefaultLoadControl.DEFAULT_BUFFER_FOR_PLAYBACK_MS,  
                    DefaultLoadControl.DEFAULT_BUFFER_FOR_PLAYBACK_AFTER_REBUFFER_MS)  
            .build();  

    ExoPlayer exoPlayer = new ExoPlayer.Builder(context, renderersFactory)  
            .setTrackSelector(trackSelector)  
            .setLoadControl(loadControl)  
            .build();  

    exoPlayer.setMediaSource(mediaSource);  
    exoPlayer.prepare();  
}

In a later section, we will see how we at VdoCipher use ExoPlayer to stream DRM-protected videos.

Adaptive Bitrate Streaming in Exoplayer

Exoplayer can also be used for adaptive bitrate streaming to set video quality automatically based on available network bandwidth. Adaptive bitrate streaming (also known as adaptive streaming) is a technology designed to deliver video in the most efficient way possible and in the highest usable quality for each specific user and device.

For slow connections, the video is played in low quality, and for fast connections, in the best quality with less buffering. These qualities (bitrates and resolutions) are known as tracks. The same media content is split into multiple tracks, one for each quality level based on bitrate and resolution. Each track is split into chunks of a given duration, typically between 2 and 10 seconds. This makes it easier to switch between tracks as network speed changes.

Implementing Adaptive Track Selection

To implement Adaptive streaming, add TrackSelector while initializing the player. The TrackSelector is used to switch between multiple tracks.

ExoTrackSelection.Factory videoTrackSelectionFactory = new AdaptiveTrackSelection.Factory();

TrackSelector trackSelector = new DefaultTrackSelector(context, videoTrackSelectionFactory);  
trackSelector.setParameters(trackSelector.getParameters().buildUpon()
        .setMaxVideoSizeSd()  
        .setPreferredTextLanguage("en")  
        .build());  

ExoPlayer exoPlayer = new ExoPlayer.Builder(context)  
        .setTrackSelector(trackSelector)  
        .build();  

Create an adaptive track selection factory with default parameters and pass it to DefaultTrackSelector, which is responsible for choosing tracks in the media item. Then pass the trackSelector to the ExoPlayer builder.

Build an Adaptive Media Source

DASH, HLS, and SmoothStreaming are all media formats ExoPlayer supports that are capable of adaptive streaming, but we’ll focus on DASH for now and use the DashMediaSource. To stream DASH content, you need to create a MediaItem.

Uri manifestUri = Uri.parse(dashUrl); 
DataSource.Factory dataSourceFactory = new DefaultDataSource.Factory(context, new DefaultHttpDataSource.Factory().setUserAgent(userAgent));
mediaSource = new DashMediaSource.Factory(dataSourceFactory)
                    .createMediaSource(
                            new MediaItem.Builder()
                                .setUri(manifestUri)
                                .setMimeType(MimeTypes.APPLICATION_MPD)
                                .build()
                    );

How VdoCipher Streams Video on Android Using ExoPlayer

The components of our video streaming can be broken down into four main parts:

  1. Client attempting to play content
  2. VdoCipher license server that generates decryption keys based on client requests
  3. Provisioning server if unique credentials are required for devices
  4. Content server that serves encrypted content

On the client side, we try to play protected content from the content server via a DashMediaSource with a provided DrmSessionManager. This DrmSessionManager contains an implementation of MediaDrmCallback wrapping an HttpMediaDrmCallback, extending its functionality by wrapping/unwrapping license requests and responses and throwing custom exceptions that help identify the cause of failures. If the device needs provisioning, a request to the provisioning server is made via the callback. After the MediaDrm client receives the license, it is passed to ExoPlayer via the media source and playback begins. This procedure is repeated on every playback request for non-persistent licenses. Our application saves persistent licenses and reuses them until they expire. In addition, persistent licenses are fetched with OfflineLicenseHelper before secure video playback starts, allowing video initialization to proceed regardless of whether the license fetch succeeded. Now let’s see how these classes are utilized.

How Can ExoPlayer Play Video From a URL in an Android Video Player?

After creating the ExoPlayer instance, you can pass the video_url as: MediaItem mediaItem = MediaItem.fromUri(video_url);

You can also stop playback on cloned apps, emulators, and rooted devices in Android with our Play Integrity integration in the Android SDK. Check out our Play Integrity API documentation to learn more. Learn more about our Android video SDK to stream your videos on Android with VdoCipher.

The post Media3 ExoPlayer Tutorial: How to Stream Videos Securely on Android? appeared first on VdoCipher Blog.

CENC Common Encryption Methods and Algorithms Guide https://www.vdocipher.com/blog/cenc-common-encryption-methods-algorithms/ Mon, 11 Sep 2023 17:29:14 +0000 https://www.vdocipher.com/blog/?p=13793 A fundamental concept of modern security, encryption, serves as the cornerstone to safeguard data storage, digital communications, streaming, online transactions and much more. Derived from the roots of cryptography, encryption transforms a plain, readable information into an unreadable or unintelligible form using mathematical algorithms and secret keys. The same data is decrypted using a decryption […]

The post CENC Common Encryption Methods and Algorithms Guide appeared first on VdoCipher Blog.

A fundamental concept of modern security, encryption serves as the cornerstone for safeguarding data storage, digital communications, streaming, online transactions, and much more. Rooted in cryptography, encryption transforms plain, readable information into an unreadable or unintelligible form using mathematical algorithms and secret keys. The data is decrypted using a decryption key at the client/user’s end. The overall purpose of encryption is to keep sensitive data confidential and secure, restricting unauthorized access, sharing, or copying. To simplify the content protection process for video creators and distributors, CENC, or the Common Encryption Scheme, ensures interoperability between various DRM systems. CENC common encryption methods and algorithms also play an important role in secure and seamless digital media delivery to a range of devices and systems.

Furthermore, encryption safeguards sensitive information stored on servers, smartphones, browsers, and personal computers from hacking and piracy. As the presence of the online world is rapidly progressing, the importance of encryption in our professional and personal lives has grown more critical, playing a key role in fighting piracy and securing our digital ecosystem.

Explore More ✅

VdoCipher ensures Secure Video Hosting for OTT Platforms

VdoCipher helps over 3,000 customers from more than 120 countries host their OTT videos securely, helping them boost their video revenues.

How Does Encryption Work?

Here’s the basic outline of the encryption technology in simple words:

Key Generation – Before encryption or decryption happens, a secret key is generated. This is a single key for symmetric encryption or a pair of keys for asymmetric encryption. The encryption strength depends on the algorithm used and the key size.

Encryption Algorithm – Using an encryption algorithm, the plaintext data is processed and converted into ciphertext. It applies a combination of mathematical operations and transformations. There are several encryption algorithms including DES (Data Encryption Standard) and AES (Advanced Encryption Standard).

Data Encryption – The sender inputs the secret key and plaintext data into the encryption algorithm which further transforms the plaintext into ciphertext. It is then ready to be transmitted without any unauthorized user having access to it.

Data Transmission – Once the encrypted data is transmitted, it remains unreadable to anyone without the decryption key.

Decryption – Upon receipt, the authorized user applies the decryption key to turn the ciphertext back into plaintext using the corresponding algorithm.

Data Integrity Verification – During storage or transmission, to ensure the data is not hampered, additional mechanisms are often employed. For example, message authentication codes, hashing, or digital signatures.
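The steps above can be sketched end to end in plain Java with the JDK’s javax.crypto API (this is a generic illustration, not a streaming-specific implementation; AES-GCM is used here because its authentication tag also covers the integrity-verification step):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class EncryptionWorkflow {

    // Encrypts a message and decrypts it again with the same secret key,
    // mirroring the key-generation -> encryption -> transmission -> decryption flow.
    public static String roundTrip(String plaintext) throws Exception {
        // 1. Key generation
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128); // 128-bit key
        SecretKey key = keyGen.generateKey();

        // 2-3. Encryption: GCM also produces an authentication tag,
        //      which covers the data-integrity-verification step
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv); // fresh IV per message
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

        // 4-5. Transmission would happen here; the receiver decrypts
        //      with the same key (and the IV sent alongside the ciphertext)
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("sensitive data"));
    }
}
```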

Types of Encryption

Symmetrical Encryption

In symmetrical encryption, a single shared key is used to both encrypt and decrypt the content files. Both the sender and receiver must have the same key to securely exchange the information. It is commonly used for bulk data encryption and secure communications in a closed ecosystem.

Asymmetrical Encryption

Asymmetrical encryption is also known as public-key cryptography. It uses a pair of related keys, one for encryption (the public key) and the other for decryption (the private key). Data encrypted with the public key can only be decrypted with the corresponding private key.

Furthermore, the key flow can also be reversed: information can be encrypted (signed) with the private key, and anyone holding the public key can decrypt (verify) it. This is the mechanism used for digital signatures, for example.
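A minimal sketch of the public-key direction using the JDK’s built-in RSA support (the key size and padding choice here are illustrative):

```java
import javax.crypto.Cipher;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class AsymmetricDemo {

    // Encrypts with the public key; only the private-key holder can decrypt.
    public static String roundTrip(String plaintext) throws Exception {
        // Generate a public/private key pair
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        // Anyone can encrypt with the public key...
        Cipher cipher = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        cipher.init(Cipher.ENCRYPT_MODE, pair.getPublic());
        byte[] ciphertext = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

        // ...but only the private-key holder can decrypt
        cipher.init(Cipher.DECRYPT_MODE, pair.getPrivate());
        return new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello"));
    }
}
```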

Symmetric vs Asymmetric Encryption | Tabular Comparison

| Attribute | Symmetric Encryption | Asymmetric Encryption |
|---|---|---|
| Key type | Single shared secret key | Two related keys: public key and private key |
| Key usage | Same key for encryption and decryption | Public key for encryption, private key for decryption |
| Speed | Faster, as it uses simpler operations | Slower, due to more complex calculations |
| Key distribution | More challenging, as the secret key must be shared securely | Easier, as only the public key needs to be shared, and it can be shared freely |
| Use case | Bulk data encryption, secure communication within closed systems | Secure communication over open networks, digital signatures, key exchange |
| Examples of algorithms | AES (Advanced Encryption Standard), DES (Data Encryption Standard), Blowfish | RSA, ElGamal, ECC (Elliptic Curve Cryptography) |
| Key management complexity | Higher, as the number of pairwise keys grows quadratically with the number of users | Lower, as each user only needs one public-private key pair |
| Security | Depends on the key size and the algorithm used | Generally considered more secure due to separate keys for encryption and decryption |

Importance of Encryption for the Video Industry

For the video industry, video encryption is very important. Here are some reasons why:

  • To protect Intellectual Property – video content such as movies, and shows take a lot of time and effort. Encryption provides means to protect their intellectual property from unauthorized access, copying, and sharing.
  • To secure content distribution – to enforce licensing agreements and restrict access to their premium content, content creators and distributors use encryption. Platforms like Netflix use DRM (Digital Rights Management) to encrypt their video content.
  • Maintain user privacy – Today video platforms store a lot of information and data such as preferences, payment options, and viewing habits. To maintain trust and user privacy, the data is secured via encryption.
  • To meet compliance requirements – Many regions and industries require the mandatory protection of sensitive data. In the field of videos, encryption technologies like video DRM help in complying with these regulations.

What is CENC Common Encryption Methods & Algorithms?

The most popular video streaming protocols are MPEG-DASH and HLS. HLS traditionally uses the MPEG-2 TS (.ts) container format for its videos, while MPEG-DASH uses the MP4 format. The problem: if a content provider supports both protocols, they need to store every video file in both formats, which wastes storage space.

To address this, the CMAF (Common Media Application Format) specification was developed. With CMAF, videos are stored in the fragmented MP4 container format (fMP4). Instead of two separate copies, you store the video once in fMP4 and use a common file set for both protocols.

What if different DRM technologies use different encryption standards?

To solve this, MPEG developed the Common Encryption Scheme (CENC), which standardizes the encryption of media content and is specified in ISO/IEC 23001-7.

Explore More ✅

Protect Your VOD & OTT Platform With VdoCipher Multi-DRM Support

VdoCipher helps several VOD platforms host their videos securely, helping them boost their video revenues.

CENC defines a common set of encryption and key mapping methods that are compatible with different DRM systems. CENC is based on the MPEG Common Encryption (MPEG-CENC) standard which is supported by various platforms, including web browsers, mobile devices, and smart TVs. Also, CENC is used in various multimedia content distribution platforms such as Netflix, Amazon Prime, Hulu, and YouTube.

This allows content providers to encrypt their media once and use multiple DRM systems for content protection simultaneously. This is particularly useful for adaptive streaming formats like MPEG-DASH and HLS, where the media is delivered in smaller segments to ensure smooth playback across various devices and network conditions.

| Streaming Format | Container Format | Compatible with CENC |
|---|---|---|
| MPEG-DASH | fMP4 | Yes |
| HLS | fMP4 | Yes |
| HLS | MPEG-2 TS | No |

CENC Encryption Algorithms

CENC does not mandate a specific encryption algorithm; instead, it supports multiple algorithms. The common encryption modes CENC supports are:

AES-CTR (Advanced Encryption Standard – Counter Mode)
AES-CBC (Advanced Encryption Standard – Cipher Block Chaining)

These two modes handle encryption differently and are not compatible with one another. The two major DRM technologies, Google Widevine and Apple FairPlay, have varying levels of support for each mode. The main differences between the two modes lie in how they handle the plaintext blocks and the initialization vector (IV).

CENC streaming format compatibility table

CTR (Counter Mode)

CTR encryption mode

source: wikipedia

  • CTR converts a block cipher into a stream cipher by encrypting successive values of a counter.
  • It generates a unique input for each data block, a combination of a nonce and a counter.
  • The counter is incremented for each subsequent data block to ensure uniqueness.
  • The block cipher encrypts the counter value with the secret key to produce a keystream block.
  • The keystream block is XORed with the plaintext block to generate the ciphertext block.
  • CTR enables parallel encryption and decryption, since each block can be processed independently of the others.
  • It doesn’t require padding of the plaintext data, making it more efficient for media content.
  • With a random IV/nonce, the nonce is combined with the counter through an invertible operation (such as XOR or concatenation) to produce a unique counter block for each encryption.
  • With a non-random nonce (for example a packet counter), the nonce and counter are simply concatenated.
  • Merely adding or XORing the nonce and counter into a single value can break security under chosen-plaintext attacks, since an attacker may be able to cause an IV-counter pair to repeat.
  • If two blocks are encrypted with the same IV-counter pair, the XOR of the two ciphertexts equals the XOR of the two plaintexts, which can be used to decrypt a block.
| Property | CTR (Counter) |
|---|---|
| Encryption parallelizable | Yes |
| Decryption parallelizable | Yes |
| Random read access | Yes |
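The properties above can be demonstrated with the JDK’s AES/CTR/NoPadding transformation (the all-zero key and counter block are for illustration only; in practice both the key and the nonce must be unpredictable):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class CtrDemo {

    // Runs AES in CTR mode; the same operation serves both directions,
    // because CTR just XORs a keystream with the input.
    static byte[] aesCtr(int mode, byte[] key, byte[] counterBlock, byte[] input) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
        cipher.init(mode, new SecretKeySpec(key, "AES"), new IvParameterSpec(counterBlock));
        return cipher.doFinal(input);
    }

    public static String roundTrip(String plaintext) throws Exception {
        byte[] key = new byte[16];          // all-zero demo key; use a random key in practice
        byte[] counterBlock = new byte[16]; // 16-byte nonce+counter block, incremented per block

        byte[] input = plaintext.getBytes(StandardCharsets.UTF_8);
        byte[] ciphertext = aesCtr(Cipher.ENCRYPT_MODE, key, counterBlock, input);

        // No padding: ciphertext is exactly as long as the plaintext
        if (ciphertext.length != input.length) {
            throw new IllegalStateException("CTR should not pad");
        }

        byte[] decrypted = aesCtr(Cipher.DECRYPT_MODE, key, counterBlock, ciphertext);
        return new String(decrypted, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("13 bytes long"));
    }
}
```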

CBC (Cipher Block Chaining)

CBC encryption mode

source: wikipedia

  • CBC encrypts plaintext in fixed-size blocks (e.g., 128 bits for AES).
  • An initialization vector (IV) ensures a unique ciphertext when the same plaintext is encrypted with the same key.
  • The IV is XORed with the first plaintext block, and the result is encrypted with the block cipher and the secret key, producing the first ciphertext block.
  • For each subsequent block, the previous ciphertext block is XORed with the current plaintext block before encryption, creating a chain of dependencies.
  • Because of this chaining, encryption must proceed sequentially; decryption, however, can be parallelized, since all ciphertext blocks are available up front.
  • It requires padding of the plaintext data, which can lead to inefficiencies.
| Property | CBC (Cipher Block Chaining) |
|---|---|
| Encryption parallelizable | No |
| Decryption parallelizable | Yes |
| Random read access | Yes |
  • One major disadvantage of CBC is that it requires sequential encryption, meaning you cannot encrypt multiple blocks simultaneously (no parallelization). This can make the process slower compared to other encryption modes.
  • Another drawback is that the message must have a length that is a multiple of the block cipher’s size. This often requires adding padding (extra bits) to the message to meet the required size.
  • “Ciphertext stealing” is a technique used to address the padding issue mentioned above, allowing encryption without padding.
  • In CBC, a one-bit change in the plaintext or the initialization vector (IV) affects all the following ciphertext blocks. This property cuts both ways: it strengthens diffusion, but it also propagates errors through the rest of the message.
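The chaining, padding, and error-propagation properties can be seen in a short, stdlib-only sketch. It is a toy: a SHA-256 hash stands in for the AES block function (so only encryption is shown — a real block cipher is invertible, which is what makes CBC decryption possible), but the chain structure is the same.

```python
import hashlib

BLOCK = 16

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for the AES block function (illustration only; not invertible).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def pkcs7_pad(data: bytes) -> bytes:
    # CBC requires the message length to be a multiple of the block size.
    n = BLOCK - len(data) % BLOCK
    return data + bytes([n]) * n

def cbc_encrypt(key: bytes, iv: bytes, data: bytes) -> list[bytes]:
    blocks, prev, padded = [], iv, pkcs7_pad(data)
    for i in range(0, len(padded), BLOCK):
        # Chaining: each plaintext block is XORed with the previous ciphertext block.
        mixed = bytes(p ^ c for p, c in zip(padded[i:i + BLOCK], prev))
        prev = toy_block_encrypt(key, mixed)
        blocks.append(prev)
    return blocks

key, iv = b"0123456789abcdef", b"initialization!!"
msg = b"CBC chains blocks, so one changed bit alters all later ciphertext"
c1 = cbc_encrypt(key, iv, msg)
c2 = cbc_encrypt(key, iv, bytes([msg[0] ^ 1]) + msg[1:])  # flip one bit in block 0

# Every ciphertext block from the changed block onward differs.
assert c1[0] != c2[0] and all(a != b for a, b in zip(c1[1:], c2[1:]))
```

The final assertion is exactly the propagation property described above: the dependency chain means blocks cannot be encrypted in parallel.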

CENC Common Encryption Methods & Algorithms Support

Some major browsers, devices, and platforms supporting CENC are:

Streaming Protocols:
MPEG-DASH (Dynamic Adaptive Streaming over HTTP)
HLS (HTTP Live Streaming)

DRM Systems:
Google Widevine
Apple FairPlay Streaming
Microsoft PlayReady

Devices and Platforms:
Many smart TVs
iOS and Android devices
Web browsers such as Google Chrome and Safari

CENC Implementation

Implementing the Common Encryption Scheme (CENC) is straightforward for developers familiar with media streaming, the encryption process, and, most importantly, DRM systems. The effort involved varies with the project’s complexity and the developer’s expertise.

Some factors to consider while implementing CENC include:

DRM Systems – CENC works with multiple DRM systems, such as Google Widevine and Apple FairPlay Streaming. The developer must understand the intricacies of each DRM system being used and ensure proper integration with CENC.

Streaming Protocols – Implementing CENC requires knowledge of the streaming protocols being used, such as MPEG-DASH or HLS streaming. The developer must understand how CENC is integrated with these protocols to provide secure content delivery.

Encryption Algorithms – CENC supports multiple encryption algorithms, like AES-CTR and AES-CBC. The developer needs to be familiar with these encryption algorithms and their proper implementation within the CENC framework.

Key Management – Proper key management is essential for secure content protection. The developer must ensure that encryption keys are securely generated, stored, and distributed and that the right keys are used for the appropriate DRM systems.

Compatibility – Ensuring compatibility across various devices and platforms can be a challenge, as each may have unique requirements and limitations. Developers must thoroughly test their CENC implementation to ensure seamless content delivery and playback.
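For the key-management point above: CENC content keys and their key IDs (KIDs) are 128-bit values, where the KID is a public identifier while the key itself must only ever be released by a DRM license server. A minimal sketch of generating a fresh key/KID pair with Python’s standard library (the function name and dict shape are illustrative, not any specific DRM API):

```python
import secrets
import uuid

def new_content_key() -> dict:
    # CENC content keys and key IDs (KIDs) are 128-bit values.
    # The KID travels in the clear (e.g., in the manifest or 'tenc' box);
    # the key itself must only be handed out by the DRM license server.
    return {
        "kid": str(uuid.uuid4()),      # public identifier for the key
        "key": secrets.token_hex(16),  # 16 random bytes = 128-bit AES key
    }

pair = new_content_key()
assert len(bytes.fromhex(pair["key"])) == 16   # 128-bit key
assert len(uuid.UUID(pair["kid"]).bytes) == 16  # 128-bit KID
```

Using a CSPRNG (`secrets`) rather than `random` matters here: predictable keys would defeat the encryption regardless of which DRM system distributes them.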

VdoCipher Secure Multi-DRM Solution

VdoCipher, a secure video hosting platform offers the highest available video security to protect your premium content from piracy and unauthorized access or sharing. Being a direct partner with Google for Widevine DRM, VdoCipher offers Multi-DRM solutions with Apple FairPlay and Google Widevine DRM support. Videos streamed via VdoCipher cannot be illegally downloaded or hacked using any internet plugin or software.

VdoCipher secure custom video player features

More than 3000 customers across 120+ countries rely on VdoCipher to securely host and stream their premium content. The various features offered by VdoCipher are:

  • Multi-DRM Encryption
  • Dynamic Watermarking
  • Custom Video Player for Android, Desktop, and iOS
  • Secure offline downloads in Android
  • Screen Capture Prevention
  • Ready to use Plugins for WordPress & Moodle
  • Easy Embed Options
  • Domain/IP Restrictions
  • Adaptive Bitrate Streaming
  • Video Analytics

FAQs

What are the advantages of CENC?

CENC offers interoperability between DRM systems, easier content distribution, reduced costs in managing multiple DRMs, and a simplified workflow.

Are CENC’s common encryption methods and algorithms secure for my videos?

CENC’s security depends on the DRM systems it supports and the underlying encryption algorithms. When implemented properly, CENC can provide strong protection for digital content.

Name some of the most common encryption methods

Advanced Encryption Standard (AES), Data Encryption Standard (DES), RC4, Triple Data Encryption Standard (3DES), and RSA.

The post CENC Common Encryption Methods and Algorithms Guide appeared first on VdoCipher Blog.

]]>
CMAF Streaming Guide to Enhance Video Delivery and User Experience https://www.vdocipher.com/blog/cmaf-streaming/ Tue, 16 May 2023 05:59:15 +0000 https://www.vdocipher.com/blog/?p=13847 The Common Media Application Format (CMAF) is a versatile media format designed to simplify streaming delivery, reduce storage costs, and enable adaptive streaming across various devices and platforms. In this comprehensive guide, we will explore the benefits, applications, and best practices for implementing CMAF in your video streaming workflow. Table Of Contents: What is CMAF […]

The post CMAF Streaming Guide to Enhance Video Delivery and User Experience appeared first on VdoCipher Blog.

]]>
The Common Media Application Format (CMAF) is a versatile media format designed to simplify streaming delivery, reduce storage costs, and enable adaptive streaming across various devices and platforms. In this comprehensive guide, we will explore the benefits, applications, and best practices for implementing CMAF in your video streaming workflow.

What is CMAF (Common Media Application Format)?

CMAF, the Common Media Application Format, is an innovative and extensible standard aimed at streamlining the end-to-end delivery of HTTP-based streaming content. It simplifies the process of broadcasting to multiple devices while reducing costs, lowering latency, and eliminating workflow complexities for content owners or broadcasters.

Formalized as ISO/IEC 23000-19, CMAF was introduced by Apple and Microsoft following the decline of Flash’s Real-Time Messaging Protocol (RTMP). The spotlight shifted to HTTP-based technologies, which facilitated adaptive bitrate streaming and supported various file containers and formats. However, content owners had to encode and store video streams in different versions to cater to a broad audience base, attracting significant storage and maintenance costs. CMAF addresses this issue by providing a uniform streaming container that works with both HLS and DASH protocols. It also employs chunked encoding and chunked transfer encoding to lower latency.

Explore More ✅

Secure Your Videos with VdoCipher Video Hosting

VdoCipher can help you stream your videos. You can host your videos securely, and you get various features such as Video API, CDN, Analytics and Dashboard to manage your videos easily.

Although CMAF is not a protocol in itself, it is a container and set of standards for single-approach video streaming that works with protocols like HLS and MPEG-DASH. It supports various existing codecs, making it more compatible with a wider range of devices.

In the realm of content delivery, CMAF (Common Media Application Format) boasts several unique media components – CMAF Tracks, Switching Sets, Aligned Switching Sets, Selection Sets, and Presentations. Efficient caching and multi-platform distribution are its strengths.

CMAF Purpose and Benefits

The Common Media Application Format (CMAF) was developed to address the challenges of streaming latency, complexity, and costs. By creating a standardized format for delivery, CMAF aims to provide several benefits:

  • Reduced storage costs: CMAF eliminates the need to create different content renditions for compatibility with various streaming formats, thereby cutting down repackaging and CDN maintenance costs.
  • Simplified workflow: It allows for common encryption (CENC), which means data does not need to be encrypted multiple times. Reliable DRM solutions can quickly and easily decrypt encrypted data, removing unnecessary operational complexity.
  • Reduced latency: CMAF enables publishers to leverage chunked encoding, ensuring speedy delivery of content. By transmitting smaller chunks in sequence, CMAF offers ultra-low latency (ULL), close to real-time (three seconds or fewer), as opposed to other streaming protocols that result in higher latency.
  • Universal encryption: It creates a standard encryption method that each device can easily decode, reducing file size and improving transmission and playback.
  • Reduced data redundancies: CMAF simplifies the streaming process by reducing data redundancies created by multiple encryption formats and duplicate files.
  • Lowered processing costs: By reducing the network bandwidth required to process and encode video content, CMAF contributes to overall cost savings in the streaming process.

CMAF’s ultra-low latency is particularly notable, as it uses the same infrastructure as other higher latency options without increasing costs. This makes CMAF an ideal choice for delivering real-time or near-real-time streaming experiences.
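The latency gain comes from publishing each chunk as soon as it is encoded, instead of waiting for a whole segment to finish. A simplified, hypothetical sketch of that pipeline (frame strings stand in for encoded media data):

```python
from typing import Iterator

def encode_chunks(frames: list[str], chunk_size: int) -> Iterator[list[str]]:
    # Yield each fixed-duration chunk as soon as it is "encoded",
    # rather than buffering the full segment before publishing.
    for i in range(0, len(frames), chunk_size):
        yield frames[i:i + chunk_size]

delivered = []
for chunk in encode_chunks([f"frame{i}" for i in range(10)], chunk_size=4):
    delivered.append(chunk)  # a CDN could forward this chunk immediately

assert delivered[0] == ["frame0", "frame1", "frame2", "frame3"]
assert len(delivered) == 3  # 4 + 4 + 2 frames; the last chunk may be shorter
```

Because the consumer receives the first chunk while later frames are still being produced, playback can begin well before encoding of the segment completes — the essence of chunked transfer encoding.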

Need of CMAF for Video Streaming

The world of video streaming is complex, with a multitude of codecs, media formats, protocols, and devices adding to the intricacy. Different media formats increase streaming latency and costs, making video delivery unnecessarily expensive and slow. Broadcasters aiming for a wider audience need to create multiple copies of each stream file in different file containers, which doubles the cost of packaging, storing, and caching on CDN servers.

Before CMAF, Apple’s HLS protocol depended on .ts (MPEG-TS) or MPEG container formats, while HTTP-based technologies like DASH relied on .mp4 (fMP4). With the advent of CMAF and Common Encryption (CENC), industry players like Microsoft and Apple now deliver content across HLS and DASH protocols using the fragmented MP4 (.fmp4) container.

CMAF streamlines interoperability of DRM (Digital Rights Management) solutions with the help of MPEG-CENC, further simplifying the video streaming process. Overall, CMAF significantly reduces latency, complexity, and costs associated with video streaming, making it an essential tool for the industry.

History and Developments of CMAF

The decline of Adobe’s Flash Player and RTMP in 2020 ushered in a new era of HTTP-based technologies for adaptive bitrate streaming. However, different streaming standards, such as MPEG-DASH and HLS, required different file containers like .mp4 and .ts, respectively.

In February 2016, Apple and Microsoft proposed a new uniform standard, the Common Media Application Format (CMAF), to the Moving Pictures Expert Group (MPEG) to reduce complexity and costs when transmitting video online. By June 2016, Apple announced support for the fMP4 format, and by July 2017, the CMAF specifications were finalized. In January 2018, the CMAF standard was officially published.

This achievement was more diplomatic than technical, as it brought about cooperation between major tech giants to establish a standardized container for seamless video streaming.

CMAF Encoding and Extensions

CMAF (Common Media Application Format) is a way to create special MP4 video files that can be easily streamed online. These files can be used with multiple streaming technologies, like DASH and HLS, making it easier for video providers to deliver content to different devices.

cmaf streaming logical workflow

File extensions like *.cmfv, *.cmfa, and *.cmft are suggested by the CMAF standard for video, audio, and text, but they are not strictly required. You can use other extensions like MP4 or M4A, and the files will still work with most CDNs (Content Delivery Networks). Some CDNs might have optimizations for specific extensions, so it’s essential to consider that.

CMAF is not related to transcoding (converting video files from one format to another). It is just a container format for storing video, audio, and text data. To create CMAF files, you can use tools like Bento4, Shaka, or FFmpeg, which help generate the right format for streaming.

Although CMAF is a useful format, its adoption has been slower than expected. It benefits CDN providers along with content creators and distributors. You can still create CMAF-like files using FFmpeg and other tools, even if there isn’t a dedicated CMAF muxer yet.

Comparing Elements Of CMAF, HLS And DASH

CMAF vs RTMP

RTMP (Real-Time Messaging Protocol) is a TCP-based protocol created by Macromedia (now owned by Adobe) for streaming audio, video, and data between a Flash player and a server. While RTMP’s low latency and minimal buffering are noteworthy, it lacks quality and scalability, with dwindling support. CMAF outshines RTMP, ensuring low latency (3-5 seconds), superior quality, and scalability.

CMAF vs HLS

HLS (HTTP Live Streaming) is an adaptive HTTP-based protocol developed by Apple for transporting video and audio data from media servers to end users’ devices. While HLS is widely supported and ensures optimal user experience with minimal buffering, it offers a latency of 5-20 seconds. CMAF can work with HLS to improve latency and standardize container files. However, Apple has developed Low-Latency HLS, which reduces latency and competes with CMAF, raising questions about Apple’s commitment to standardization.

Explore More ✅

Protect Your VOD & OTT Platform With VdoCipher Multi-DRM Support

Vdocipher helps several VOD platforms to host their videos securely, helping them to boost their video revenues.

CMAF vs WebRTC

WebRTC (Web Real-Time Communication) is a technology that enables real-time media exchange between browsers and devices. Its ultra-low latency (0.5 seconds) is impressive, but it’s tailored more for real-time video conferencing and feedback-enabled systems.

While each protocol has its advantages and drawbacks, CMAF provides a consistent approach that simplifies content delivery and improves viewer experience with low latency of 3-5 seconds.

CMAF HLS DASH
Manifest HLS Master Playlist (.m3u8) files Media Presentation Description (.mpd) file
Presentation Presentation defined by Master Playlist and associated Media Playlists with aligned start points. DASH Period and associated Adaptation Sets defined in MPD.
Selection Sets Sets of parallel tiers of Media Playlists defined by appropriate sets of EXT-X-STREAM-INF tags. Such tiers could be defined, e.g. for different codecs. A group of Adaptation Sets defined for each Period in MPD.
Switching Set A set of Media Playlists or Variant Streams that can be used by player to play presentation. DASH Adaptation Set
Track HLS Variant Stream (specified by Media Playlist URI and EXT-X-STREAM-INF tag describing its properties), restricted to single media type. DASH Representation restricted to single media type.
Header Media Initialization Section, defined by EXT-X-MAP tag DASH Initialization Segment
Segment Sequence of fMP4 segments within same variant stream Sequence of DASH segments within same representation
Fragment HLS fMP4 segments limited to single media type (i.e. audio or video) DASH segment limited to single media type
Chunk Chunk of fMP4 segment limited to integral number of samples DASH subsegment
Presentation profile Only unencrypted or ‘cbcs’ encrypted profiles are supported Unencrypted, and multiple types of encrypted profiles are supported

CMAF Supported Video Formats and Encoding

CMAF accommodates various video codecs, resolutions, and frame rates (think HDR and WCG content). CMAF’s popular video codecs – H.264 (AVC) and H.265 (HEVC) – offer efficient compression for high-quality streaming with low storage consumption. It also employs the ISOBMFF container, founded on the fragmented MP4 (fMP4) format, standardizing content delivery across HLS and DASH streaming protocols.

Audio codecs like AAC, AAC-LC, HE-AAC+ v1 & v2, and MP3 are also compatible with CMAF, ensuring high-quality audio complements video content for an immersive experience.

CMAF Streaming Tools for Encoding, Packaging and Playback

CMAF (Common Media Application Format) streaming tools help content providers encode, package, distribute, and playback their media content across various platforms and devices. Here is a list of some popular CMAF streaming tools and services:

Encoding tools

  • FFmpeg: A widely-used, open-source multimedia framework that supports encoding video and audio content in CMAF-compatible formats.
  • AWS Elemental MediaConvert: A cloud-based encoding service from Amazon Web Services that provides support for CMAF format conversion.

Packaging tools

  • Bento4: A set of open-source tools for working with fragmented MP4 (fMP4) files, which can be used to package CMAF-compatible content.
  • Shaka Packager: An open-source media packaging tool developed by Google that supports CMAF packaging along with other popular streaming formats.

Video players

  • VdoCipher Custom Player: An advanced video player with customization options and watermarking features. Also, has plugins, SDKs and APIs for easy integration.
  • Video.js: An open-source HTML5 video player that supports CMAF playback, making it compatible with various devices and platforms.
  • Shaka Player: An open-source, JavaScript-based player developed by Google that supports CMAF playback, along with other popular streaming formats.

These CMAF streaming tools can be combined to create a complete end-to-end streaming workflow, ensuring compatibility, security, and high-quality streaming experiences for viewers.

Combining DRM with CMAF for Video Security

To safeguard copyrighted content, Digital Rights Management (DRM) systems are essential. CMAF integrates seamlessly with major DRM systems (FairPlay and Widevine) via the Common Encryption (CENC) standard. Content providers can encrypt video streams with a single method compatible with multiple DRMs.

When implementing CMAF with DRM, content providers need to consider the following steps:

  • Encrypt the content: Use an encryption tool that supports CENC to encrypt the video and audio streams.
  • Generate DRM licenses: Set up a license server for each DRM system (FairPlay, Widevine) to generate and manage licenses for authorized users.
  • Integrate with a video player: Use a video player that supports multi-DRM playback and can request the appropriate license from the license server based on the end user’s device and platform.

By combining CMAF with DRM systems, content providers can ensure the security of their video content while maintaining compatibility and delivering a high-quality streaming experience to their users.
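The player-side step above — requesting the right license for the viewer’s platform — amounts to a simple lookup in practice. The sketch below is hypothetical: the platform names, URLs, and mapping are placeholders, not a real service’s configuration.

```python
# Hypothetical mapping of platform -> (DRM system, license server URL).
LICENSE_SERVERS = {
    "safari": ("FairPlay", "https://example.com/fairplay/license"),
    "chrome": ("Widevine", "https://example.com/widevine/license"),
    "android": ("Widevine", "https://example.com/widevine/license"),
}

def pick_drm(platform: str) -> tuple[str, str]:
    # Route the license request to the DRM system the device supports.
    try:
        return LICENSE_SERVERS[platform]
    except KeyError:
        raise ValueError(f"No DRM configured for platform: {platform}")

assert pick_drm("safari")[0] == "FairPlay"
assert pick_drm("android")[1].endswith("/widevine/license")
```

Because CENC lets one encrypted asset serve all of these DRM systems, only the license request differs per platform — the media files themselves are shared.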

Note: This implementation requires technical expertise. If you are looking for a CMAF based video hosting provider with DRM security, VdoCipher seems to be the obvious choice due to additional features.

FAQs

How does CMAF reduce latency?

CMAF uses chunked encoding and chunked transfer encoding processes to break digital content into smaller, manageable chunks with a fixed duration. These chunks can be published immediately upon encoding, allowing for near-real-time content delivery while the encoding process continues.

Is CMAF compatible with DRM (Digital Rights Management) systems?

Yes, CMAF supports major DRM systems like FairPlay and Widevine. It aims to standardize encryption and DRM systems through Common Encryption (CENC), which simplifies content protection and ensures secure streaming.

Can CMAF be used with popular streaming protocols like HLS, DASH, and RTMP?

CMAF works seamlessly with HLS and DASH protocols, standardizing the container format for cross-protocol content delivery. But it doesn’t directly connect with RTMP, an older streaming protocol designed for Adobe Flash Player.

How does CMAF compare to WebRTC in terms of latency?

Though CMAF excels in low-latency streaming (3-5 seconds latency), WebRTC trumps it with ultra-low latency (0.5 seconds). While CMAF caters to most live streaming situations, WebRTC is the go-to for real-time communication and applications demanding minimal delay.

]]>
React & React Native Video Playback: Simple Guide https://www.vdocipher.com/blog/react-native-video Wed, 15 Feb 2023 05:30:25 +0000 https://www.vdocipher.com/blog/?p=7325 Are you an entrepreneur or an enterprise IT leader with plans to launch client-facing web and mobile apps? You will likely use videos heavily since these are increasingly popular. You might be considering the use of React for your proposed web app. Furthermore, you probably want to use React Native for the proposed mobile apps. […]

The post React & React Native Video Playback: Simple Guide appeared first on VdoCipher Blog.

]]>
Are you an entrepreneur or an enterprise IT leader with plans to launch client-facing web and mobile apps? You will likely use videos heavily since these are increasingly popular. You might be considering the use of React for your proposed web app. Furthermore, you probably want to use React Native for the proposed mobile apps. These frameworks enjoy plenty of popularity. How to incorporate the React video and React Native Video playback functionality? Read on, as we explain that. 

The Importance Of Video Content

If you want to capture the attention of your potential and existing clients, then focus on video content. Video content is a key part of content marketing. The reasons for this are as follows:

  • A video makes it easy to explain your product to your potential clients.
  • The video content delivers a high return on your investments in marketing.
  • You can market video content in various ways, e.g., “stories” on social media platforms like Facebook, live videos, webinars, etc. This offers more flexibility.
  • Videos drive more user engagement.
  • You improve your SEO rankings with video content.

Observers state that in 2020, streaming video content amounted to 75% of all Internet traffic. They believe that this will increase to 82% by 2022. 

What Are React and React Native?

Before discussing the technicalities of incorporating video content in React and React Native apps, let’s talk about React and React Native briefly. These are two popular frameworks for modern application development. 

Facebook developed React in 2011. This web development library is also called React.js. The company used this JavaScript-based open-source framework in the Facebook timeline. Facebook acquired Instagram in 2012. It used React.js in the Instagram timeline too. 

React.js offers the following advantages:

  • You can use React.js for both front-end and server-side development. 
  • Websites and web apps developed using React have good speed.
  • React.js has a component-based architecture, which promotes the reuse of components. This expedites development.
  • React improves the SEO of a website.
  • React.js enjoys high popularity. A vibrant developers’ community supports it, and this community has created many useful development tools.

React Native is a cross-platform mobile development framework from Facebook. This open-source JavaScript-based framework uses React.js. React Native can deliver a near-native user experience. You can run the app on both Android and iOS.

React Native offers the following advantages:

  • It offers a near-native experience since React Native compiles to native app components. 
  • React Native generates platform-specific code for both Android and iOS. This further helps it deliver a near-native user experience.
  • Since React Native is based on JavaScript, it’s easy to learn.
  • React Native supports “hot reloading”, which expedites app development.
  • You can use the ready-to-use UI libraries offered by the React Native ecosystem. This helps you to offer a smooth user experience. 

Setting The Context: Key Challenges In Implementing React Video & React Native Video Playback

Let’s assume that a user of your proposed React or React Native app clicks the button to play a video. A lot of processing takes place before the user sees the first frame of the video. You need to address the challenges that occur during this process.

The video player will download the entire video at one go only if the video is very small. We call this process “single-source playback”, and this isn’t a recommended approach in most cases. Very few real-life use cases have videos small enough for “single-source playback”. 

Most of the time, video download-and-playback software splits a video into pieces. We commonly use the term “chunks” for these pieces. The software downloads these chunks in a series. Therefore, a device will always download small amounts of data at one time. 

On top of that, a video player must choose the right video quality that’s suitable for the network conditions that it faces. We refer to this process as “choosing a bitrate”. If a video player faces slow network conditions, then it should step down to a smaller bitrate. A failure to do so results in that “buffering” message. This can put off users, and they might move away from the video content. 

Video players use ABS (Adaptive Bitrate Streaming) to switch between different levels of video quality while downloading chunks. We now discuss the software solutions to achieve ABS effectively.
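The bitrate-selection step described above can be sketched as a simple rule: pick the highest rendition that fits within the measured bandwidth, leaving a safety margin. The bitrate ladder and the 0.8 safety factor below are illustrative assumptions, not any particular player’s algorithm.

```python
def choose_bitrate(renditions_kbps: list[int], measured_kbps: float, safety: float = 0.8) -> int:
    # Pick the highest rendition that still fits under the usable bandwidth;
    # fall back to the lowest rendition if none fits, to keep playback going.
    budget = measured_kbps * safety
    candidates = [r for r in sorted(renditions_kbps) if r <= budget]
    return candidates[-1] if candidates else min(renditions_kbps)

ladder = [300, 700, 1500, 3000, 6000]  # example bitrate ladder in kbps
assert choose_bitrate(ladder, 4000) == 3000  # 4000 * 0.8 = 3200; highest fit is 3000
assert choose_bitrate(ladder, 900) == 700    # 900 * 0.8 = 720; highest fit is 700
assert choose_bitrate(ladder, 200) == 300    # nothing fits; step down to the lowest
```

Real players refine this with buffer occupancy and throughput smoothing, but re-evaluating a rule like this before each chunk download is what prevents the “buffering” message under slow network conditions.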

React and React Native Video Playback

Technology Solutions To Achieve ABS Effectively For React Video & React Native Video Playback

Three technology solutions exist that help video players achieve ABS effectively. These are as follows:

HLS (HTTP Live Streaming)

Apple developed HLS streaming and launched it in 2009. This solution divides a video into chunks of 10 seconds. It then creates indexes for all of these chunks in a separate playlist file. 

HLS is the only native ABS format for operating systems like iOS and OS X. If you are building a video player app for these operating systems, then you need to use HLS.

#EXTM3U

#EXT-X-PLAYLIST-TYPE:EVENT

#EXT-X-TARGETDURATION:10

#EXT-X-VERSION:4

#EXT-X-MEDIA-SEQUENCE:0

#EXTINF:10.0,

fileSequence0.ts

#EXTINF:10.0,

fileSequence1.ts

#EXTINF:10.0,

fileSequence2.ts

#EXTINF:10.0,

fileSequence3.ts

#EXTINF:10.0,

fileSequence4.ts

Explore More ✅

Stream Your Content Securely On React & React Native Apps With VdoCipher

VdoCipher helps provide end-to-end video solutions, right from hosting, encoding, and encryption to the player. On top of it, you get APIs to manage videos, players, and more.

DASH (Dynamic Adaptive Streaming over HTTP)

Multiple technology companies including Google and Microsoft collaborated to develop DASH. They did that in response to a request from MPEG in 2009. DASH is a relatively new standard since it was published in 2012. The developers of DASH wanted to combine earlier standards like MSS and HLS into one standard. However, Apple devices still only support HLS. 

This technology solution splits a video into chunks of 2-4 seconds. This makes the downloading process faster. In turn, this results in better performance. 

<?xml version="1.0" encoding="utf-8"?>
<MPD xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="urn:mpeg:dash:schema:mpd:2011"
xmlns:xlink="http://www.w3.org/1999/xlink"
xsi:schemaLocation="urn:mpeg:DASH:schema:MPD:2011 http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-DASH_schema_files/DASH-MPD.xsd"
profiles="urn:mpeg:dash:profile:isoff-live:2011"
type="dynamic"
minimumUpdatePeriod="PT3S"
suggestedPresentationDelay="PT3S"
availabilityStartTime="2017-10-04T03:31:48"
publishTime="2017-10-04T22:28:30"
timeShiftBufferDepth="PT15.8S"
minBufferTime="PT3.9S">
<ProgramInformation>
<Title>RTSP Session</Title>
</ProgramInformation>
<Period start="PT0.0S">
<AdaptationSet contentType="video" segmentAlignment="true" bitstreamSwitching="true" frameRate="15/2">
<Representation id="0" mimeType="video/mp4" codecs="avc1.4d0020" bandwidth="2000000" width="1280" height="960" frameRate="15/2">
<SegmentTemplate timescale="15360" initialization="init-stream$RepresentationID$.m4s" media="chunk-stream$RepresentationID$-$Number%05d$.m4s" startNumber="22732">
<SegmentTimeline>
<S t="1047462831" d="30639" />
<S d="61438" />
<S d="30709" />
<S d="61457" />
</SegmentTimeline>
</SegmentTemplate>
</Representation>
</AdaptationSet>
</Period>
</MPD>

MSS (Microsoft Smooth Streaming)

Microsoft built this solution, which downloads small chunks of a video in a series. MSS caches the small chunks of video that it downloads, using the edge of the network for this caching. The client side of an app using this solution requests these chunks and receives them quickly.

The video player needs a manifest file from the server before it can start playback. The manifest file has details like the duration of the video, locations of the chunks, and bitrates available to the video player. 

<?xml version="1.0" encoding="UTF-8"?>
<SmoothStreamingMedia MajorVersion="2"
      MinorVersion="0" Duration="2300000000" TimeScale="10000000">
   <CustomAttributes>
      <Attribute Name = "timeScaleZeroPoint" Value = "..." />
   </CustomAttributes>
   <Protection>
      <ProtectionHeader SystemID="{9A04F079-9840-4286-AB92E65BE0885F95}">
      Base-64 Encoded Data
      </ProtectionHeader>
   </Protection>
   <!-- <StreamIndex Type="video">
            describes the video streams available at each quality level-->
   <StreamIndex
      Type = "video"
      Chunks = "115"
      QualityLevels = "6"
      MaxWidth = "720"
      MaxHeight = "480"
      TimeScale="10000000"
      Url="QualityLevels({bitrate},{CustomAttributes})/Fragments(video={start_time})"
         Name = "video">
      <QualityLevel Index="0" Bitrate="1536000" FourCC="WVC1"
         MaxWidth="720" MaxHeight="480"
         CodecPrivateData = "270000010FCBEE1670EF8A16783BF180C9089CC4AFA11C0000010E1207F840" >
            <CustomAttributes>
               <Attribute Name = "hardwareProfile" Value = "10000" />
            </CustomAttributes>
         </QualityLevel>
         <QualityLevel Index="1" Bitrate="1536000" FourCC="WVC1"
                     MaxWidth="720" MaxHeight="480"
                     CodecPrivateData = "270000010FCBEE1670EF8A16783BF180C9089CC4AFA11C0000010E1207F840" >
            <CustomAttributes>
               <Attribute Name = "hardwareProfile" Value = "1000" />
            </CustomAttributes>
         </QualityLevel>
         <QualityLevel Index="2" Bitrate="1024000" FourCC="WVC1"
            MaxWidth="720" MaxHeight="480"
            CodecPrivateData = "270000010FCBEE1670EF8A16783BF180C9089CC4AFA11C0000010E1207F840">
            <CustomAttributes>
               <Attribute Name = "hardwareProfile" Value = "1000" />
            </CustomAttributes>
         </QualityLevel>
      <!-- Additional quality levels, up to a total of ‘QualityLevels’
         attribute, last one below -->
         <QualityLevel Index="5" Bitrate="307200" FourCC="WVC1"
            MaxWidth="720" MaxHeight="480"
            CodecPrivateData = "270000010FCBEE1670EF8A16783BF180C9089CC4AFA11C0000010E1207F840">
            <CustomAttributes>
               <Attribute Name = "hardwareProfile" Value = "1000" />
            </CustomAttributes>
         </QualityLevel>
         <!-- fragment boundary definitions: specify the duration of
            each fragment in TimeScale increments (default is 100nsec) -->
         <c n="0" d="19680000">
         <!-- fragment boundary definitions: specify the duration
               of each fragment in TimeScale increments (default is
               100nsec) -->

                     <f i="0" s="1525" q="2122"/>
                     <f i="1" s="1406" q="1640"/>
                     <f i="2" s="1217" q="875"/>
                     <f i="3" s="1107" q="1428"/>
                     <f i="4" s="607" q="928"/>
                     <f i="5" s="407" q="428"/>
         </c>
         <c n="1" d="8980000">
                     <f i="0" s="1525" q="2122"/>
                     <f i="1" s="1406" q="1640"/>
                     <f i="2" s="1217" q="875"/>
                     <f i="3" s="1107" q="1428"/>
                     <f i="4" s="607" q="928"/>
                     <f i="5" s="407" q="428"/>
         </c>
         ... <!-- fragment definitions omitted -->
         <c n="114" d="50680000">
                     <f i="0" s="1525" q="2122"/>
                     <f i="1" s="1406" q="1640"/>
                     <f i="2" s="1217" q="875"/>
                     <f i="3" s="1107" q="1428"/>
                     <f i="4" s="607" q="928"/>
                     <f i="5" s="407" q="428"/>
                  </c>
         <!-- end fragment definitions -->
   </StreamIndex>
   <!-- a stream of pictures designed to provide film-strip navigation
      (Zoetrope) around the presentation -->
   <StreamIndex
       Type = "video"
       ParentStreamIndex = "video"
       Subtype = "ZOET"
       FourCC = "JPEG"
       MaxWidth = "100"
       MaxHeight = "100"
       Url = "QualityLevels({bitrate})/Fragments(zoetrope={start_time})"
       Name="zoetrope">
       <QualityLevel Index = "0" Bitrate = "0" />
       <!-- this data is much sparser - every 10 seconds or so -->
       <c t = "0"/>
       <c t = "100000000" />
       <c t = "200000000" />
       <!-- additional data omitted for clarity -->
   </StreamIndex>

   <StreamIndex Type = "text" ParentStreamIndex = "video"
      ManifestOutput = "true" Subtype = "CTRL"
      Url = "QualityLevels({bitrate})/Fragments(control={start_time})"
      Name = "control">
      <QualityLevel Index = "0" Bitrate = "0" />
      <c t = "0">
      <!-- data is a Base64-encoded version of:
      <AdInsert Type = "midroll" Duration = "30s" Time = "250000000"/>-->
         <f i = "0"> PEFkSW5zZXJ0IFR5cGUgPSAibWlkcm9sbCIgRHVyYXRpb24gPSAiMzBzIiBUaW1l
   ID0gIjI1MDAwMDAwMCIgLz4=
         </f>
      </c>
   </StreamIndex>
      <!-- <StreamIndex Type="audio"> describes the audio streams
      available at each bitrate-->

   <StreamIndex
      Type = "audio"
      Chunks = "147"
      Language = "eng"
      QualityLevels = "1"
      TimeScale="10000000"
      Url = "QualityLevels({bitrate},{CustomAttributes})/Fragments(audio={start_time})"
      >

      <QualityLevel Index="0" Bitrate="94208" FourCC="WMA2"
         SamplingRate="48000" Channels="2" BitsPerSample="16"
         PacketSize="1115" HardwareProfile="1000"
         CodecPrivateData= "6101020044AC0000853E00009D0B10000A00008800000F0000000000"/>

      <!-- fragment boundary definitions: specify the duration of
         each fragment in TimeScale increments -->
      <c n="0" d="18770000"><f i="0" s="45"/></c>
      <c n="1" d="18840000"><f i="0" s="41"/></c>
      <c n="146" d="9290000"><f i="0" s="41"/></c>
      <!-- end fragment boundary definitions -->
   </StreamIndex>

   <!-- Additional audio and video feeds can be made available by
   adding <StreamIndex Type="audio" Name="..."> and
      <StreamIndex Type="video" Name="...">
   tags to this manifest and adding an additional Name attribute
   that discriminates for the default video/audio feed. E.g.:
      <StreamIndex Type="video" Name="alternate-angle"> ...
   -->

   <!-- specifies a script-stream [Type="Text" Subtype="SCMD"]
   The absence of a Url attribute and presence of a <Content>
   element indicates that the content is embedded in the manifest
   rather than requested in fragments from the server
   -->
   <StreamIndex Type="text" Subtype="CAPT" Name="captions">
      <QualityLevel Index = "0" FourCC = "DFXP" />
      <c t = "0" />
      <c t = "20000000" />
      <c t = "40000000" />
      <!-- additional fragments omitted for clarity -->
   </StreamIndex>

   <StreamIndex Type="text" Subtype="SCMD" Language="en-us"
      TimeScale="10000000" >
      <Content>
         <ScriptCommand Time="REFERENCE_TIME"
            Type="Some string" Command="some string"/>
         <ScriptCommand Time="REFERENCE_TIME2"
            Type="Some string2" Command="some string2"/>
      </Content>
   </StreamIndex>

   <!-- specifies markers/chapters [Type="Text" Subtype="CHAP"]
    The absence of a Url attribute and presence of a <Content>
   element indicates that the content is embedded in the manifest
   rather than requested in fragments from the server
   -->

   <StreamIndex Type="text" Subtype="CHAP" Language="eng"
      TimeScale="10000000">
      <Content>
         <Marker Time="REFERENCE_TIME" Value="some string" />
         <Marker Time="REFERENCE_TIME" Value="some string" />
      </Content>
   </StreamIndex>
</SmoothStreamingMedia>
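The QualityLevel entries in a manifest like the one above are what an adaptive player chooses between at runtime. As a rough sketch (the bitrates below are taken from the manifest's video QualityLevel entries; the selection heuristic is a deliberate simplification of what real ABR algorithms do), a client might pick the highest rendition whose bitrate fits within the measured bandwidth:

```javascript
// Simplified rendition selection: pick the highest bitrate that fits
// within the measured bandwidth. Real ABR algorithms also consider
// buffer level, throughput history, and switch-up/down hysteresis.
function pickRendition(bitrates, measuredBps) {
  const sorted = [...bitrates].sort((a, b) => b - a); // descending
  for (const b of sorted) {
    if (b <= measuredBps) return b;
  }
  return sorted[sorted.length - 1]; // fall back to the lowest rendition
}

// Bitrates from the manifest's video QualityLevel entries.
const bitrates = [1536000, 1024000, 307200];

console.log(pickRendition(bitrates, 2000000)); // → 1536000 (plenty of bandwidth)
console.log(pickRendition(bitrates, 1200000)); // → 1024000 (mid-range connection)
console.log(pickRendition(bitrates, 100000));  // → 307200 (constrained connection)
```

The server exposes each rendition at a separate URL (the `QualityLevels({bitrate})` template in the manifest), so switching renditions is just a matter of requesting the next fragment from a different bitrate path.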

React video and React Native Video Playback Frameworks

Now that you know about the different technology solutions to ensure effective ABS, you might wonder how to incorporate them into your app. You can do that by using online video player frameworks. 

We now review a few such frameworks. Since we are talking about React video and React Native Video playback, we focus on JavaScript frameworks. We highlight whether each supports MSS, HLS, or DASH. These React video and React Native Video playback frameworks are as follows:

1. Video.JS

Video.JS is a popular React video and React Native Video playback framework. This free and open-source framework is built on HTML5. Its developers first launched it in 2010, and it powers 450,000+ websites at the time of writing. Companies like LinkedIn and Tumblr use it. 

Video.js offers the following key advantages:

  • It offers a consistent look-and-feel across different browsers.
  • Video.JS supports video playback on desktops and mobile devices.
  • This framework supports HTML5.
  • Video.JS supports popular adaptive video formats like HLS and DASH.
  • You can play videos from YouTube, Vimeo, and other social video platforms with added plugins.
  • You can import Video.JS easily using NPM (Node Package Manager).
  • This framework offers comprehensive documentation, which makes it easy to integrate for React video and React Native Video playback.
  • Video.JS offers many community-built plugins.
  • You can use extra CSS to adjust the player's style according to your requirements.
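As a minimal sketch of how Video.JS is typically wired up (the element id, source URL, and option values here are illustrative assumptions, not taken from this article), the player factory takes the id of a `<video>` element on the page and an options object. Wrapping the call in a function keeps the `videojs` dependency explicit:

```javascript
// Minimal Video.js setup sketch. `videojs` is the factory function
// exported by the video.js package; elementId refers to a <video>
// tag on the page, and sourceUrl is an HLS manifest URL (assumed).
function initVideoJs(videojs, elementId, sourceUrl) {
  return videojs(elementId, {
    controls: true, // show the built-in control bar
    fluid: true,    // scale the player to fit its container
    sources: [{ src: sourceUrl, type: 'application/x-mpegURL' }],
  });
}
```

In a page you would call `initVideoJs(videojs, 'my-player', 'https://example.com/master.m3u8')` after including the video.js script and a `<video id="my-player" class="video-js">` element.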

2. HLS.JS

HLS.JS is a well-known video player library. It’s open-source and free. As the name indicates, HLS.JS supports the HLS format. HLS.JS uses HTML5 video and MediaSource Extensions to deliver the React video and React Native Video playback functionality.

The developers of this library launched it in 2015. Twitter and The New York Times are among the popular companies that use HLS.JS. 

HLS.JS offers several advantages, which are as follows:

  • It’s lightweight.
  • HLS.JS supports all popular browsers like Chrome, Firefox, Safari, etc. 
  • This video playback library supports all the key desktop and mobile platforms. 
  • You can import HLS.JS easily using NPM.
  • HLS.JS offers detailed documentation, which makes it easy to integrate it into your React or React Native app.
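A common HLS.JS integration pattern looks like the sketch below (the stream URL is a placeholder; `Hls.isSupported`, `loadSource`, and `attachMedia` are HLS.JS's documented API). HLS.JS is used where MediaSource Extensions are available, with a fallback to Safari's native HLS playback:

```javascript
// Returns true when the browser can play HLS natively (e.g. Safari).
function canUseNativeHls(videoEl) {
  return typeof videoEl.canPlayType === 'function' &&
         videoEl.canPlayType('application/vnd.apple.mpegurl') !== '';
}

// Attach an HLS source to a <video> element, preferring Hls.js (MSE),
// falling back to native playback. Returns which path was taken.
function attachHls(Hls, videoEl, src) {
  if (Hls.isSupported()) {
    const hls = new Hls();
    hls.loadSource(src);
    hls.attachMedia(videoEl);
    return 'hlsjs';
  }
  if (canUseNativeHls(videoEl)) {
    videoEl.src = src; // Safari and iOS play HLS natively
    return 'native';
  }
  return 'unsupported';
}
```

In the browser you would call `attachHls(Hls, document.getElementById('video'), 'https://example.com/stream.m3u8')` after importing `hls.js`.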

3. DASH.JS

Developers working with the DASH Industry Forum created DASH.JS, a popular free and open-source React video and React Native Video playback framework. DASH.JS supports the MPEG-DASH format, popularly known as DASH. 

MPEG-DASH is browser-agnostic. As we discussed earlier, it combines many benefits of ABS formats like MSS and HLS, making it a robust ABS format. The fact that DASH.JS supports it is a key advantage of this video player framework.

The other key advantages of DASH.JS are as follows:

  • DASH.JS is a reliable and robust framework.
  • This framework offers well-tested adaptation (ABR) algorithms.
  • This framework is codec- and browser-agnostic.
  • DASH.JS offers a wide range of features like in-band events, multiple periods, etc.
  • You can import DASH.JS into your project using NPM, and it offers comprehensive documentation.
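dash.js initialization is similarly compact: a MediaPlayer is created and pointed at an MPD manifest. The sketch below wraps dash.js's documented `MediaPlayer().create().initialize(...)` call (the manifest URL in the usage note is a placeholder):

```javascript
// Initialize a dash.js player on a <video> element. autoPlay controls
// whether playback starts as soon as the manifest is loaded.
function initDash(dashjs, videoEl, manifestUrl, autoPlay = true) {
  const player = dashjs.MediaPlayer().create();
  player.initialize(videoEl, manifestUrl, autoPlay);
  return player;
}
```

In the browser you would call `initDash(dashjs, document.querySelector('video'), 'https://example.com/stream.mpd')` after loading the `dash.all.min.js` script.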

Implementing the React Video Playback Functionality

To implement the React video playback functionality in a React JS app, you first need to take care of the dependencies. You can use the “React JS Media” library for this. You can install it using NPM.

Subsequently, you need to install the following components:

  • “ReactVideo”, which is for a native video player;
  • “ReactAudio”, which is for an audio player;
  • “FacebookPlayer”, which is for videos from Facebook;
  • “Image”, which is for images that are responsive and optimized;
  • “YouTubePlayer”, which is for YouTube videos.

Take the following steps:

  • Import the native video player from the relevant library. You need to pass the necessary arguments to it. Use it in your app for normal video files supported by browsers. 
  • You need to import the audio player from the “React JS Media” library and pass arguments.
  • Import the YouTube player and provide the necessary arguments. 
  • Import the Facebook player to play videos from Facebook. Provide the necessary arguments. 

Implementing the React Native Video Playback Functionality

React Native doesn’t provide video or audio playback functionality out of the box. However, there are fully functional libraries developed and maintained by the React Native developer community for media playback.

The most important is react-native-video, which has extensive support for both the Android and iOS platforms. The library provides video support by delegating to the corresponding native video libraries on the target platform, i.e. ExoPlayer on Android and AVFoundation on iOS. The library supports playback of the most popular video streaming formats, such as DASH and HLS, and supports adaptive multi-bitrate playback, captions, and many low-level video APIs.

If you have secure content protected by DRM, you can configure the react-native-video library to play that as well. It supports playback of FairPlay DRM content on iOS, and PlayReady and Widevine DRM content on Android devices.

You first need to take care of the dependencies when implementing the React Native video playback functionality. You can use the following libraries:

  • “React Navigation”, which is for navigating the application;
  • “Redux”, a predictable state container for JavaScript-based apps;
  • “React-redux”, which has the React bindings for Redux;
  • “Recompose”, which helps you to write the logic of the component;
  • “Reselect”, a “selector” library for Redux.

You can use NPM to install them.

The execution of the video playback function involves the following arguments/parameters:

  • “source”: This refers to the source of the video that you want to display.
  • “resizeMode”: This is a string. It describes how the app should scale the video for display. This argument can have “stretch”, “contain”, or “cover” as values. 
  • “shouldPlay”: It’s a Boolean. This indicates whether a video is supposed to play.
  • “useNativeControls”: This is another Boolean. If this is set to “true”, then it displays the native playback controls like “play” and “pause”. 
  • “onLoad”: This is a function. The React Native app calls it when the video has been loaded.
  • “onError”: This is another function. The React Native app calls it if loading or playback encounters a fatal error. The “onError” function passes a string with an error message as the parameter.  
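The parameters above map directly onto a props object. Below is a minimal plain-JS sketch of building such a props object (the URI is a placeholder, and the component that would ultimately consume these props, e.g. expo-av's `Video`, is an assumption on our part, since the article lists only the parameters):

```javascript
// Build the video playback props described above. In a real React
// Native component these would be spread onto a <Video> element.
function buildVideoProps(uri) {
  return {
    source: { uri },          // where the video comes from
    resizeMode: 'contain',    // letterbox the video inside the view
    shouldPlay: true,         // start playback automatically
    useNativeControls: true,  // show native play/pause controls
    onLoad: (status) => console.log('video loaded', status),
    onError: (message) => console.warn('playback error:', message),
  };
}
```

The two callbacks match the article's description: `onLoad` fires once the video has loaded, and `onError` receives an error-message string on a fatal loading or playback failure.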

Developing a React.JS or React Native app: Everything else remains the same

The above guide touches upon the React video and React Native Video playback functionality; however, everything else about developing a React.JS or React Native app remains the same. Focus on the following:

  • Forming a competent project team that focuses on client value and collaboration;
  • Managing the functional and non-functional requirements of your project effectively;  
  • Making the right architectural decisions and choosing a suitable architectural pattern;
  • Designing and developing the app to deliver the functional requirements;
  • Delivering the non-functional requirements like performance, scalability, security, etc.;
  • Managing the project effectively and delivering value to the client. 

Solution by Video Hosting Providers like VdoCipher for React Native App

Instead of using an open-source player, you can choose an online video player from a ready-to-use solution provider. Opting for a cloud solution gives you additional features that you would not get with an open-source solution. These features can be encryption-based, which stops the download of your videos and helps you put a stop to piracy.

Opting for proprietary software gives you the following benefits:

Support: You get dedicated support that can easily solve all your problems. With an open-source player, by contrast, you’re largely on your own and dependent on the community.

Complexity: If you don’t have the right personnel, it can be really difficult to set up your own video player, hosting, APIs, and more. With a paid solution, you won’t have to worry about setting up the infrastructure: you get a finished product and a better user interface with which you can easily set up your videos.

Security: With open-source players, vulnerabilities are visible to everyone, since the code itself is public. Any malicious user can find them and take advantage of them. With a paid solution, you won’t have to worry about this. On top of that, you get features that protect your videos from pirates: with an encrypted player along with DRM-based security, you are good to go.

VdoCipher provides a React Native SDK, which allows secure, encrypted, adaptive React Native video playback. Everything from transcoding, hosting, CDN, and DRM encryption to the encrypted player is taken care of. The end-to-end solution ensures that there are no issues in React video and React Native Video playback, and that video streaming stays smooth and secure regardless of the user’s location, device, browser, and internet speed.

Conclusion

Video content is already popular, and its appeal keeps growing. React and React Native are popular frameworks for developing modern apps. If you plan to build React video or React Native video playback functionality, you need to know about the relevant adaptive video formats, as well as the best video player frameworks. We reviewed three such frameworks and evaluated their advantages. Analyze your project requirements carefully before choosing the right video player framework. You can refer to VdoCipher’s React Native video SDK to learn more about how it enables you to securely stream and download DRM-protected videos through your React Native app.

The post React & React Native Video Playback: Simple Guide appeared first on VdoCipher Blog.

]]>
Custom variables as watermark on WordPress videos https://www.vdocipher.com/blog/custom-variables-watermark-on-wordpress-videos/ https://www.vdocipher.com/blog/custom-variables-watermark-on-wordpress-videos/#respond Fri, 06 Jan 2023 01:48:27 +0000 https://www.vdocipher.com/blog/?p=361 Please visit Add Text to Videos with Watermark for a detailed introduction to adding a watermark to your videos. This particular blog explains what is going on under the hood of the WP plugin, and is useful only if you are adding your own custom-built variables as part of the watermark. Currently, name, IP, and […]

The post Custom variables as watermark on WordPress videos appeared first on VdoCipher Blog.

]]>
Please visit Add Text to Videos with Watermark for a detailed introduction to adding a watermark to your videos. This particular blog explains what is going on under the hood of the WP plugin, and is useful only if you are adding your own custom-built variables as part of the watermark. Currently, name, IP, and email can be shown as part of the watermark.

A watermark on videos adds extra security against screen capture by overlaying variables such as email, IP, or date information on the videos. Custom variables are now supported in plugin version 1.6.

Default WordPress fields that can be added

Our plugin has been configured to replace the following strings in the annotation code by default:

  • {name} – Current User display name
  • {email} – Current User email
  • {username} – Current User Login
  • {id} – Current User ID

Until version 1.5 of our WordPress video hosting plugin, the watermark on videos could only include a limited number of dynamic variables. With version 1.6, we have added filter hooks on the annotation code to enable other plugins or themes to change it.

Custom filter addition to the WordPress hook

You can now add a custom filter to the hook `vdocipher_annotate_preprocess`. Example code for adding a custom filter is:

function customfunc($vdo_annotate_code){
    // Replace the {var1} token in the annotation code with a custom value.
    $customVariable = "Hello world";
    $vdo_annotate_code = str_replace('{var1}', $customVariable, $vdo_annotate_code);
    return $vdo_annotate_code;
}

add_filter('vdocipher_annotate_preprocess', 'customfunc');

Display WordPress Default field like User Fullname

An example code to display the full name is as follows:

function customvdofunc($vdo_annotate_code){
    $fullname = "";
    if (is_user_logged_in()) {
        $current_user = wp_get_current_user();
        $firstname = $current_user->user_firstname;
        $lastname = $current_user->user_lastname;
        $fullname = $firstname . " " . $lastname;
     }
     $vdo_annotate_code = str_replace('{fullname}', $fullname, $vdo_annotate_code);
     return $vdo_annotate_code;
}
add_filter('vdocipher_annotate_preprocess', 'customvdofunc');

This replaces the string ‘{fullname}’ in the watermark code with the full name of the logged-in user.

JSON Code addition to the VdoCipher WordPress Plugin

The above code enables you to replace the token {var1} with the value of $customVariable. You can then use an annotation code like:

[
{'type':'rtext', 'text':'Your IP : {ip}', 'alpha':'0.8', 'color':'0xFF0000','size':'12','interval':'5000'},
{'type':'text', 'text':'{var1}', 'alpha':'0.5' , 'x':'150', 'y':'100', 'color':'0xFF0000', 'size':'12'}
]

After passing through the above filter, this code becomes:

[
{'type':'rtext', 'text':'Your IP : {ip}', 'alpha':'0.8', 'color':'0xFF0000','size':'12','interval':'5000'},
{'type':'text', 'text':'Hello world', 'alpha':'0.5' , 'x':'150', 'y':'100', 'color':'0xFF0000', 'size':'12'}
]

This function can be placed in the functions.php file in your theme. It is recommended to create a child theme before making such edits.

Example Steps to configure custom field “Phone number” as a watermark

You can configure user-specific details like a phone number as a watermark using the VdoCipher WordPress video plugin’s annotation field and an add_filter function in your theme’s functions file, with the video embedded via the plugin’s WordPress shortcode.

Note: This phone number is a custom field created for illustration using a plugin named "Advanced Custom Fields"; the name of this custom field is phone_number. You might not need to configure such custom fields yourself: the membership plugin you are using may already offer custom-field functionality. Managing the phone number on a user’s profile is handled on your WordPress setup side; your WordPress developers can check and implement it. For this example, the sample WordPress viewer playing the video has the phone number 887788778877 on his profile.

custom variables like phone number addition in wordpress user profile

custom field phone number addition via plugin

Below is code demonstrating a sample function for displaying the saved phone number as a watermark, along with the plugin setup.

Additions in functions.php WordPress file

  1. Login to your WordPress account having theme editor access.
  2. Open functions.php through Appearance>Theme File Editor or Tools>Theme File Editor
  3. Add the custom PHP function given below to the functions.php file and save the file.
function customvdofunc($vdo_annotate_code){
    // Default to an empty string so the token is still replaced
    // when no user is logged in.
    $phone_number = "";
    if (is_user_logged_in()) {
        $current_user = wp_get_current_user();
        $phone_number = $current_user->phone_number;
    }
    $vdo_annotate_code = str_replace('{Phonenumber}', $phone_number, $vdo_annotate_code);
    return $vdo_annotate_code;
}
add_filter('vdocipher_annotate_preprocess', 'customvdofunc');

Additions in functions php WordPress file

Adding JSON to the VdoCipher WordPress Plugin field

You need to add the following JSON code in the plugin settings to call the custom function and display the viewer’s phone number as a watermark.


[{"type":"rtext", "text":"{Phonenumber}", "alpha":"0.90","color":"#FFFF00","size":"12","interval":"5000","skip":5000}]

Adding JSON to the VdoCipher WordPress Plugin field

On playback, the watermark will show the phone number of the viewer playing the video. Similarly, you can expose other custom or WordPress data fields via functions.php and display them by adding more lines of JSON code in the VdoCipher plugin field.

watermark showing the phone number of the viewer playing the video

The post Custom variables as watermark on WordPress videos appeared first on VdoCipher Blog.

]]>
https://www.vdocipher.com/blog/custom-variables-watermark-on-wordpress-videos/feed/ 0