

What is a capture card?

A capture card is a device that lets you record or stream video and audio from one device on another. The source is usually a game console, camera, or another computer, and the destination is usually your PC.

It works by taking the video and audio signals from an external device (like a camera, PlayStation, Xbox, or PC), converting this input into digital data, and sending that data to your computer where you can record, edit, or livestream the imported content.
Capture cards are especially popular with gamers and content creators who want high-quality video for streaming or making videos, and they help make sure your footage stays smooth and clear without putting extra strain on your computer or game system.

In medical and avionics applications, they are typically used to capture video from sources such as connected cameras or endoscopes.

Do you really need a capture card on PC?

You do not always need a capture card on a PC, but it depends on what you want to do:

  • If your goal is to record or stream content that is already running on your PC (like PC games or your desktop), you can usually use software such as OBS Studio or built-in tools like the Xbox Game Bar in Windows. In this case, a capture card is unnecessary.

  • A capture card becomes necessary if you want to record or stream video from an external device—for example, a game console (like PlayStation, Xbox, or Nintendo Switch), a camera, or a second PC. The capture card acts as a bridge, bringing the video and audio from those external sources into your main PC for recording or live streaming.

  • Capture cards can also help in advanced setups (such as dual-PC streaming), where you use one computer for gaming and another for streaming, allowing for better performance and quality.

Summary:
If you only need to capture video directly from your own PC, you don’t need a capture card. If you want to capture or stream content from another device, a capture card is required for best results.

Why do streamers use a capture card?

Streamers often use a capture card for several important reasons:

        • Connecting External Devices: Capture cards let streamers capture video and audio from devices that can’t directly run streaming software—such as game consoles (PlayStation, Xbox, Nintendo Switch), cameras, or even other computers. This makes it possible to include high-quality gameplay footage or live camera feeds on their stream.

        • Improved Performance: When streaming games from a console or a second PC, a capture card offloads the video encoding from the main gaming device. This helps maintain smooth gameplay without slowing down the system or causing lag, as all the resource-heavy streaming work is handled on a separate computer.

        • Professional Video Quality: Capture cards support high-definition resolutions and frame rates. They preserve image clarity and synchronization, which is essential for delivering a polished and professional-looking stream.

        • Advanced Streaming Setups: For streamers who use multiple cameras, overlays, or dual-PC setups, capture cards allow flexible mixing of video sources and seamless switching between them during a live broadcast.

        • Medical Applications: In medical use cases, capture cards are often used to capture live video of operations. This is done to share information about procedures as well as for training purposes, e.g. a live display in the operating theater. Surgeons also often rely on this live display during the procedure itself, which is why latency performance is critical: the video must provide near-instant feedback so the surgeon can judge their hand movements.

In summary, streamers use capture cards to connect external devices, achieve higher stream quality, boost performance, and support advanced multi-device streaming setups. Medical use cases have very demanding requirements in terms of latency performance.

What is an encoder?

In the context of video applications, an encoder is a crucial hardware or software component that converts raw video signals into a compressed digital format suitable for storage, transmission, or streaming.

What Does a Video Encoder Do?

  • Signal Compression: Video encoders take uncompressed or analog video input and use compression algorithms (codecs like H.264, HEVC, MPEG-4) to reduce the file size while maintaining as much visual quality as possible. This step is essential for streaming and storing large amounts of video data efficiently.

  • Format Conversion: They can convert various input formats—such as HDMI, SDI, or composite video—into widely used compressed digital formats, making the video easy to play back on different devices or platforms.

  • Real-Time Processing: Many video encoders operate in real-time, enabling live streaming of video content. For example, during a live broadcast, an encoder compresses and prepares the video for direct delivery to streaming services like YouTube, Twitch, or Facebook Live.

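To make the compression step above concrete, here is a minimal sketch in Python (one of the languages we provide example code for) that drives FFmpeg, the open-source encoder discussed in our FFmpeg Whitepaper, to compress raw video into H.264. The file names, resolution and frame rate are placeholder assumptions for illustration only.

import subprocess

# Hypothetical input: raw (uncompressed) 1080p30 frames in YUV 4:2:0 format.
# FFmpeg needs the geometry spelled out because raw video carries no header.
cmd = [
    "ffmpeg",
    "-f", "rawvideo",          # treat the input as headerless raw frames
    "-pix_fmt", "yuv420p",     # pixel format of the raw data
    "-s", "1920x1080",         # frame size (assumption for this sketch)
    "-r", "30",                # frame rate (assumption for this sketch)
    "-i", "capture.yuv",       # placeholder input file
    "-c:v", "libx264",         # H.264 software encoder
    "-preset", "veryfast",     # trade some compression efficiency for speed
    "-crf", "23",              # constant-quality target
    "output.mp4",              # compressed, widely playable result
]
subprocess.run(cmd, check=True)

For scale: a raw 1080p30 YUV 4:2:0 stream is roughly 93 MB per second (1920 x 1080 x 1.5 bytes x 30 frames), while the H.264 output is typically only a few megabits per second, which is why this compression step is essential for storage and streaming.
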
Applications of Video Encoders

  • Live Streaming: Encoders make it possible to stream events (sports, concerts, gaming, meetings) to online audiences in real time by converting camera or device output into a streamable digital format.

  • Video Recording: When capturing video from cameras or capture cards, encoders compress footage before saving it to a hard drive or cloud storage, dramatically reducing storage requirements.

  • Surveillance: Security systems use video encoders to convert analog CCTV feeds into digital streams, enabling network video recording and remote monitoring.

  • Broadcast and Media Production: They’re used in professional studios to compress and transmit high-quality broadcasts efficiently, or to create on-demand video assets for TV and the internet.

Why Encoders Matter in Video

  • Bandwidth Efficiency: Compressed video streams use much less data, making them practical for online distribution or wireless transmission.

  • Device Compatibility: Encoded video is standardized, so it can be played on smartphones, computers, TVs, and other digital devices.

  • Quality Control: Advanced encoders balance compression and video quality to provide smooth playback even on limited internet connections while keeping latency low for live applications.

In summary:
A video encoder is essential for converting raw video into compressed digital formats needed for modern recording, transmission, and streaming. It enables efficient storage, live broadcasting, and smooth digital distribution of video content across diverse platforms.

For more information you can also read our FFmpeg Whitepaper.

What is EDID?

EDID stands for Extended Display Identification Data. It is a standardized metadata format embedded in display devices (such as monitors, TVs, or projectors) that communicates the display’s capabilities to a video source device (like a graphics card, computer, DVD player, or set-top box).

This communication enables the source to automatically select the best compatible video output settings, such as resolution, refresh rate, color characteristics, and audio capabilities, ensuring optimal picture and sound quality without the need for manual configuration by the user.

The EDID data typically includes information about the manufacturer, product type, serial number, supported display timings and resolutions, display size, color characteristics, luminance, and pixel mapping (for digital displays). This information is stored in the display’s firmware, usually in non-volatile memory like EEPROM.
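
As a rough illustration of the structure described above, the short Python sketch below decodes a few of the fixed fields of a 128-byte EDID base block (manufacturer ID, product code, serial number, manufacture date and version). The byte offsets follow the public VESA EDID specification; the sysfs path used to read the raw data is a Linux convention, and the connector name in it is only an example.

from pathlib import Path

# Example source on Linux: the kernel exposes raw EDID per connector.
# The connector name ("card0-HDMI-A-1") is an assumption for this sketch.
edid = Path("/sys/class/drm/card0-HDMI-A-1/edid").read_bytes()

assert edid[:8] == bytes.fromhex("00ffffffffffff00"), "not a valid EDID header"
assert sum(edid[:128]) % 256 == 0, "bad EDID base-block checksum"

# Manufacturer ID: three 5-bit letters packed into bytes 8-9 (big-endian).
mfg = (edid[8] << 8) | edid[9]
vendor = "".join(chr(((mfg >> shift) & 0x1F) + ord("A") - 1) for shift in (10, 5, 0))

product = int.from_bytes(edid[10:12], "little")   # product code
serial = int.from_bytes(edid[12:16], "little")    # serial number (may be 0)
week, year = edid[16], 1990 + edid[17]            # manufacture date
version = f"{edid[18]}.{edid[19]}"                # EDID version.revision

print(f"{vendor} product 0x{product:04X}, serial {serial}, "
      f"week {week} of {year}, EDID {version}")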

EDID is transmitted through a communication channel known as the Display Data Channel (DDC), which works over standard video interfaces such as VGA, HDMI, DVI, and DisplayPort. The source device reads the EDID data during connection or startup in a process often called an “EDID handshake,” allowing it to configure its output correctly for the connected display.

The system originated from standards published by the Video Electronics Standards Association (VESA) and has become essential for plug-and-play functionality in modern AV and computing environments. When EDID is missing or faulty, users may experience resolution mismatches or display issues.

In summary, EDID is the “identity card” of a display that helps video sources identify and adapt to the display’s optimal capabilities automatically, enhancing user experience and compatibility across devices.

Why do some monitors take time to display incoming video?

Monitors can sometimes take a long time to recognize an input for several reasons:

  • EDID Communication: When you connect a monitor to a computer or another device, the two perform a process called a “handshake.” This involves the monitor sending its Extended Display Identification Data (EDID)—information about resolutions, refresh rates, and other capabilities—to the source device. If this handshake or data exchange is slow or fails temporarily, it can delay the display from showing an image. This is especially common when switching between inputs or connecting new devices.

  • Signal Detection and Processing: After receiving an input signal (like HDMI or DisplayPort), many monitors spend a few seconds detecting, processing, and locking onto the signal. This includes figuring out the resolution, refresh rate, and color format. Monitors with more advanced processing or additional features (like built-in upscaling or HDR) may take longer to complete this process.

  • Hot Plug Detection: Each time an input is plugged in or switched, monitors use “hot plug detection” to trigger the EDID exchange again, which can add to the delay before the image appears.

  • Cable and Connection Quality: Poor or incompatible cables, or loose connections, can slow down or complicate the detection and handshake process, sometimes causing extra delay or requiring several attempts before the monitor recognizes the input signal.

  • Monitor Hardware Issues: In rare cases, aging or failing monitor components (like capacitors or power supply parts) can add significant delays to the start-up or signal recognition process.

Summary: The most common cause is the EDID handshake and signal negotiation whenever you connect or switch sources. Some higher-end devices or well-matched source-display pairs complete this process faster, while others may take a few seconds or longer based on their firmware and hardware design.

What is ecurl?

When you want to start developing an application for our capture cards or cameras, you can use our common RESTful API and Command Line Interface (CLI) called ecurl. The CLI is very useful because you can start developing your application straight away (before doing any coding) by using the simple ecurl commands on the PC’s command line (Windows or Linux).

For example, to start the card recording you can use the ecurl command below:

ecurl rec lt310:/0/sdi-in/0 -d extra.hw=nvenc    // will start recording; you can specify hardware acceleration, here we use NVIDIA
                                                 // using -d extra.hw=amf will use onboard AMD GPU hardware acceleration
                                                 // using -d extra.hw=qsv will use onboard Intel GPU hardware acceleration
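
Because ecurl is a standard command-line tool, the same call is easy to script while you prototype. The short Python sketch below simply shells out to the command shown above; the start_recording helper and its argument checking are an illustrative convenience only, not part of our API, and the accepted values are the three documented extra.hw options.

import subprocess

def start_recording(hw: str = "nvenc") -> None:
    """Start recording on the LT310 SDI input using the documented ecurl call.

    hw selects the hardware encoder: "nvenc" (NVIDIA), "amf" (AMD) or "qsv" (Intel),
    exactly as in the comments above.
    """
    if hw not in ("nvenc", "amf", "qsv"):
        raise ValueError(f"unsupported hardware acceleration: {hw}")
    subprocess.run(
        ["ecurl", "rec", "lt310:/0/sdi-in/0", "-d", f"extra.hw={hw}"],
        check=True,
    )

start_recording("nvenc")   # same effect as typing the command above in a shell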

For more information, you can download our latest API documents for the LT-300 or Camera products.

Which GPUs do you support?

For the LT-300 family and our Camera products, we currently support NVIDIA, AMD and Intel GPUs. You can select the hardware encoder in the same way using our API, or directly on the command line with the following ecurl command:

ecurl rec lt310:/canvas/0 -d extra.hw=nvenc
// Records video from the LT310 capture device using the NVIDIA (NVENC) hardware encoder.
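
If your application needs to choose between the supported GPU vendors, the mapping to the encoder value is a simple lookup. The short Python sketch below builds the same canvas-recording command shown above for each vendor; it uses only the documented extra.hw values (nvenc, amf, qsv) and is an illustrative helper only, not part of our SDK.

# Map GPU vendor to the documented extra.hw encoder value.
HW_BY_VENDOR = {
    "nvidia": "nvenc",   # NVIDIA NVENC
    "amd": "amf",        # AMD AMF
    "intel": "qsv",      # Intel Quick Sync Video
}

def canvas_record_command(vendor: str) -> list[str]:
    """Return the ecurl command from the example above for the chosen vendor."""
    hw = HW_BY_VENDOR[vendor.lower()]
    return ["ecurl", "rec", "lt310:/canvas/0", "-d", f"extra.hw={hw}"]

print(" ".join(canvas_record_command("nvidia")))
# ecurl rec lt310:/canvas/0 -d extra.hw=nvenc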

Where can I get the drivers and SDK?

You can download these directly from our product pages for capture cards or cameras.

What versions of OS do you support?

We currently support Windows 11, 10, 8, 7 and XP, as well as Ubuntu Linux 22.04 LTS and later. We have customers using equivalent Debian versions without problems. If you are interested in support for Mac, please let us know here or just send us an email to info@enciris.com.

 

What programming languages are supported?

Although nearly all programming languages can be used with our RESTful API, we provide example code and helpers for Go, C++, C# and Python. Other languages can be used, but they need built-in or library support for handling JSON.
