Make a video recording of what is shown on your computer's display (screen output).

This can be done for free, without having to pay for software. And by free, I mean true freeware, even open-source software, not trialware or freemium products (such as Camtasia).

Also, VLC and MPlayer (MEncoder) can be used for live screen-casting/streaming of raw video (raster/pixel) data. They likely share much of the same open-source codec codebase that the FFmpeg project provides.

ffmpeg

Windows

When using Microsoft's Windows OS (NT-based), as opposed to a Unix or Unix-like OS (e.g. the BSDs or GNU/Linux) or Apple's Mac OS X:

obtain / install

Download a free Windows build (a compiled version that can run under Windows) from Zeranoe, in the form of a 7-Zip archive:

- ffmpeg for Windows daily/nightly builds

ffmpeg for Windows can access DirectShow multimedia (audio and video) devices.

Run:

ffmpeg -list_devices true -f dshow -i dummy

to query and list the devices that are available to ffmpeg.exe as inputs and outputs.

gdigrab

One possible input method (which does not use dshow) is "gdigrab", which makes use of GDI, the Microsoft Windows Graphics Device Interface.


ffmpeg.exe -f gdigrab -framerate 15 -video_size 1100x800 -i desktop -vcodec libx264 -pix_fmt yuv420p -preset ultrafast new-output-video-file.mkv

N.B. "-video_size" did not seem to work for me; it captured the entire desktop anyway. (Input options such as -video_size and -framerate must appear before "-i desktop"; placed after the input, they are ignored or misinterpreted.)

Read about the additional options ffmpeg's gdigrab input provides:

  • offset
    • "-offset_x"
    • "-offset_y"
  • window "title" (the text in the title bar of a window running on your desktop, selected via -i title="...")
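A hedged sketch combining those options (the window title below is hypothetical, and note that gdigrab input options must precede -i):

```shell
# Capture a 640x480 region whose top-left corner sits 100 px from the left
# edge and 200 px down. The gdigrab input options (-framerate, -offset_x,
# -offset_y, -video_size) must come BEFORE "-i desktop":
ffmpeg.exe -f gdigrab -framerate 15 -offset_x 100 -offset_y 200 -video_size 640x480 -i desktop region-capture.mkv

# Capture a single window by its title-bar text (hypothetical title):
ffmpeg.exe -f gdigrab -framerate 15 -i title="Untitled - Notepad" window-capture.mkv
```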

include audio

It is possible to record an audio input stream (or streams) simultaneously, to accompany the video captured from the display:

ffmpeg.exe -f gdigrab -framerate 15 -i desktop -f dshow -i audio="Headset Microphone (Logitech US" -filter_complex amix=inputs=1 -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -vsync vfr -acodec pcm_s16le new-output-video-file.mkv

But, in my experience, this introduced a lag on playback: the audio and video streams were out of sync, with the video lagging behind the audio. (See further below in this article for the colon ":" syntax used with "screen-capture-recorder" rather than gdigrab.)

I also managed to combine audio from two sources; in that case make sure the value assigned to "amix=inputs=" is '2', not '1'. Recording from two simultaneous audio inputs is possible with: "-filter_complex amix=inputs=2"

The primary thing to add to your ffmpeg command line is another DirectShow input:

-f dshow -i audio="(( input audio device name ))"

example command-line:

c:\Downloads\ffmpeg.zeranoe.com__builds\bin\ffmpeg -f gdigrab -framerate 15 -i desktop -f dshow -i audio="Headset Microphone (Logitech US" -f dshow -i audio="virtual-audio-capturer" -filter_complex amix=inputs=2 -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -vsync vfr -acodec pcm_s16le muxed-video-file.mkv

screen-capture-recorder

In order to record the desktop through DirectShow, the following free, open-source software needs to be installed, to make desktop recording available as one of the possible input devices: Screen Capturer Recorder.


Obtain the installer (".exe") file from:

Installing that package provides two additional (virtual, if you will) input devices to DirectShow, for use by ffmpeg.exe:

"screen-capture-recorder" (for visual/raster/pixel grab)


"virtual-audio-capturer", a loop-back audio device that takes the output of the sound card (sound device) and allows it to be recorded as an input stream source.

example of use

Here is an example command line that makes use of the new DirectShow input devices:


ffmpeg.exe -f dshow -framerate 10 -i video="screen-capture-recorder" -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -vsync vfr output-video.mkv

Another example -- record simultaneous audio, as well:

C:\Users\Public\Downloads\ffmpeg.zeranoe.com__builds\bin\ffmpeg.exe -f dshow -framerate 10 -i video="screen-capture-recorder" -f dshow -i audio="Headset Microphone (Logitech US" -filter_complex amix=inputs=1 -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -vsync vfr -acodec pcm_s16le output-video-filename_produced-by-running-this-command-line.mkv

Break that command line down, to better understand it (and how to tweak it for your individual case):

ffmpeg (binary executable)

  • next specifies the INPUT
    • video (visual , raster, pixels)
      • -f dshow
      • -i video="screen-capture-recorder"
      • additional options (modifiers), such as specifying the frame rate explicitly by adding -framerate 10, meaning 10 frames per second. (Do NOT use the "-r" switch for this!)
    • audio
      • -f dshow (the audio component of the command line also starts with "-f dshow")
      • -i audio="Headset Microphone (Logitech US" (the device name really does end without a closing parenthesis inside the double quotes; use the name exactly as ffmpeg lists it)
      • an additional option can go here, such as: -audio_device_number 0
      • -filter_complex amix=inputs=1 (set that last integer to '2' if you specify two simultaneous audio input devices)
  • next, the OUTPUT. (Keep in mind that certain options specified at the input stage can be independently controlled for the output; e.g. one region of the source display can be chosen as input to ffmpeg, while a different region of that input stream can be chosen for encoding into the output file.)
    • video
      • -vcodec libx264 (in this case, using the open-source x264 H.264 encoding library that is linked into the ffmpeg binary)
      • -pix_fmt yuv420p (reportedly, this ensures maximum playback compatibility of the resulting file on most systems)
      • (additional options for the encoding) -preset ultrafast
      • -vsync vfr
    • audio
      • -acodec pcm_s16le
  • The last argument on the command line is where, within the filesystem hierarchy, ffmpeg should store the output it generates, usually a file on a mounted filesystem volume. At minimum, specify a filename (any text string the filesystem and command interpreter will accept without mangling or expanding it). The filename suffix/extension does matter to ffmpeg: it cannot be arbitrary, because ffmpeg uses it to determine which container file format will house the output video and audio streams.
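To illustrate that last point, a sketch: the same capture settings can be muxed into different containers simply by changing the output filename's extension, and the -f option can force a container regardless of the name (output filenames here are arbitrary examples):

```shell
# Extension selects the container: Matroska here...
ffmpeg.exe -f gdigrab -framerate 10 -i desktop -vcodec libx264 -pix_fmt yuv420p out.mkv

# ...MP4 here (note that MP4 rejects pcm_s16le audio, while MKV accepts it):
ffmpeg.exe -f gdigrab -framerate 10 -i desktop -vcodec libx264 -pix_fmt yuv420p out.mp4

# -f overrides the extension: this file is Matroska despite its name:
ffmpeg.exe -f gdigrab -framerate 10 -i desktop -vcodec libx264 -pix_fmt yuv420p -f matroska out.dat
```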

better synchronization

ffmpeg.exe -f dshow -framerate 30 -i video="screen-capture-recorder":audio="Headset Microphone (Logitech US" -f dshow -i audio="virtual-audio-capturer" -filter_complex amix=inputs=2 -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -acodec pcm_s16le "output result file.mkv"

Notice the colon (":") syntax, as opposed to a separate "-f dshow -i" for each input device (accessed through Windows's DirectShow layer).

-f dshow -i video="screen-capture-recorder" -f dshow -i audio="Headset Microphone (Logitech US"

may not give as good video+audio sync as the colon form shown above:

-i video="screen-capture-recorder":audio="Headset Microphone (Logitech US"

Either way (whichever of those two variants you use), an additional simultaneous audio input device cannot be daisy-chained with the same ":" syntax.

Instead, it must be specified separately on the command line:

-f dshow -i audio="virtual-audio-capturer"

Also take careful note of the presence of the integer value of '2' for:

-filter_complex amix=inputs=2

This is because ffmpeg will be combining audio from two different input streams.
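The amix filter also takes options beyond inputs; a sketch (option names taken from ffmpeg's amix filter documentation) that ends the mix when the first audio input ends, rather than the longest:

```shell
# duration=first: output audio lasts as long as the first audio input;
# dropout_transition: seconds over which volume renormalises when an input ends.
ffmpeg.exe -f dshow -framerate 30 -i video="screen-capture-recorder":audio="Headset Microphone (Logitech US" -f dshow -i audio="virtual-audio-capturer" -filter_complex "amix=inputs=2:duration=first:dropout_transition=3" -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -acodec pcm_s16le mixed-output.mkv
```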

If the following software is not installed, "screen-capture-recorder" and "virtual-audio-capturer" will not be available as input devices for ffmpeg (through the DirectShow layer)...

software solution download

"c:\Program Files\Screen Capturer Recorder\configuration_setup_utility\vendor\ffmpeg\bin\ffmpeg.exe" -f dshow -i audio="VIA HD Audio Input":video="screen-capture-recorder" -s 352x288 -r 20 -t 20 mobile-resolution-screen-capture.mp4

- source: this posting, on this thread.

The software program that provides those devices is called Screen Capturer Recorder.
Free binaries (installer packages):


The offset syntax that I used successfully on an X11 GNU/Linux-based system did not work here, nor did the "-s" switch for specifying the width and height of the capture region.

Linux

Or indeed any Unix-like operating system (*nix), including GNU/Linux distros and BSD-based systems (possibly including OS X), running a graphical environment based upon the X server (X11):

ffmpeg, in compiled form, is available in all of the major distros' software repositories.


apt-get install ffmpeg

N.B.: Debian and derivative distros such as Ubuntu actually ship a fork of the ffmpeg codebase called Libav (wikipedia: Libav). The command lines (commands, switches, and syntax), examples, and such in this article should work the same as with official ffmpeg itself.

Another note: ffmpeg is an open-source project, often compiled into library form; a front-end application (binary executable) such as the Windows build "ffmpeg.exe" is needed to make use of its libraries' functionality. With ffmpeg installed, simply type "ffmpeg" in a terminal (emulator / command-line interpreter). If the Libav fork is installed instead of official ffmpeg, the "ffmpeg" command may be linked to a binary executable called "avconv".

record

ffmpeg -f alsa -ac 1 -i pulse -f x11grab -r 10 -s 1024x720 -i :0.0 -acodec pcm_s16le -vcodec libx264 -preset ultrafast -crf 0 -threads 0 output-file1.mkv

-ac audio channels ('1' for mono, '2' for stereo)

-r frame rate of the video. 30 is standard for US/Japanese TV, 25 for European. I recommend 10 or 15 for this purpose.

If "pulse" audio (layer) is not available, try:

-i hw:0
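To see which ALSA capture devices exist (and hence what number to put after hw:), the alsa-utils tool arecord can list them; a sketch:

```shell
# List ALSA capture devices; "card 0 ... device 0" in the listing
# corresponds to the ffmpeg input -i hw:0,0
arecord -l
```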
TIP: Disable your screen saver; it will interfere with the captured video. The capture is not semantic: it simply records the pixels (raster) shown on screen.

Another tip: first use the command "sleep" (on *nix / GNU/Linux / Unix-like OSes), followed by an integer argument specifying how many seconds to wait before executing the next command, which in this case is ffmpeg.


sleep 5 && ffmpeg -f alsa -ac 1 -i pulse -f x11grab -r 10 -s 1024x720 -i :0.0 -acodec pcm_s16le -vcodec libx264 -preset ultrafast -crf 0 -threads 0 output-file1.mkv

Once that compound command line is invoked (by pressing Enter), the shell will wait 5 seconds and then launch ffmpeg, which will begin recording (making the video file).

offset

If the area that you want captured does not begin at the display's top-left corner, add offset co-ordinates to the display specifier.

The base form is:

-i :0.0

Append +(x),(y) to it:

-i :0.0+62,168

ffmpeg -f alsa -ac 1 -i 'hw:1,0' -f x11grab -r 10 -s 1084x704 -i :0.0+62,168 -acodec pcm_s16le -vcodec libx264 -preset ultrafast -crf 0 -threads 0 section1.mkv

This will capture a region whose top-left corner is positioned 62 pixels from the left edge of the display and 168 pixels down from the top.

  • more examples / source: [1]


To choose a region of the X server's (X11) output, one particular window or box, say, get its co-ordinates (pixel boundaries) using a tool called xwininfo.

xwininfo ("X Window Info") should ship with any X11/Xorg installation, which comes with most GNU/Linux distros and often other Unix-like OSes.

example usage
$ xwininfo

xwininfo: Please select the window about which you
          would like information by clicking the
          mouse in that window.

xwininfo: Window id: 0x4600898 "Title of webpage that is in web browser's window"

  Absolute upper-left X:  38
  Absolute upper-left Y:  43
  Relative upper-left X:  0
  Relative upper-left Y:  0
  Width: 1002
  Height: 709
  Depth: 24
  Visual: 0x21
  Visual Class: TrueColor
  Border width: 0
  Class: InputOutput
  Colormap: 0x20 (installed)
  Bit Gravity State: NorthWestGravity
  Window Gravity State: NorthWestGravity
  Backing Store State: NotUseful
  Save Under State: no
  Map State: IsViewable
  Override Redirect State: no
  Corners:  +38+43  -240+43  -240-272  +38-272
  -geometry 1002x709+35+20
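The two values ffmpeg needs (the -s WxH size and the +x,y offset) can be extracted from that output with a little shell; a minimal sketch, assuming the output format shown above (fed here from a saved sample; normally you would pipe xwininfo directly):

```shell
# Sample of xwininfo's output (in practice: info=$(xwininfo) after clicking a window)
info='  Absolute upper-left X:  38
  Absolute upper-left Y:  43
  Width: 1002
  Height: 709'

x=$(echo "$info" | awk '/Absolute upper-left X/ {print $4}')
y=$(echo "$info" | awk '/Absolute upper-left Y/ {print $4}')
w=$(echo "$info" | awk '/^ *Width/  {print $2}')
h=$(echo "$info" | awk '/^ *Height/ {print $2}')

# Build the x11grab arguments for this window:
echo "-s ${w}x${h} -i :0.0+${x},${y}"
# → -s 1002x709 -i :0.0+38,43
```

Note that libx264 with yuv420p requires even frame dimensions, so an odd width or height (like the 709 here) may need rounding down by one pixel.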

recordMyDesktop

Another piece of application software (for GNU/Linux only) is recordMyDesktop, which records to Theora-format video. Theora can be thought of as the rough open-source equivalent of MPEG-4 Part 2 from the early 2000s; it is VP8 (or now VP9) that competes with H.264 and perhaps H.265. The recordMyDesktop codebase has not been updated since 2008, but it works!

Use it thusly:

recordmydesktop -width 1024 -height 720 -o $filename.720.1Mbps.ogv -delay $time -freq 44100 -channels 1 -fps 15 -s_quality 10 -v_bitrate 1000000

That -v_bitrate (1,000,000 bps, i.e. 1 Mbps) is high.

related

How to convert media files using FFmpeg

How to find basic codec and compression info of a media file in Linux
