avconv [global options] [[infile options][‘-i’ infile]]... {[outfile options] outfile}...
avconv reads from an arbitrary number of input "files", which are specified
by the -i option, and writes to an arbitrary number of output "files", which are
specified by a plain output filename. Anything found on the command line which
cannot be interpreted as an option is considered to be an output filename.
Each input or output file can in principle contain any number of streams of
different types (video/audio/subtitle/attachment/data). Allowed number and/or
types of streams can be limited by the container format. Selecting which
streams from which inputs go into which outputs is done either automatically or with
the -map option (see the Stream selection chapter).
To refer to input files in options, you must use their indices (0-based). E.g.
the first input file is 0, the second is 1, etc. Similarly, streams
within a file are referred to by their indices. E.g. 2:3 refers to the
fourth stream in the third input file. See also the Stream specifiers chapter.
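As a sketch with three hypothetical inputs (all filenames here are illustrative, not from the text), the 2:3 specifier could be used like this:

```shell
# -map 2:3 selects the fourth stream (index 3) of the third input file (index 2).
# Filenames are illustrative.
avconv -i first.mkv -i second.mkv -i third.mkv -map 2:3 out.mkv
```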
As a general rule, options are applied to the next specified
file. Therefore, order is important, and you can have the same
option on the command line multiple times. Each occurrence is
then applied to the next input or output file.
Exceptions from this rule are the global options (e.g. verbosity level),
which should be specified first.
Do not mix input and output files – first specify all input files, then all
output files. Also do not mix options which belong to different files. All
options apply ONLY to the next input or output file and are reset between files.
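A sketch of this ordering rule (filenames and bitrates are illustrative): the same option may appear once per output file, each occurrence applying only to the file that follows it:

```shell
# The first -b:v applies to high.mp4 only, the second to low.mp4 only.
avconv -i input.avi -b:v 1000k high.mp4 -b:v 500k low.mp4
```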
●
To set the video bitrate of the output file to 64kbit/s:
avconv -i input.avi -b 64k output.avi
To force the frame rate of the output file to 24 fps:
avconv -i input.avi -r 24 output.avi
To force the frame rate of the input file (valid for raw formats only) to 1 fps
and the frame rate of the output file to 24 fps:
avconv -r 1 -i input.m2v -r 24 output.avi
The transcoding process in avconv for each output can be described by
the following diagram:
 _______              ______________
|       |            |              |
| input |  demuxer   | encoded data |   decoder
| file  | ---------> | packets      | -----+
|_______|            |______________|      |
                                           v
                                       _________
                                      |         |
                                      | decoded |
                                      | frames  |
                                      |_________|
 ________             ______________       |
|        |           |              |      |
| output | <-------- | encoded data | <----+
| file   |   muxer   | packets      |   encoder
|________|           |______________|
avconv calls the libavformat library (containing demuxers) to read
input files and get packets containing encoded data from them. When there are
multiple input files, avconv tries to keep them synchronized by
tracking the lowest timestamp on any active input stream.
Encoded packets are then passed to the decoder (unless streamcopy is selected
for the stream, see further for a description). The decoder produces
uncompressed frames (raw video/PCM audio/...) which can be processed further by
filtering (see next section). After filtering the frames are passed to the
encoder, which encodes them and outputs encoded packets again. Finally those are
passed to the muxer, which writes the encoded packets to the output file.
avconv can process raw audio and video frames using
filters from the libavfilter library. Several chained filters form a filter
graph. avconv distinguishes between two types of filtergraphs -
simple and complex.
 _________                        ______________
|         |                      |              |
| decoded |                      | encoded data |
| frames  |\                    /| packets      |
|_________| \                  / |______________|
             \   __________   /
  simple      \ |          | /   encoder
  filtergraph  \| filtered |/
                | frames   |
                |__________|
A simple filtergraph for video can look for example like this:
 _______        _____________        _______        ________
|       |      |             |      |       |      |        |
| input | ---> | deinterlace | ---> | scale | ---> | output |
|_______|      |_____________|      |_______|      |________|
Note that some filters change frame properties but not frame contents. E.g. the
fps filter changes the number of frames, but does not
touch the frame contents. Another example is the setpts filter, which
only sets timestamps and otherwise passes the frames through unchanged.
_________
| |
| input 0 |\ __________
|_________| \ | |
\ _________ /| output 0 |
\ | | / |__________|
_________ \| complex | /
| | | |/
| input 1 |---->| filter |\
|_________| | | \ __________
/| graph | \ | |
/ | | \| output 1 |
_________ / |_________| |__________|
| | /
| input 2 |/
|_________|
A trivial example of a complex filtergraph is the overlay filter, which
has two video inputs and one video output, containing one video overlaid on top
of the other. Its audio counterpart is the amix filter.
Stream copy is a mode selected by supplying the copy parameter to the
‘-codec’ option. It makes avconv omit the decoding and encoding
step for the specified stream, so it does only demuxing and muxing. It is useful
for changing the container format or modifying container-level metadata. The
diagram above will in this case simplify to this:
 _______              ______________            ________
|       |            |              |          |        |
| input |  demuxer   | encoded data |  muxer   | output |
| file  | ---------> | packets      | -------> | file   |
|_______|            |______________|          |________|
You can disable some of those defaults by using the -vn/-an/-sn options. For
full manual control, use the -map option, which disables the defaults just
described.
Per-stream options are applied with stream specifiers: a stream specifier is a
string appended to the option name and separated from it by a colon. E.g. the
-codec:a:1 ac3 option contains the
a:1 stream specifier, which matches the second audio stream. Therefore it
would select the ac3 codec for the second audio stream.
A stream specifier can match several streams; the option is then applied to all
of them. E.g. the stream specifier in -b:a 128k matches all audio
streams.
An empty stream specifier matches all streams; for example, -codec copy or -codec: copy would copy all the streams without reencoding.
Possible forms of stream specifiers are:
‘stream_index’
Matches the stream with this index. E.g. -threads:1 4 would set the
thread count for the second stream to 4.
‘stream_type[:stream_index]’
stream_type is one of: ’v’ for video, ’a’ for audio, ’s’ for subtitle,
’d’ for data and ’t’ for attachments. If stream_index is given, then
matches stream number stream_index of this type. Otherwise matches all
streams of this type.
‘p:program_id[:stream_index]’
If stream_index is given, then matches stream number stream_index in
program with id program_id. Otherwise matches all streams in this program.
‘i:stream_id’
Match the stream by stream id (e.g. PID in MPEG-TS container).
‘m:key[:value]’
Matches streams with the metadata tag key having the specified value. If
value is not given, matches streams that contain the given tag with any
value.
‘u’
Matches streams with a usable configuration: the codec must be defined and
essential information such as the video dimensions or audio sample rate must be present.
Note that in avconv, matching by metadata will only work properly for
input files.
Coloring the log output can be disabled by setting the environment variable
AV_LOG_FORCE_NOCOLOR or NO_COLOR, or can be forced by setting
the environment variable AV_LOG_FORCE_COLOR.
The use of the environment variable NO_COLOR is deprecated and
will be dropped in a following Libav version.
‘-cpuflags mask (global)’
Set a mask that’s applied to autodetected CPU flags. This option is intended
for testing. Do not use it unless you know what you’re doing.
For example to write an ID3v2.3 header instead of a default ID3v2.4 to
an MP3 file, use the id3v2_version private option of the MP3 muxer:
avconv -i input.flac -id3v2_version 3 out.mp3
Use the special value copy (output only) to indicate that
the stream is not to be reencoded.
For example:
avconv -i INPUT -map 0 -c:v libx264 -c:a copy OUTPUT
encodes all video streams with libx264 and copies all audio streams.
For each stream, the last matching c option is applied, so
avconv -i INPUT -map 0 -c copy -c:v:1 libx264 -c:a:137 libvorbis OUTPUT
will copy all the streams except the second video, which will be encoded with
libx264, and the 138th audio, which will be encoded with libvorbis.
‘-t duration (output)’
Stop writing the output after its duration reaches duration.
duration may be a number in seconds, or in hh:mm:ss[.xxx] form.
‘-fs limit_size (output)’
Set the file size limit.
‘-ss position (input/output)’
When used as an input option (before -i), seeks in this input file to
position. Note that in most formats it is not possible to seek exactly, so
avconv will seek to the closest seek point before position.
When transcoding and ‘-accurate_seek’ is enabled (the default), this
extra segment between the seek point and position will be decoded and
discarded. When doing stream copy or when ‘-noaccurate_seek’ is used, it
will be preserved.
When used as an output option (before an output filename), decodes but discards
input until the timestamps reach position.
position may be either in seconds or in hh:mm:ss[.xxx] form.
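For reference, a position such as 00:01:30.5 denotes 90.5 seconds. A minimal conversion sketch (the to_seconds helper is ours, not part of avconv):

```shell
# Convert an hh:mm:ss[.xxx] (or plain seconds) position to seconds.
# Sketch only; assumes a well-formed, colon-separated value.
to_seconds() {
  echo "$1" | awk -F: '{ s = 0; for (i = 1; i <= NF; i++) s = s * 60 + $i; print s }'
}
to_seconds 00:01:30.5   # 90.5
```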
‘-itsoffset offset (input)’
Set the input time offset in seconds.
[-]hh:mm:ss[.xxx] syntax is also supported.
The offset is added to the timestamps of the input files.
Specifying a positive offset means that the corresponding
streams are delayed by offset seconds.
‘-metadata[:metadata_specifier] key=value (output,per-metadata)’
Set a metadata key/value pair.
An optional metadata_specifier may be given to set metadata
on streams or chapters. See -map_metadata documentation for
details.
This option overrides metadata set with -map_metadata. It is
also possible to delete metadata by using an empty value.
For example, for setting the title in the output file:
avconv -i in.avi -metadata title="my title" out.flv
For example, to set the language of the first audio stream:
avconv -i INPUT -metadata:s:a:0 language=eng OUTPUT
Specify the target file type (vcd, svcd, dvd, dv,
dv50). type may be prefixed with pal-, ntsc- or film- to
use the corresponding standard. All the format options
(bitrate, codecs, buffer sizes) are then set automatically. You can just type:
avconv -i myfile.avi -target vcd /tmp/vcd.mpg
Nevertheless you can specify additional options as long as you know they
do not conflict with the standard, as in:
avconv -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
‘-dframes number (output)’
Set the number of data frames to output. This is an obsolete alias for
-frames:d, which you should use instead.
‘-frames[:stream_specifier] framecount (output,per-stream)’
Stop writing to the stream after framecount frames.
‘-q[:stream_specifier] q (output,per-stream)’
‘-qscale[:stream_specifier] q (output,per-stream)’
Use fixed quality scale (VBR). The meaning of q is
codec-dependent.
‘-b[:stream_specifier] bitrate (output,per-stream)’
Set the stream bitrate in bits per second. When transcoding, this tells the
encoder to use the specified bitrate for the encoded stream.
For streamcopy, this provides a hint to the muxer about the bitrate of the input
stream.
‘-filter[:stream_specifier] filter_graph (output,per-stream)’
filter_graph is a description of the filter graph to apply to
the stream. Use -filters to show all the available filters
(including also sources and sinks).
See also the ‘-filter_complex’ option if you want to create filter graphs
with multiple inputs and/or outputs.
‘-filter_script[:stream_specifier] filename (output,per-stream)’
This option is similar to ‘-filter’, the only difference is that its
argument is the name of the file from which a filtergraph description is to be
read.
‘-pre[:stream_specifier] preset_name (output,per-stream)’
Specify the preset for matching stream(s).
‘-stats (global)’
Print encoding progress/statistics. On by default.
‘-attach filename (output)’
Add an attachment to the output file. This is supported by a few formats
like Matroska for e.g. fonts used in rendering subtitles. Attachments
are implemented as a specific type of stream, so this option will add
a new stream to the file. It is then possible to use per-stream options
on this stream in the usual way. Attachment streams created with this
option will be created after all the other streams (i.e. those created
with -map or automatic mappings).
Note that for Matroska you also have to set the mimetype metadata tag:
avconv -i INPUT -attach DejaVuSans.ttf -metadata:s:2 mimetype=application/x-truetype-font out.mkv
If filename is empty, then the value of the filename metadata tag
will be used.
E.g. to extract the first attachment to a file named ’out.ttf’:
avconv -dump_attachment:t:0 out.ttf INPUT
To extract all attachments to files determined by the filename tag:
avconv -dump_attachment:t "" INPUT
‘-vframes number (output)’
Set the number of video frames to output. This is an obsolete alias for
-frames:v, which you should use instead.
‘-r[:stream_specifier] fps (input/output,per-stream)’
Set frame rate (Hz value, fraction or abbreviation).
As an input option, ignore any timestamps stored in the file and instead
generate timestamps assuming constant frame rate fps.
As an output option, duplicate or drop input frames to achieve constant output
frame rate fps (note that this actually causes the fps filter to be
inserted at the end of the corresponding filtergraph).
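Since the output -r works by appending the fps filter, these two invocations should be roughly equivalent (a sketch; filenames are illustrative):

```shell
# Both produce constant 24 fps output; the first inserts the fps filter implicitly.
avconv -i input.mkv -r 24 out.mkv
avconv -i input.mkv -filter:v fps=24 out.mkv
```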
‘-s[:stream_specifier] size (input/output,per-stream)’
Set frame size.
As an input option, this is a shortcut for the ‘video_size’ private
option, recognized by some demuxers for which the frame size is either not
stored in the file or is configurable – e.g. raw video or video grabbers.
As an output option, this inserts the scale video filter at the
end of the corresponding filtergraph. Please use the scale filter
directly to insert it at the beginning or some other place.
The format is ‘wxh’ (default - same as source). The following
abbreviations are recognized:
‘sqcif’
128x96
‘qcif’
176x144
‘cif’
352x288
‘4cif’
704x576
‘16cif’
1408x1152
‘qqvga’
160x120
‘qvga’
320x240
‘vga’
640x480
‘svga’
800x600
‘xga’
1024x768
‘uxga’
1600x1200
‘qxga’
2048x1536
‘sxga’
1280x1024
‘qsxga’
2560x2048
‘hsxga’
5120x4096
‘wvga’
852x480
‘wxga’
1366x768
‘wsxga’
1600x1024
‘wuxga’
1920x1200
‘woxga’
2560x1600
‘wqsxga’
3200x2048
‘wquxga’
3840x2400
‘whsxga’
6400x4096
‘whuxga’
7680x4800
‘cga’
320x200
‘ega’
640x350
‘hd480’
852x480
‘hd720’
1280x720
‘hd1080’
1920x1080
‘2kdci’
2048x1080
‘4kdci’
4096x2160
‘uhd2160’
3840x2160
‘uhd4320’
7680x4320
‘-aspect[:stream_specifier] aspect (output,per-stream)’
Set the video display aspect ratio specified by aspect.
aspect can be a floating point number string, or a string of the
form num:den, where num and den are the
numerator and denominator of the aspect ratio. For example "4:3",
"16:9", "1.3333", and "1.7777" are valid argument values.
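The num:den and floating point forms denote the same ratio; e.g. 16:9 is approximately 1.7778 when rounded to four decimals:

```shell
# Compute the floating point form of the 16:9 aspect ratio.
awk 'BEGIN { printf "%.4f\n", 16 / 9 }'   # 1.7778
```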
‘-vn (output)’
Disable video recording.
‘-vcodec codec (output)’
Set the video codec. This is an alias for -codec:v.
‘-pass[:stream_specifier] n (output,per-stream)’
Select the pass number (1 or 2). It is used to do two-pass
video encoding. The statistics of the video are recorded in the first
pass into a log file (see also the option -passlogfile),
and in the second pass that log file is used to generate the video
at the exact requested bitrate.
On pass 1, you may just deactivate audio and set output to null,
examples for Windows and Unix:
avconv -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y NUL
avconv -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y /dev/null
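The second pass then reads the log written by pass 1 and produces the real output. A sketch (the bitrate value and output name are illustrative additions):

```shell
# Pass 2: reuse the pass-1 statistics log to hit the requested average bitrate.
avconv -i foo.mov -c:v libxvid -b:v 1000k -pass 2 out.avi
```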
‘-vf filter_graph (output)’
filter_graph is a description of the filter graph to apply to the input
video. This is an alias for -filter:v.
‘-pix_fmt[:stream_specifier] format (input/output,per-stream)’
Set the pixel format. Use -pix_fmts to show all the supported
pixel formats.
‘-sws_flags flags (input/output)’
Set SwScaler flags.
‘-vdt n’
Discard threshold.
‘-rc_override[:stream_specifier] override (output,per-stream)’
rate control override for specific intervals
‘-vstats’
Dump video coding statistics to ‘vstats_HHMMSS.log’.
‘-vstats_file file’
Dump video coding statistics to file.
‘-top[:stream_specifier] n (output,per-stream)’
top=1/bottom=0/auto=-1 field first
‘-dc precision’
Intra_dc_precision.
‘-vtag fourcc/tag (output)’
Force video tag/fourcc. This is an alias for -tag:v.
‘-qphist (global)’
Show QP histogram.
‘-force_key_frames[:stream_specifier] time[,time...] (output,per-stream)’
Force key frames at the specified timestamps, more precisely at the first
frames after each specified time.
This option can be useful to ensure that a seek point is present at a
chapter mark or any other designated place in the output file.
The timestamps must be specified in ascending order.
‘-copyinkf[:stream_specifier] (output,per-stream)’
When doing stream copy, copy also non-key frames found at the
beginning.
‘-init_hw_device type[=name][:device[,key=value...]]’
Initialise a new hardware device of type type called name, using the
given device parameters.
If no name is specified it will receive a default name of the form "type%d".
The meaning of device and the following arguments depends on the
device type:
‘cuda’
device is the number of the CUDA device.
‘dxva2’
device is the number of the Direct3D 9 display adapter.
‘vaapi’
device is either an X11 display name or a DRM render node.
If not specified, it will attempt to open the default X11 display ($DISPLAY)
and then the first DRM render node (/dev/dri/renderD128).
‘vdpau’
device is an X11 display name.
If not specified, it will attempt to open the default X11 display ($DISPLAY).
‘qsv’
device selects a value in ‘MFX_IMPL_*’. Allowed values are:
‘auto’
‘sw’
‘hw’
‘auto_any’
‘hw_any’
‘hw2’
‘hw3’
‘hw4’
If not specified, ‘auto_any’ is used.
(Note that it may be easier to achieve the desired result for QSV by creating the
platform-appropriate subdevice (‘dxva2’ or ‘vaapi’) and then deriving a
QSV device from that.)
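That derivation could be sketched as follows (the labels va and qs, the render node path, and the codec choice are illustrative assumptions):

```shell
# Create a VAAPI device on a render node, then derive a QSV device from it.
avconv -init_hw_device vaapi=va:/dev/dri/renderD128 \
       -init_hw_device qsv=qs@va \
       -hwaccel qsv -i input.mp4 -c:v h264_qsv out.mp4
```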
‘-init_hw_device type[=name]@source’
Initialise a new hardware device of type type called name,
deriving it from the existing device with the name source.
‘-init_hw_device list’
List all hardware device types supported in this build of avconv.
‘-filter_hw_device name’
Pass the hardware device called name to all filters in any filter graph.
This can be used to set the device to upload to with the hwupload filter,
or the device to map to with the hwmap filter. Other filters may also
make use of this parameter when they require a hardware device. Note that this
is typically only required when the input is not already in hardware frames -
when it is, filters will derive the device they require from the context of the
frames they receive as input.
This is a global setting, so all filters will receive the same device.
Do not use this option in scripts that should remain functional in future
avconv versions.
‘-hwaccel[:stream_specifier] hwaccel (input,per-stream)’
Use hardware acceleration to decode the matching stream(s). The allowed values
of hwaccel are:
‘none’
Do not use any hardware acceleration (the default).
‘auto’
Automatically select the hardware acceleration method.
‘vda’
Use Apple VDA hardware acceleration.
‘vdpau’
Use VDPAU (Video Decode and Presentation API for Unix) hardware acceleration.
‘dxva2’
Use DXVA2 (DirectX Video Acceleration) hardware acceleration.
‘vaapi’
Use VAAPI (Video Acceleration API) hardware acceleration.
‘qsv’
Use the Intel QuickSync Video acceleration for video transcoding.
Unlike most other values, this option does not enable accelerated decoding (that
is used automatically whenever a qsv decoder is selected), but accelerated
transcoding, without copying the frames into the system memory.
For it to work, both the decoder and the encoder must support QSV acceleration
and no filters must be used.
This option has no effect if the selected hwaccel is not available or not
supported by the chosen decoder.
Note that most acceleration methods are intended for playback and will not be
faster than software decoding on modern CPUs. Additionally, avconv
will usually need to copy the decoded frames from the GPU memory into the system
memory, resulting in further performance loss. This option is thus mainly
useful for testing.
‘-hwaccel_device[:stream_specifier] hwaccel_device (input,per-stream)’
Select a device to use for hardware acceleration.
This option only makes sense when the ‘-hwaccel’ option is also specified.
It can either refer to an existing device created with ‘-init_hw_device’
by name, or it can create a new device as if
‘-init_hw_device’ type:hwaccel_device
were called immediately before.
‘-hwaccels’
List all hardware acceleration methods supported in this build of avconv.
‘-aframes number (output)’
Set the number of audio frames to output. This is an obsolete alias for
-frames:a, which you should use instead.
‘-ar[:stream_specifier] freq (input/output,per-stream)’
Set the audio sampling frequency. For output streams it is set by
default to the frequency of the corresponding input stream. For input
streams this option only makes sense for audio grabbing devices and raw
demuxers and is mapped to the corresponding demuxer options.
‘-aq q (output)’
Set the audio quality (codec-specific, VBR). This is an alias for -q:a.
‘-ac[:stream_specifier] channels (input/output,per-stream)’
Set the number of audio channels. For output streams it is set by
default to the number of input audio channels. For input streams
this option only makes sense for audio grabbing devices and raw demuxers
and is mapped to the corresponding demuxer options.
‘-an (output)’
Disable audio recording.
‘-acodec codec (input/output)’
Set the audio codec. This is an alias for -codec:a.
‘-sample_fmt[:stream_specifier] sample_fmt (output,per-stream)’
Set the audio sample format. Use -sample_fmts to get a list
of supported sample formats.
‘-af filter_graph (output)’
filter_graph is a description of the filter graph to apply to
the input audio.
Use the option "-filters" to show all the available filters (including
also sources and sinks). This is an alias for -filter:a.
‘-atag fourcc/tag (output)’
Force the audio tag/fourcc. This is an alias for -tag:a.
‘-scodec codec (input/output)’
Set the subtitle codec. This is an alias for -codec:s.
‘-sn (output)’
Disable subtitle recording.
The first -map option on the command line specifies the
source for output stream 0, the second -map option specifies
the source for output stream 1, etc.
A - character before the stream identifier creates a "negative" mapping.
It disables matching streams from already created mappings.
An alternative [linklabel] form will map outputs from complex filter
graphs (see the ‘-filter_complex’ option) to the output file.
linklabel must correspond to a defined output link label in the graph.
For example, to map ALL streams from the first input file to output:
avconv -i INPUT -map 0 output
You can use -map to select which streams to place in an output file. For
example:
avconv -i INPUT -map 0:1 out.wav
will map the input stream in INPUT identified by 0:1 to the (single) output
stream in out.wav.
To select the stream with index 2 from input file a.mov and the stream with
index 6 from input b.mov, and copy them to the output file out.mov:
avconv -i a.mov -i b.mov -c copy -map 0:2 -map 1:6 out.mov
To select all video and the third audio stream from an input file:
avconv -i INPUT -map 0:v -map 0:a:2 OUTPUT
To map all the streams except the second audio, use negative mappings:
avconv -i INPUT -map 0 -map -0:a:1 OUTPUT
To pick the English audio stream:
avconv -i INPUT -map 0:m:language:eng OUTPUT
For example, to copy metadata from the first stream of the input file to the
global metadata of the output file:
avconv -i in.ogg -map_metadata 0:s:0 out.mp3
To do the reverse, i.e. copy global metadata to all audio streams:
avconv -i in.mkv -map_metadata:s:a 0:g out.mkv
Note that simple 0 would work as well in this example, since global
metadata is assumed by default.
‘-map_chapters input_file_index (output)’
Copy chapters from input file with index input_file_index to the next
output file. If no chapter mapping is specified, then chapters are copied from
the first input file with at least one chapter. Use a negative file index to
disable any chapter copying.
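For example, a sketch taking all streams from the second input but chapters from the first (filenames are illustrative):

```shell
# Chapters come from input 0; audio/video come from input 1, stream-copied.
avconv -i chapters.mkv -i movie.mkv -map 1 -map_chapters 0 -c copy out.mkv
```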
‘-debug’
Print specific debug info.
‘-benchmark (global)’
Show benchmarking information at the end of an encode.
Shows CPU time used and maximum memory consumption.
Maximum memory consumption is not supported on all systems,
it will usually display as 0 if not supported.
‘-timelimit duration (global)’
Exit after avconv has been running for duration seconds.
‘-dump (global)’
Dump each input packet to stderr.
‘-hex (global)’
When dumping packets, also dump the payload.
‘-re (input)’
Read input at native frame rate. Mainly used to simulate a grab device
or live input stream (e.g. when reading from a file). Should not be used
with actual grab devices or live input streams (where it can cause packet
loss).
‘-vsync parameter’
Video sync method.
‘passthrough’
Each frame is passed with its timestamp from the demuxer to the muxer.
‘cfr’
Frames will be duplicated and dropped to achieve exactly the requested
constant framerate.
‘vfr’
Frames are passed through with their timestamp or dropped so as to
prevent 2 frames from having the same timestamp.
‘auto’
Chooses between cfr and vfr depending on muxer capabilities. This is the
default method.
With -map you can select from which stream the timestamps should be
taken. You can leave either video or audio unchanged and sync the
remaining stream(s) to the unchanged one.
‘-async samples_per_second’
Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps,
the parameter is the maximum samples per second by which the audio is changed.
-async 1 is a special case where only the start of the audio stream is corrected
without any later correction.
This option has been deprecated. Use the asyncts audio filter instead.
‘-copyts’
Copy timestamps from input to output.
‘-copytb’
Copy input stream time base from input to output when stream copying.
‘-shortest (output)’
Finish encoding when the shortest input stream ends.
‘-dts_delta_threshold’
Timestamp discontinuity delta threshold.
‘-muxdelay seconds (input)’
Set the maximum demux-decode delay.
‘-muxpreload seconds (input)’
Set the initial demux-decode delay.
‘-streamid output-stream-index:new-value (output)’
Assign a new stream-id value to an output stream. This option should be
specified prior to the output filename to which it applies.
For the situation where multiple output files exist, a streamid
may be reassigned to a different value.
For example, to set the stream 0 PID to 33 and the stream 1 PID to 36 for
an output mpegts file:
avconv -i infile -streamid 0:33 -streamid 1:36 out.ts
‘-bsf[:stream_specifier] bitstream_filters (output,per-stream)’
Set bitstream filters for matching streams. Use the -bsfs option
to get the list of bitstream filters.
avconv -i h264.mp4 -c:v copy -bsf:v h264_mp4toannexb -an out.h264
avconv -i file.mov -an -vn -bsf:s mov2textsub -c:s copy -f rawvideo sub.txt
Input link labels must refer to input streams using the
[file_index:stream_specifier] syntax (i.e. the same as ‘-map’
uses). If stream_specifier matches multiple streams, the first one will be
used. An unlabeled input will be connected to the first unused input stream of
the matching type.
Output link labels are referred to with ‘-map’. Unlabeled outputs are
added to the first output file.
Note that with this option it is possible to use only lavfi sources without
normal input files.
For example, to overlay an image over video:
avconv -i video.mkv -i image.png -filter_complex '[0:v][1:v]overlay[out]' -map '[out]' out.mkv
[0:v] refers to the first video stream in the first input file,
which is linked to the first (main) input of the overlay filter. Similarly the
first video stream in the second input is linked to the second (overlay) input
of overlay.
Assuming there is only one video stream in each input file, we can omit input
labels, so the above is equivalent to
avconv -i video.mkv -i image.png -filter_complex 'overlay[out]' -map '[out]' out.mkv
Furthermore we can omit the output label and the single output from the filter
graph will be added to the output file automatically, so we can simply write
avconv -i video.mkv -i image.png -filter_complex 'overlay' out.mkv
To generate 5 seconds of pure red video using the lavfi color source:
avconv -filter_complex 'color=red' -t 5 out.mkv
avconv -g 3 -r 3 -t 10 -b 50k -s qcif -f rv10 /tmp/b.rm
Like the pre option, this option takes a
preset name as input. avconv searches for a file named preset_name.avpreset in
the directories ‘$AVCONV_DATADIR’ (if set), and ‘$HOME/.avconv’, and in
the data directory defined at configuration time (usually ‘$PREFIX/share/avconv’)
in that order. For example, if the argument is libx264-max, it will
search for the file ‘libx264-max.avpreset’.
●For video and audio grabbing from a live source:
avconv -f oss -i /dev/dsp -f video4linux2 -i /dev/video0 /tmp/out.mpg
●X11 grabbing:
avconv -f x11grab -s cif -r 25 -i :0.0 /tmp/out.mpg
0.0 is the display.screen number of your X11 server, same as the DISPLAY
environment variable.
avconv -f x11grab -s cif -r 25 -i :0.0+10,20 /tmp/out.mpg
10 is the x-offset and 20 the y-offset for the grabbing.
●You can convert from YUV files:
avconv -i /tmp/test%d.Y /tmp/out.mpg
It will use the files:
/tmp/test0.Y, /tmp/test0.U, /tmp/test0.V, /tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc.
●You can input from a raw YUV420P file:
avconv -i /tmp/test.yuv /tmp/out.avi
●You can output to a raw YUV420P file:
avconv -i mydivx.avi hugefile.yuv
●You can set several input files and output files:
avconv -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg
●Converts a.wav to MPEG audio at a 22050 Hz sample rate:
avconv -i /tmp/a.wav -ar 22050 /tmp/a.mp2
●You can encode to several formats at the same time and define a mapping from
input streams to output streams:
avconv -i /tmp/a.wav -map 0:a -b 64k /tmp/a.mp2 -map 0:a -b 128k /tmp/b.mp2
●You can transcode decrypted VOBs:
avconv -i snatch_1.vob -f avi -c:v mpeg4 -b:v 800k -g 300 -bf 2 -c:a libmp3lame -b:a 128k snatch.avi
Note that to use the MP3 audio encoder you must pass
--enable-libmp3lame to configure.
The mapping is particularly useful for DVD transcoding
to get the desired audio language.
NOTE: To see the supported input formats, use avconv -formats.
●
You can extract images from a video, or create a video from many images:
For extracting images from a video:
avconv -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
This will extract one video frame per second from the video. The extraction
can be limited with the -frames:v or -t option,
or combined with -ss to start extracting from a certain point in time.
For creating a video from many images:
avconv -f image2 -i foo-%03d.jpeg -r 12 -s WxH foo.avi
foo-%03d.jpeg specifies to use a decimal number
composed of three digits padded with zeroes to express the sequence
number. It is the same syntax supported by the C printf function, but
only formats accepting a normal integer are suitable.
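The pattern expands exactly like C printf; e.g. frame number 7 under the pattern foo-%03d.jpeg:

```shell
# Zero-padded three-digit sequence number, as used by the image2 pattern.
printf 'foo-%03d.jpeg\n' 7   # foo-007.jpeg
```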
●
You can put many streams of the same type in the output:
avconv -i test1.avi -i test2.avi -map 1:1 -map 1:0 -map 0:1 -map 0:0 -c copy -y test12.nut
●You can force CBR video output:
avconv -i myfile.avi -b 4000k -minrate 4000k -maxrate 4000k -bufsize 1835k out.m2v
●The four options lmin, lmax, mblmin and mblmax use 'lambda' units, but you may
use the QP2LAMBDA constant to easily convert from 'q' units:
avconv -i src.ext -lmax 21*QP2LAMBDA dst.ext
The following binary operators are available: +, -, *, /, ^.
The following unary operators are available: +, -.
The following functions are available:
‘sinh(x)’
‘cosh(x)’
‘tanh(x)’
‘sin(x)’
‘cos(x)’
‘tan(x)’
‘atan(x)’
‘asin(x)’
‘acos(x)’
‘exp(x)’
‘log(x)’
‘abs(x)’
‘squish(x)’
‘gauss(x)’
‘isinf(x)’
Return 1.0 if x is +/-INFINITY, 0.0 otherwise.
‘isnan(x)’
Return 1.0 if x is NAN, 0.0 otherwise.
‘mod(x, y)’
‘max(x, y)’
‘min(x, y)’
‘eq(x, y)’
‘gte(x, y)’
‘gt(x, y)’
‘lte(x, y)’
‘lt(x, y)’
‘st(var, expr)’
Store the value of the expression expr in an internal
variable. var specifies the number of the variable where to
store the value, and it is a value ranging from 0 to 9. The function
returns the value stored in the internal variable.
‘ld(var)’
Load the value of the internal variable with number
var, which was previously stored with st(var, expr).
The function returns the loaded value.
‘while(cond, expr)’
Evaluate expression expr while the expression cond is
non-zero, and returns the value of the last expr evaluation, or
NAN if cond was always false.
‘ceil(expr)’
Round the value of expression expr upwards to the nearest
integer. For example, "ceil(1.5)" is "2.0".
‘floor(expr)’
Round the value of expression expr downwards to the nearest
integer. For example, "floor(-1.5)" is "-2.0".
‘trunc(expr)’
Round the value of expression expr towards zero to the nearest
integer. For example, "trunc(-1.5)" is "-1.0".
‘sqrt(expr)’
Compute the square root of expr. This is equivalent to
"(expr)^.5".
‘not(expr)’
Return 1.0 if expr is zero, 0.0 otherwise.
Note that:
* works like AND
+ works like OR
thus
if A then B else C
is equivalent to
A*B + not(A)*C
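This identity can be checked with ordinary integer arithmetic. A sketch in shell arithmetic, writing not(A) as (1-A) for A restricted to 0 or 1:

```shell
# "if A then B else C" expressed as A*B + not(A)*C, with A in {0,1}.
A=1; B=5; C=7
echo $((A * B + (1 - A) * C))   # 5 -> B, since A is true
A=0
echo $((A * B + (1 - A) * C))   # 7 -> C, since A is false
```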
Decoders requiring an external library must be enabled manually via the
corresponding --enable-lib option. You can list all
available decoders using the configure option --list-decoders.
You can disable all the decoders with the configure option
--disable-decoders and selectively enable / disable single decoders
with the options --enable-decoder=DECODER /
--disable-decoder=DECODER.
The option -decoders of the av* tools will display the list of
enabled decoders.
Encoders requiring an external library must be enabled manually via the
corresponding --enable-lib option. You can list all
available encoders using the configure option --list-encoders.
You can disable all the encoders with the configure option
--disable-encoders and selectively enable / disable single encoders
with the options --enable-encoder=ENCODER /
--disable-encoder=ENCODER.
The option -encoders of the av* tools will display the list of
enabled encoders.
This encoder is not selected by default; you must explicitly request it with
-c:a ac3_fixed in order to use it.
If the room_type option is not the default value, the mixing_level
option must not be -1.
‘-room_type type’
Room Type. Describes the equalization used during the final mixing session at
the studio or on the dubbing stage. A large room is a dubbing stage with the
industry standard X-curve equalization; a small room has flat equalization.
This field will not be written to the bitstream if both the mixing_level
option and the room_type option have the default values.
‘0’
‘notindicated’
Not Indicated (default)
‘1’
‘large’
Large Room
‘2’
‘small’
Small Room
A decoder will use the center_mixlev
and surround_mixlev options if it supports the Alternate Bit Stream
Syntax.
Average SSIM: %f
SSIM: avg: %1.3f min: %1.3f max: %1.3f
| Libav option | x264 option | Description |
| b | bitrate | Libav's b option is expressed in bits/s, x264's bitrate in kilobits/s. |
| bf | bframes | Maximum number of B-frames. |
| g | keyint | Maximum GOP size. |
| qmin | qpmin | Minimum quantizer scale. |
| qmax | qpmax | Maximum quantizer scale. |
| qdiff | qpstep | Maximum difference between quantizer scales. |
| qblur | qblur | Quantizer curve blur |
| qcomp | qcomp | Quantizer curve compression factor |
| refs | ref | Number of reference frames each P-frame can use. The range is from 0-16. |
| sc_threshold | scenecut | Sets the threshold for the scene change detection. |
| trellis | trellis | Performs Trellis quantization to increase efficiency. Enabled by default. |
| nr | nr | Noise reduction. |
| me_range | merange | Maximum range of the motion search in pixels. |
| subq | subme | Sub-pixel motion estimation method. |
| b_strategy | b-adapt | Adaptive B-frame placement decision algorithm. Use only on first-pass. |
| keyint_min | min-keyint | Minimum GOP size. |
| coder | cabac | Set coder to ac to use CABAC. |
| cmp | chroma-me | Set to chroma to use chroma motion estimation. |
| threads | threads | Number of encoding threads. |
| thread_type | sliced_threads | Set to slice to use sliced threading instead of frame threading. |
| flags -cgop | open-gop | Set -cgop to use recovery points to close GOPs. |
| rc_init_occupancy | vbv-init | Initial buffer occupancy. |
-x264-params level=30:bframes=0:weightp=0:cabac=0:ref=1:vbv-maxrate=768:vbv-bufsize=2000:analyse=all:me=umh:no-fast-pskip=1:subq=6:8x8dct=0:trellis=0 |
-pre option).
avconv for creating a
video from the images in the file sequence ‘img-001.jpeg’,
‘img-002.jpeg’, ..., assuming an input framerate of 10 frames per
second:
avconv -i 'img-%03d.jpeg' -r 10 out.mkv |
avconv -i img.jpeg img.png |
--list-muxers.
You can disable all the muxers with the configure option
--disable-muxers and selectively enable / disable single muxers
with the options --enable-muxer=MUXER /
--disable-muxer=MUXER.
The option -formats of the av* tools will display the list of
enabled muxers.
A description of some of the currently available muxers follows.
avconv -i INPUT -f crc out.crc |
avconv -i INPUT -f crc - |
avconv by
specifying the audio and video codec and format. For example to
compute the CRC of the input audio converted to PCM unsigned 8-bit
and the input video converted to MPEG-2 video, use the command:
avconv -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc - |
avconv -re -i <input> -map 0 -map 0 -c:a libfdk_aac -c:v libx264 -b:v:0 800k -b:v:1 300k -s:v:1 320x170 -profile:v:1 baseline -profile:v:0 main -bf 1 -keyint_min 120 -g 120 -sc_threshold 0 -b_strategy 0 -ar:a:1 22050 -use_timeline 1 -use_template 1 -window_size 5 -adaptation_sets "id=0,streams=v id=1,streams=a" -f dash /path/to/out.mpd |
avconv -i INPUT -f framecrc out.crc |
avconv -i INPUT -f framecrc - |
avconv by
specifying the audio and video codec and format. For example, to
compute the CRC of each decoded input audio frame converted to PCM
unsigned 8-bit and of each decoded input video frame converted to
MPEG-2 video, use the command:
avconv -i INPUT -c:a pcm_u8 -c:v mpeg2video -f framecrc - |
avconv -i in.mkv -c:v h264 -flags +cgop -g 30 -hls_time 1 out.m3u8 |
avconv for creating a
sequence of files ‘img-001.jpeg’, ‘img-002.jpeg’, ...,
taking one image every second from the input video:
avconv -i in.avi -vsync 1 -r 1 -f image2 'img-%03d.jpeg' |
avconv, if the format is not specified with the
-f option and the output filename specifies an image file
format, the image2 muxer is automatically selected, so the previous
command can be written as:
avconv -i in.avi -vsync 1 -r 1 'img-%03d.jpeg' |
avconv -i in.avi -f image2 -frames:v 1 img.jpeg |
avconv -i sample_left_right_clip.mpg -an -c:v libvpx -metadata STEREO_MODE=left_right -y stereo_clip.webm |
qt-faststart tool). A fragmented
file consists of a number of fragments, where packets and metadata
about these packets are stored together. Writing a fragmented
file has the advantage that the file is decodable even if the
writing is interrupted (while a normal MOV/MP4 is undecodable if
it is not properly finished), and it requires less memory when writing
very long files (since writing normal MOV/MP4 files stores info about
every single packet in memory until the file is closed). The downside
is that it is less compatible with other applications.
Fragmentation is enabled by setting one of the AVOptions that define
how to cut the file into fragments:
‘-movflags frag_keyframe’
Start a new fragment at each video keyframe.
‘-frag_duration duration’
Create fragments that are duration microseconds long.
‘-frag_size size’
Create fragments that contain up to size bytes of payload data.
‘-movflags frag_custom’
Allow the caller to manually choose when to cut fragments, by
calling av_write_frame(ctx, NULL) to write a fragment with
the packets written so far. (This is only useful with other
applications integrating libavformat, not from avconv.)
‘-min_frag_duration duration’
Don’t create fragments that are shorter than duration microseconds long.
If more than one condition is specified, fragments are cut when
one of the specified conditions is fulfilled. The exception to this is
-min_frag_duration, which has to be fulfilled for any of the other
conditions to apply.
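For instance (filenames hypothetical), the conditions can be combined: a sketch that starts a new fragment at the first keyframe, or after 5 seconds, whichever comes first, while never cutting before 1 second has elapsed:

```shell
# frag_keyframe and frag_duration are ORed; min_frag_duration must
# additionally hold before any cut is made. Durations are in microseconds.
avconv -i input.mp4 -c copy -movflags frag_keyframe \
    -frag_duration 5000000 -min_frag_duration 1000000 fragmented.mp4
```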
Additionally, the way the output file is written can be adjusted
through a few other options:
‘-movflags empty_moov’
Write an initial moov atom directly at the start of the file, without
describing any samples in it. Generally, an mdat/moov pair is written
at the start of the file, as a normal MOV/MP4 file, containing only
a short portion of the file. With this option set, there is no initial
mdat atom, and the moov atom only describes the tracks but has
a zero duration.
This option is implicitly set when writing ismv (Smooth Streaming) files.
‘-movflags separate_moof’
Write a separate moof (movie fragment) atom for each track. Normally,
packets for all tracks are written in a moof atom (which is slightly
more efficient), but with this option set, the muxer writes one moof/mdat
pair for each track, making it easier to separate tracks.
This option is implicitly set when writing ismv (Smooth Streaming) files.
‘-movflags faststart’
Run a second pass moving the index (moov atom) to the beginning of the file.
This operation can take a while, and will not work in various situations such
as fragmented output, thus it is not enabled by default.
‘-movflags disable_chpl’
Disable Nero chapter markers (chpl atom). Normally, both Nero chapters
and a QuickTime chapter track are written to the file. With this option
set, only the QuickTime chapter track will be written. Nero chapters can
cause failures when the file is reprocessed with certain tagging programs.
‘-movflags omit_tfhd_offset’
Do not write any absolute base_data_offset in tfhd atoms. This avoids
tying fragments to absolute byte positions in the file/streams.
‘-movflags default_base_moof’
Similarly to omit_tfhd_offset, this flag avoids writing the
absolute base_data_offset field in tfhd atoms, but does so by using
the new default-base-is-moof flag instead. This flag was introduced in
ISO/IEC 14496-12:2012. This may make the fragments easier to parse in certain
circumstances (avoiding basing track fragment location calculations
on the implicit end of the previous track fragment).
Smooth Streaming content can be pushed in real time to a publishing
point on IIS with this muxer. Example:
avconv -re <normal input/transcoding options> -movflags isml+frag_keyframe -f ismv http://server/publishingpoint.isml/Streams(Encoder1) |
id3v2_version private option controls which one is
used (3 or 4). Setting id3v2_version to 0 disables the ID3v2 header
completely.
The muxer supports writing attached pictures (APIC frames) to the ID3v2 header.
The pictures are supplied to the muxer in form of a video stream with a single
packet. There can be any number of those streams; each will correspond to a
single APIC frame. The stream metadata tags title and comment map
to APIC description and picture type respectively. See
http://id3.org/id3v2.4.0-frames for allowed picture types.
Note that the APIC frames must be written at the beginning, so the muxer will
buffer the audio frames until it gets all the pictures. It is therefore advised
to provide the pictures as soon as possible to avoid excessive buffering.
●
A Xing/LAME frame right after the ID3v2 header (if present). It is enabled by
default, but will be written only if the output is seekable. The
write_xing private option can be used to disable it. The frame contains
various information that may be useful to the decoder, like the audio duration
or encoder delay.
●
A legacy ID3v1 tag at the end of the file (disabled by default). It may be
enabled with the write_id3v1 private option, but as its capabilities are
very limited, its usage is not recommended.
Examples:
Write an mp3 with an ID3v2.3 header and an ID3v1 footer:
avconv -i INPUT -id3v2_version 3 -write_id3v1 1 out.mp3 |
avconv -i input.mp3 -i cover.png -c copy -metadata:s:v title="Album cover" -metadata:s:v comment="Cover (Front)" out.mp3 |
avconv -i input.wav -write_xing 0 -id3v2_version 0 out.mp3 |
service_provider
and service_name. If they are not set, the default for
service_provider is "Libav" and the default for
service_name is "Service01".
avconv -i file.mpg -c copy \
-mpegts_original_network_id 0x1122 \
-mpegts_transport_stream_id 0x3344 \
-mpegts_service_id 0x5566 \
-mpegts_pmt_start_pid 0x1500 \
-mpegts_start_pid 0x150 \
-metadata service_provider="Some provider" \
-metadata service_name="Some Channel" \
-y out.ts |
avconv you can use the
command:
avconv -benchmark -i INPUT -f null out.null |
avconv
syntax.
Alternatively you can write the command as:
avconv -benchmark -i INPUT -f null - |
avconv -i INPUT -f_strict experimental -syncpoints none - | processor |
avconv -i in.mkv -c hevc -flags +cgop -g 60 -map 0 -f segment -list out.list out%03d.nut |
hw:CARD[,DEV[,SUBDEV]] |
avconv from an ALSA device with
card id 0, you may run the command:
avconv -f alsa -i hw:0 alsaout.wav |
avconv:
avconv -f fbdev -r 10 -i /dev/fb0 out.avi |
avconv -f fbdev -frames:v 1 -r 1 -i /dev/fb0 screenshot.jpeg |
avconv.
# Create a JACK writable client with name "libav".
$ avconv -f jack -i libav -y out.wav

# Start the sample jack_metro readable client.
$ jack_metro -b 120 -d 0.2 -f 4000

# List the current JACK clients.
$ jack_lsp -c
system:capture_1
system:capture_2
system:playback_1
system:playback_2
libav:input_1
metro:120_bpm

# Connect metro to the avconv writable client.
$ jack_connect metro:120_bpm libav:input_1 |
avconv use the
command:
avconv -f oss -i /dev/dsp /tmp/oss.wav |
avconv -f pulse -i default /tmp/pulse.wav |
-server server name |
-name application name |
-stream_name stream name |
-sample_rate samplerate |
-channels N |
-frame_size bytes |
-fragment_size bytes |
avconv use the
command:
avconv -f sndio -i /dev/audio0 /tmp/oss.wav |
-list_formats all for Video4Linux2 devices.
Some usage examples of the video4linux2 devices with avconv and avplay:
# List supported formats for a video4linux2 device.
avplay -f video4linux2 -list_formats all /dev/video0

# Grab and show the input of a video4linux2 device.
avplay -f video4linux2 -framerate 30 -video_size hd720 /dev/video0

# Grab and record the input of a video4linux2 device; leave the
# framerate and size as previously set.
avconv -f video4linux2 -input_format mjpeg -i /dev/video0 out.mpeg |
[hostname]:display_number.screen_number[+x_offset,y_offset] |
DISPLAY contains the default display name.
x_offset and y_offset specify the offsets of the grabbed
area with respect to the top-left border of the X11 screen. They
default to 0.
Check the X11 documentation (e.g. man X) for more detailed information.
Use the ‘xdpyinfo’ program for getting basic information about the
properties of your X11 display (e.g. grep for "name" or "dimensions").
For example to grab from ‘:0.0’ using avconv:
avconv -f x11grab -r 25 -s cif -i :0.0 out.mpg

# Grab at position 10,20.
avconv -f x11grab -r 25 -s cif -i :0.0+10,20 out.mpg |
-follow_mouse centered|PIXELS |
avconv -f x11grab -follow_mouse centered -r 25 -s cif -i :0.0 out.mpg

# Follow only when the mouse pointer is within 100 pixels of an edge.
avconv -f x11grab -follow_mouse 100 -r 25 -s cif -i :0.0 out.mpg |
-show_region 1 |
avconv -f x11grab -show_region 1 -r 25 -s cif -i :0.0+10,20 out.mpg

# With follow_mouse:
avconv -f x11grab -follow_mouse centered -show_region 1 -r 25 -s cif -i :0.0 out.mpg |
-grab_x x_offset -grab_y y_offset |
concat:URL1|URL2|...|URLN |
avplay use the
command:
avplay concat:split1.mpeg\|split2.mpeg\|split3.mpeg |
avconv
use the command:
avconv -i file:input.mpeg output.mpeg |
hls+http://host/path/to/remote/resource.m3u8 hls+file://path/to/local/resource.m3u8 |
mmsh://server[:port][/app][/playpath] |
# Write the MD5 hash of the encoded AVI file to the file output.avi.md5.
avconv -i input.flv -f avi -y md5:output.avi.md5

# Write the MD5 hash of the encoded AVI file to stdout.
avconv -i input.flv -f avi -y md5: |
pipe:[number] |
avconv:
cat test.wav | avconv -i pipe:0
# ...this is the same as...
cat test.wav | avconv -i pipe: |
avconv:
avconv -i test.wav -f avi pipe:1 | cat > test.avi
# ...this is the same as...
avconv -i test.wav -f avi pipe: | cat > test.avi |
rtmp://[username:password@]server[:port][/app][/instance][/playpath] |
rtmp_app option, too.
‘playpath’
It is the path or name of the resource to play with reference to the
application specified in app, may be prefixed by "mp4:". You
can override the value parsed from the URI through the rtmp_playpath
option, too.
‘listen’
Act as a server, listening for an incoming connection.
‘timeout’
Maximum time to wait for the incoming connection. Implies listen.
Additionally, the following parameters can be set via command line options
(or in code via AVOptions):
‘rtmp_app’
Name of application to connect on the RTMP server. This option
overrides the parameter specified in the URI.
‘rtmp_buffer’
Set the client buffer time in milliseconds. The default is 3000.
‘rtmp_conn’
Extra arbitrary AMF connection parameters, parsed from a string,
e.g. like B:1 S:authMe O:1 NN:code:1.23 NS:flag:ok O:0.
Each value is prefixed by a single character denoting the type,
B for Boolean, N for number, S for string, O for object, or Z for null,
followed by a colon. For Booleans the data must be either 0 or 1 for
FALSE or TRUE, respectively. Likewise for Objects the data must be 0 or
1 to end or begin an object, respectively. Data items in subobjects may
be named, by prefixing the type with ’N’ and specifying the name before
the value (i.e. NB:myFlag:1). This option may be used multiple
times to construct arbitrary AMF sequences.
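A sketch of repeating the option (server, application, and parameter names are hypothetical):

```shell
# Pass two extra AMF connect parameters: a Boolean and a string.
avconv -re -i input.flv -c copy -f flv \
    -rtmp_conn "B:1" -rtmp_conn "S:authToken" rtmp://server/app/stream
```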
‘rtmp_flashver’
Version of the Flash plugin used to run the SWF player. The default
is LNX 9,0,124,2. (When publishing, the default is FMLE/3.0 (compatible;
<libavformat version>).)
‘rtmp_flush_interval’
Number of packets flushed in the same request (RTMPT only). The default
is 10.
‘rtmp_live’
Specify that the media is a live stream. No resuming or seeking in
live streams is possible. The default value is any, which means the
subscriber first tries to play the live stream specified in the
playpath. If a live stream of that name is not found, it plays the
recorded stream. The other possible values are live and
recorded.
‘rtmp_pageurl’
URL of the web page in which the media was embedded. By default no
value will be sent.
‘rtmp_playpath’
Stream identifier to play or to publish. This option overrides the
parameter specified in the URI.
‘rtmp_subscribe’
Name of live stream to subscribe to. By default no value will be sent.
It is only sent if the option is specified or if rtmp_live
is set to live.
‘rtmp_swfhash’
SHA256 hash of the decompressed SWF file (32 bytes).
‘rtmp_swfsize’
Size of the decompressed SWF file, required for SWFVerification.
‘rtmp_swfurl’
URL of the SWF player for the media. By default no value will be sent.
‘rtmp_swfverify’
URL to player swf file, compute hash/size automatically.
‘rtmp_tcurl’
URL of the target stream. Defaults to proto://host[:port]/app.
For example to read with avplay a multimedia resource named
"sample" from the application "vod" from an RTMP server "myserver":
avplay rtmp://myserver/vod/sample |
avconv -re -i <input> -f flv -rtmp_playpath some/long/path -rtmp_app long/app/name rtmp://username:password@myserver/ |
rtmp_proto://server[:port][/app][/playpath] options |
avconv:
avconv -re -i myfile -f flv rtmp://myserver/live/mystream |
avplay:
avplay "rtmp://myserver/live/mystream live=1" |
rtsp://hostname[:port]/path |
avconv/avplay command
line, or set in code via AVOptions or in avformat_open_input),
are supported:
Flags for rtsp_transport:
‘udp’
Use UDP as lower transport protocol.
‘tcp’
Use TCP (interleaving within the RTSP control channel) as lower
transport protocol.
‘udp_multicast’
Use UDP multicast as lower transport protocol.
‘http’
Use HTTP tunneling as lower transport protocol, which is useful for
passing proxies.
Multiple lower transport protocols may be specified, in that case they are
tried one at a time (if the setup of one fails, the next one is tried).
For the muxer, only the tcp and udp options are supported.
Flags for rtsp_flags:
‘filter_src’
Accept packets only from negotiated peer address and port.
‘listen’
Act as a server, listening for an incoming connection.
When receiving data over UDP, the demuxer tries to reorder received packets
(since they may arrive out of order, or packets may get lost totally). This
can be disabled by setting the maximum demuxing delay to zero (via
the max_delay field of AVFormatContext).
When watching multi-bitrate Real-RTSP streams with avplay, the
streams to display can be chosen with -vst n and
-ast n for video and audio respectively, and can be switched
on the fly by pressing v and a.
Example command lines:
To watch a stream over UDP, with a max reordering delay of 0.5 seconds:
avplay -max_delay 500000 -rtsp_transport udp rtsp://server/video.mp4 |
avplay -rtsp_transport http rtsp://server/video.mp4 |
avconv -re -i input -f rtsp -muxdelay 0.1 rtsp://server/live.sdp |
avconv -rtsp_flags listen -i rtsp://ownaddress/live.sdp output |
sap://destination[:port][?options] |
&-separated list. The following options
are supported:
‘announce_addr=address’
Specify the destination IP address for sending the announcements to.
If omitted, the announcements are sent to the commonly used SAP
announcement multicast address 224.2.127.254 (sap.mcast.net), or
ff0e::2:7ffe if destination is an IPv6 address.
‘announce_port=port’
Specify the port to send the announcements on. It defaults to
9875.
‘ttl=ttl’
Specify the time to live value for the announcements and RTP packets,
defaults to 255.
‘same_port=0|1’
If set to 1, send all RTP streams on the same port pair. If zero (the
default), all streams are sent on unique ports, with each stream on a
port 2 numbers higher than the previous.
VLC/Live555 requires this to be set to 1, to be able to receive the stream.
The RTP stack in libavformat for receiving requires all streams to be sent
on unique ports.
Example command lines follow.
To broadcast a stream on the local subnet, for watching in VLC:
avconv -re -i input -f sap sap://224.0.0.255?same_port=1 |
avconv -re -i input -f sap sap://224.0.0.255 |
avconv -re -i input -f sap sap://[ff0e::1:2:3:4] |
sap://[address][:port] |
avplay sap:// |
avplay sap://[ff0e::2:7ffe] |
srt://hostname:port[?options] |
options srt://hostname:port |
tcp://hostname:port[?options] |
avconv -i input -f format tcp://hostname:port?listen
avplay tcp://hostname:port |
tls://hostname:port |
AVOptions):
‘ca_file’
A file containing certificate authority (CA) root certificates to treat
as trusted. If the linked TLS library contains a default this might not
need to be specified for verification to work, but not all libraries and
setups have defaults built in.
‘tls_verify=1|0’
If enabled, try to verify the peer that we are communicating with.
Note, if using OpenSSL, this currently only makes sure that the
peer certificate is signed by one of the root certificates in the CA
database, but it does not validate that the certificate actually
matches the host name we are trying to connect to. (With GnuTLS,
the host name is validated as well.)
This is disabled by default since it requires a CA database to be
provided by the caller in many cases.
‘cert_file’
A file containing a certificate to use in the handshake with the peer.
(When operating as server, in listen mode, this is more often required
by the peer, while client certificates only are mandated in certain
setups.)
‘key_file’
A file containing the private key for the certificate.
‘listen=1|0’
If enabled, listen for connections on the provided port, and assume
the server role in the handshake instead of the client role.
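A minimal sketch of the two roles, assuming the protocol AVOptions above can be supplied as regular command-line options (host, port, and certificate paths are placeholders):

```shell
# Server side: listen on port 4443 and present a certificate.
avconv -i input.mkv -f mpegts -listen 1 \
    -cert_file server.crt -key_file server.key tls://0.0.0.0:4443

# Client side: connect and verify the peer against a CA bundle.
avplay -ca_file ca.crt -tls_verify 1 tls://server.example.com:4443
```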
udp://hostname:port[?options] |
connect(). In this case, the
destination address can’t be changed with ff_udp_set_remote_url later.
If the destination address isn’t known at the start, this option can
be specified in ff_udp_set_remote_url, too.
This allows finding out the source address for the packets with getsockname,
and makes writes return with AVERROR(ECONNREFUSED) if "destination
unreachable" is received.
For receiving, this gives the benefit of only receiving packets from
the specified peer address/port.
‘sources=address[,address]’
Only receive packets sent to the multicast group from one of the
specified sender IP addresses.
‘block=address[,address]’
Ignore packets sent to the multicast group from the specified
sender IP addresses.
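A sketch of source filtering (addresses hypothetical): join a multicast group but accept packets from one known sender only:

```shell
# Receive from 224.1.1.1:5000, accepting only packets sent by 192.0.2.10.
avconv -i "udp://224.1.1.1:5000?sources=192.0.2.10" out.ts
```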
Some usage examples of the udp protocol with avconv follow.
To stream over UDP to a remote endpoint:
avconv -i input -f format udp://hostname:port |
avconv -i input -f mpegts "udp://hostname:port?pkt_size=188&buffer_size=65535" |
avconv -i udp://[multicast-address]:port |
unix://filepath |
AVOptions):
‘timeout’
Timeout in ms.
‘listen’
Create the Unix socket in listening mode.
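A sketch of both ends of a Unix-socket transfer (socket path hypothetical), using the listen option documented above:

```shell
# Receiving end: create the socket in listening mode and wait for a writer.
avconv -listen 1 -i unix:///tmp/av.sock out.mkv

# Sending end: connect to the socket and stream MPEG-TS into it.
avconv -i input.mkv -f mpegts unix:///tmp/av.sock
```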
--list-bsfs.
You can disable all the bitstream filters using the configure option
--disable-bsfs, and selectively enable any bitstream filter using
the option --enable-bsf=BSF, or you can disable a particular
bitstream filter using the option --disable-bsf=BSF.
The option -bsfs of the av* tools will display the list of
all the supported bitstream filters included in your build.
Below is a description of the currently available bitstream filters.
avconv -i ../some_mjpeg.avi -c:v copy frames_%d.jpg |
avconv -i mjpeg-movie.avi -c:v copy -bsf:v mjpeg2jpeg frame_%d.jpg
exiftran -i -9 frame*.jpg
avconv -i frame_%d.jpg -c:v copy rotated.avi |
avconv and ‘-vf’ in avplay, and by the
avfilter_graph_parse()/avfilter_graph_parse2() functions defined in
‘libavfilter/avfilter.h’.
A filterchain consists of a sequence of connected filters, each one
connected to the previous one in the sequence. A filterchain is
represented by a list of ","-separated filter descriptions.
A filtergraph consists of a sequence of filterchains. A sequence of
filterchains is represented by a list of ";"-separated filterchain
descriptions.
A filter is represented by a string of the form:
[in_link_1]...[in_link_N]filter_name=arguments[out_link_1]...[out_link_M]
filter_name is the name of the filter class of which the
described filter is an instance of, and has to be the name of one of
the filter classes registered in the program.
The name of the filter class is optionally followed by a string
"=arguments".
arguments is a string which contains the parameters used to
initialize the filter instance. It may have one of two forms:
●
A ’:’-separated list of key=value pairs.
●
A ’:’-separated list of values. In this case, the keys are assumed to be
the option names in the order they are declared. E.g. the fade filter
declares three options in this order: ‘type’, ‘start_frame’ and
‘nb_frames’. Then the parameter list in:0:30 means that the value
in is assigned to the option ‘type’, 0 to
‘start_frame’ and 30 to ‘nb_frames’.
If the option value itself is a list of items (e.g. the format filter
takes a list of pixel formats), the items in the list are usually separated by
’|’.
The list of arguments can be quoted using the character "’" as initial
and ending mark, and the character ’\’ for escaping the characters
within the quoted text; otherwise the argument string is considered
terminated when the next special character (belonging to the set
"[]=;,") is encountered.
The name and arguments of the filter are optionally preceded and
followed by a list of link labels.
A link label allows you to name a link and associate it with a filter
output or input pad. The preceding labels in_link_1
... in_link_N are associated with the filter input pads,
and the following labels out_link_1 ... out_link_M are
associated with the output pads.
When two link labels with the same name are found in the
filtergraph, a link between the corresponding input and output pad is
created.
If an output pad is not labelled, it is linked by default to the first
unlabelled input pad of the next filter in the filterchain.
For example in the filterchain
nullsrc, split[L1], [L2]overlay, nullsink |
sws_flags=flags;
to the filtergraph description.
Here is a BNF description of the filtergraph syntax:
NAME             ::= sequence of alphanumeric characters and '_'
LINKLABEL        ::= "[" NAME "]"
LINKLABELS       ::= LINKLABEL [LINKLABELS]
FILTER_ARGUMENTS ::= sequence of chars (possibly quoted)
FILTER           ::= [LINKLABELS] NAME ["=" FILTER_ARGUMENTS] [LINKLABELS]
FILTERCHAIN      ::= FILTER [,FILTERCHAIN]
FILTERGRAPH      ::= [sws_flags=flags;] FILTERCHAIN [;FILTERGRAPH] |
aformat=sample_fmts=u8|s16:channel_layouts=stereo |
avconv -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex amix=inputs=3:duration=first:dropout_transition=3 OUTPUT |
# Start counting PTS from zero.
asetpts=expr=PTS-STARTPTS

# Generate timestamps by counting samples.
asetpts=expr=N/SR/TB

# Generate timestamps from a "live source" and rebase onto the current timebase.
asetpts='(RTCTIME - RTCSTART) / (TB * 1000000)' |
# Set the timebase to 1/25:
settb=1/25

# Set the timebase to 1/10:
settb=0.1

# Set the timebase to 1001/1000:
settb=1+0.001

# Set the timebase to 2*intb:
settb=2*intb

# Set the default timebase value:
settb=AVTB

# Set the timebase to twice the sample rate:
asettb=sr*2 |
avconv -i INPUT -filter_complex asplit=5 OUTPUT |
avconv -i INPUT -af atrim=60:120 |
avconv -i INPUT -af atrim=end_sample=1000 |
avconv -i in.mp3 -filter_complex channelsplit out.mkv |
avconv -i in.wav -filter_complex 'channelsplit=channel_layout=5.1[FL][FR][FC][LFE][SL][SR]' -map '[FL]' front_left.wav -map '[FR]' front_right.wav -map '[FC]' front_center.wav -map '[LFE]' low_frequency_effects.wav -map '[SL]' side_left.wav -map '[SR]' side_right.wav |
in_channel-out_channel or in_channel form. in_channel can be either the name of the input
channel (e.g. FL for front left) or its index in the input channel layout.
out_channel is the name of the output channel or its index in the output
channel layout. If out_channel is not given then it is implicitly an
index, starting with zero and increasing by one for each mapping.
If no mapping is present, the filter will implicitly map input channels to
output channels, preserving indices.
For example, assuming a 5.1+downmix input MOV file,
avconv -i in.mov -filter 'channelmap=map=DL-FL|DR-FR' out.wav |
avconv -i in.wav -filter 'channelmap=1|2|0|5|3|4:5.1' out.wav |
x0/y0|x1/y1|x2/y2|....
The input values must be in strictly increasing order but the transfer function
does not have to be monotonically rising. The point 0/0 is assumed but
may be overridden (by 0/out-dBn). Typical values for the transfer
function are -70/-70|-60/-20.
‘soft-knee’
Set the curve radius in dB for all joints. It defaults to 0.01.
‘gain’
Set the additional gain in dB to be applied at all points on the transfer
function. This allows for easy adjustment of the overall gain.
It defaults to 0.
‘volume’
Set an initial volume, in dB, to be assumed for each channel when filtering
starts. This permits the user to supply a nominal level initially, so that, for
example, a very large gain is not applied to initial signal levels before the
companding has begun to operate. A typical value for audio which is initially
quiet is -90 dB. It defaults to 0.
‘delay’
Set a delay, in seconds. The input audio is analyzed immediately, but audio is
delayed before being fed to the volume adjuster. Specifying a delay
approximately equal to the attack/decay times allows the filter to effectively
operate in predictive rather than reactive mode. It defaults to 0.
compand=.3|.3:1|1:-90/-60|-60/-40|-40/-30|-20/-20:6:0:-90:0.2 |
compand=.1|.1:.2|.2:-900/-900|-50.1/-900|-50/-50:.01:0:-90:.1 |
compand=.1|.1:.1|.1:-45.1/-45.1|-45/-900|0/-900:.01:45:-90:.1 |
input_idx.in_channel-out_channel
form. input_idx is the 0-based index of the input stream. in_channel
can be either the name of the input channel (e.g. FL for front left) or its
index in the specified input stream. out_channel is the name of the output
channel.
The filter will attempt to guess the mappings when they are not specified
explicitly. It does so by first trying to find an unused matching input channel
and if that fails it picks the first unused input channel.
Join 3 inputs (with properly set channel layouts):
avconv -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex join=inputs=3 OUTPUT |
avconv -i fl -i fr -i fc -i sl -i sr -i lfe -filter_complex 'join=inputs=6:channel_layout=5.1:map=0.0-FL|1.0-FR|2.0-FC|3.0-SL|4.0-SR|5.0-LFE' out |
avconv -i HDCD16.flac -af hdcd OUT24.flac |
-c:a pcm_s24le after the filter to get 24-bit PCM output.
avconv -i HDCD16.wav -af hdcd OUT16.wav
avconv -i HDCD16.wav -af hdcd -c:a pcm_s24le OUT24.wav |
output_volume = volume * input_volume |
volume=volume=0.5
volume=volume=1/2
volume=volume=-6.0206dB |
volume=volume=6dB:precision=fixed |
# Set the sample rate to 48000 Hz and the channel layout to CH_LAYOUT_MONO.
anullsrc=48000:4

# The same as above.
anullsrc=48000:mono |
av_get_sample_fmt_name().
‘channel_layout’
The channel layout of the audio data, in the form that can be accepted by
av_get_channel_layout().
All the parameters need to be explicitly defined.
min(w,h)/2 for the luma and alpha planes,
and of min(cw,ch)/2 for the chroma planes.
luma_power, chroma_power, and alpha_power represent
how many times the boxblur filter is applied to the corresponding
plane.
Some examples:
●
Apply a boxblur filter with the luma, chroma, and alpha radii
set to 2:
boxblur=luma_radius=2:luma_power=1 |
boxblur=2:1:0:0:0:0 |
boxblur=luma_radius=min(h\,w)/10:luma_power=1:chroma_radius=min(cw\,ch)/10:chroma_power=1 |
# Crop the central input area with size 100x100.
crop=out_w=100:out_h=100

# Crop the central input area with size 2/3 of the input video.
"crop=out_w=2/3*in_w:out_h=2/3*in_h"

# Crop the input video central square.
crop=out_w=in_h

# Delimit the rectangle with the top-left corner placed at position
# 100:100 and the right-bottom corner corresponding to the right-bottom
# corner of the input image.
crop=out_w=in_w-100:out_h=in_h-100:x=100:y=100

# Crop 10 pixels from the left and right borders, and 20 pixels from
# the top and bottom borders.
"crop=out_w=in_w-2*10:out_h=in_h-2*20"

# Keep only the bottom right quarter of the input image.
"crop=out_w=in_w/2:out_h=in_h/2:x=in_w/2:y=in_h/2"

# Crop height for getting Greek harmony.
"crop=out_w=in_w:out_h=1/PHI*in_w"

# Trembling effect.
"crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(n/10):(in_h-out_h)/2+((in_h-out_h)/2)*sin(n/7)"

# Erratic camera effect depending on timestamp.
"crop=out_w=in_w/2:out_h=in_h/2:x=(in_w-out_w)/2+((in_w-out_w)/2)*sin(t*10):y=(in_h-out_h)/2+((in_h-out_h)/2)*sin(t*13)"

# Set x depending on the value of y.
"crop=in_w/2:in_h/2:y:10+10*sin(n/10)" |
delogo=x=0:y=0:w=100:h=77:band=10 |
# Draw a black box around the edge of the input image.
drawbox

# Draw a box with color red and an opacity of 50%.
drawbox=x=10:y=20:width=200:height=60:color=red@0.5 |
--enable-libfreetype.
To enable default font fallback and the font option you need to
configure Libav with --enable-libfontconfig.
The filter also recognizes strftime() sequences in the provided text
and expands them accordingly. Check the documentation of strftime().
It accepts the following parameters:
‘font’
The font family to be used for drawing text. By default Sans.
‘fontfile’
The font file to be used for drawing text. The path must be included.
This parameter is mandatory if the fontconfig support is disabled.
‘text’
The text string to be drawn. The text must be a sequence of UTF-8
encoded characters.
This parameter is mandatory if no file is specified with the parameter
textfile.
‘textfile’
A text file containing text to be drawn. The text must be a sequence
of UTF-8 encoded characters.
This parameter is mandatory if no text string is specified with the
parameter text.
If both text and textfile are specified, an error is thrown.
‘x, y’
The offsets where text will be drawn within the video frame.
It is relative to the top/left border of the output image.
They accept expressions similar to the overlay filter:
‘x, y’
The computed values for x and y. They are evaluated for
each new frame.
‘main_w, main_h’
The main input width and height.
‘W, H’
These are the same as main_w and main_h.
‘text_w, text_h’
The rendered text’s width and height.
‘w, h’
These are the same as text_w and text_h.
‘n’
The number of frames processed, starting from 0.
‘t’
The timestamp, expressed in seconds. It’s NAN if the input timestamp is unknown.
The default value of x and y is 0.
‘draw’
Draw the text only if the expression evaluates as non-zero.
The expression accepts the same variables x, y do.
The default value is 1.
‘alpha’
Draw the text applying alpha blending. The value can
be a number between 0.0 and 1.0.
The expression accepts the same variables x, y do.
The default value is 1.
‘fontsize’
The font size to be used for drawing text.
The default value of fontsize is 16.
‘fontcolor’
The color to be used for drawing fonts.
It is either a string (e.g. "red"), or in 0xRRGGBB[AA] format
(e.g. "0xff000033"), possibly followed by an alpha specifier.
The default value of fontcolor is "black".
‘boxcolor’
The color to be used for drawing box around text.
It is either a string (e.g. "yellow") or in 0xRRGGBB[AA] format
(e.g. "0xff00ff"), possibly followed by an alpha specifier.
The default value of boxcolor is "white".
‘box’
Used to draw a box around text using the background color.
The value must be either 1 (enable) or 0 (disable).
The default value of box is 0.
‘shadowx, shadowy’
The x and y offsets for the text shadow position with respect to the
position of the text. They can be either positive or negative
values. The default value for both is "0".
‘shadowcolor’
The color to be used for drawing a shadow behind the drawn text. It
can be a color name (e.g. "yellow") or a string in the 0xRRGGBB[AA]
form (e.g. "0xff00ff"), possibly followed by an alpha specifier.
The default value of shadowcolor is "black".
‘ft_load_flags’
The flags to be used for loading the fonts.
The flags map the corresponding flags supported by libfreetype, and are
a combination of the following values:
default
no_scale
no_hinting
render
no_bitmap
vertical_layout
force_autohint
crop_bitmap
pedantic
ignore_global_advance_width
no_recurse
ignore_transform
monochrome
linear_design
no_autohint
Default value is "render".
For more information consult the documentation for the FT_LOAD_*
libfreetype flags.
‘tabsize’
The size in number of spaces to use for rendering the tab.
Default value is 4.
‘fix_bounds’
If true, check and fix text coords to avoid clipping.
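The x and y expression variables listed above combine in the usual way; for instance, text can be centered with x=(W-w)/2 and y=(H-h)/2. The arithmetic is sketched here in Python (an illustration of the documented variables, not avconv code):

```python
# Sketch: center the rendered text using the drawtext variables
# described above -- W/H are the main input size, w/h the rendered
# text size.
def centered_text_position(main_w, main_h, text_w, text_h):
    x = (main_w - text_w) // 2
    y = (main_h - text_h) // 2
    return x, y
```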
For example the command:
drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text'"

will draw "Test Text" with font FreeSerif, using the default values
for the optional parameters.

The command:

drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text':\
          x=100: y=50: fontsize=24: fontcolor=yellow@0.2: box=1: boxcolor=red@0.2"

will draw 'Test Text' with font FreeSerif of size 24 at position
x=100 and y=50, with yellow text and a red box around it, both at
an opacity of 20%.
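The 0xRRGGBB[AA] color form accepted by fontcolor, boxcolor, and shadowcolor can be decoded as in the following hypothetical Python sketch (not avconv's actual parser; the alpha component is optional):

```python
# Sketch: decode a "0xRRGGBB" or "0xRRGGBBAA" color string into
# RGBA components; alpha defaults to fully opaque when absent.
def parse_rgb_color(s):
    s = s[2:] if s.lower().startswith("0x") else s
    r, g, b = (int(s[i:i + 2], 16) for i in (0, 2, 4))
    a = int(s[6:8], 16) if len(s) == 8 else 255
    return r, g, b, a
```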
# Fade in the first 30 frames of video
fade=type=in:nb_frames=30

# Fade out the last 45 frames of a 200-frame video
fade=type=out:start_frame=155:nb_frames=45

# Fade in the first 25 frames and fade out the last 25 frames of a
# 1000-frame video
fade=type=in:start_frame=0:nb_frames=25, fade=type=out:start_frame=975:nb_frames=25

# Make the first 5 frames black, then fade in from frame 5-24
fade=type=in:start_frame=5:nb_frames=20
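For a fade-out, start_frame is simply the total frame count minus nb_frames, as in the examples above (200-45=155, 1000-25=975). A one-line Python sketch of that arithmetic:

```python
# Sketch: compute the fade-out start frame from the clip length and
# the desired fade length.
def fade_out_start(total_frames, nb_frames):
    return total_frames - nb_frames
```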
./avconv -i in.vob -vf "fieldorder=order=bff" out.dv
# Convert the input video to the "yuv420p" format
format=pix_fmts=yuv420p

# Convert the input video to any of the formats in the list
format=pix_fmts=yuv420p|yuv444p|yuv410p
# Convert left and right views into a frame-sequential video
avconv -i LEFT -i RIGHT -filter_complex framepack=frameseq OUTPUT

# Convert views into a side-by-side video with the same output
# resolution as the input
avconv -i LEFT -i RIGHT -filter_complex [0:v]scale=w=iw/2[left],[1:v]scale=w=iw/2[right],[left][right]framepack=sbs OUTPUT
FREI0R_PATH is defined, the frei0r effect is searched for in each of the
directories specified by the colon-separated list in FREI0R_PATH.
Otherwise, the standard frei0r paths are searched, in this order:
‘HOME/.frei0r-1/lib/’, ‘/usr/local/lib/frei0r-1/’,
‘/usr/lib/frei0r-1/’.
‘filter_params’
A ’|’-separated list of parameters to pass to the frei0r effect.
A frei0r effect parameter can be a boolean (its value is either
"y" or "n"), a double, a color (specified as
R/G/B, where R, G, and B are floating point
numbers between 0.0 and 1.0, inclusive, or by an av_parse_color()
color description), a position (specified as X/Y, where
X and Y are floating point numbers) or a string.
The number and types of parameters depend on the loaded effect. If an
effect parameter is not specified, the default value is set.
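The '|'-separated parameter string can be assembled mechanically from the value types above. This hypothetical helper (not part of avconv) sketches that formatting, rendering booleans as "y"/"n" and colors or positions as '/'-joined tuples:

```python
# Hypothetical sketch: build a frei0r filter_params string from
# Python values, following the parameter types described above.
def frei0r_params(*params):
    def fmt(p):
        if isinstance(p, bool):
            return "y" if p else "n"     # boolean -> y/n
        if isinstance(p, tuple):
            return "/".join(str(v) for v in p)  # color R/G/B or position X/Y
        return str(p)                    # double or string
    return "|".join(fmt(p) for p in params)
```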
Some examples:
# Apply the distort0r effect, setting the first two double parameters
frei0r=filter_name=distort0r:filter_params=0.5|0.01

# Apply the colordistance effect, taking a color as the first parameter
frei0r=colordistance:0.2/0.3/0.4
frei0r=colordistance:violet
frei0r=colordistance:0x112233

# Apply the perspective effect, specifying the top left and top right
# image positions
frei0r=perspective:0.2/0.2|0.8/0.2
# Default parameters
gradfun=strength=1.2:radius=16

# Omitting the radius
gradfun=1.2
avconv:
avconv -i in.avi -vf "hflip" out.avi
Original Original New Frame
Frame 'j' Frame 'j+1' (tff)
========== =========== ==================
Line 0 --------------------> Frame 'j' Line 0
Line 1 Line 1 ----> Frame 'j+1' Line 1
Line 2 ---------------------> Frame 'j' Line 2
Line 3 Line 3 ----> Frame 'j+1' Line 3
... ... ...
New Frame + 1 will be generated by Frame 'j+2' and Frame 'j+3' and so on
# Negate input video
lutrgb="r=maxval+minval-val:g=maxval+minval-val:b=maxval+minval-val"
lutyuv="y=maxval+minval-val:u=maxval+minval-val:v=maxval+minval-val"

# The above is the same as
lutrgb="r=negval:g=negval:b=negval"
lutyuv="y=negval:u=negval:v=negval"

# Negate luminance
lutyuv=negval

# Remove chroma components, turning the video into a graytone image
lutyuv="u=128:v=128"

# Apply a luma burning effect
lutyuv="y=2*val"

# Remove green and blue components
lutrgb="g=0:b=0"

# Set a constant alpha channel value on input
format=rgba,lutrgb=a="maxval-minval/2"

# Correct luminance gamma by a factor of 0.5
lutyuv=y=gammaval(0.5)
# Force libavfilter to use a format different from "yuv420p" for the
# input to the vflip filter
noformat=pix_fmts=yuv420p,vflip

# Convert the input video to any of the formats not contained in the list
noformat=yuv420p|yuv444p|yuv410p
cvDilate.
It accepts the parameters: struct_el|nb_iterations.
struct_el represents a structuring element, and has the syntax:
colsxrows+anchor_xxanchor_y/shape
cols and rows represent the number of columns and rows of
the structuring element, anchor_x and anchor_y the anchor
point, and shape the shape for the structuring element. shape
must be "rect", "cross", "ellipse", or "custom".
If the value for shape is "custom", it must be followed by a
string of the form "=filename". The file with name
filename is assumed to represent a binary image, with each
printable character corresponding to a bright pixel. When a custom
shape is used, cols and rows are ignored; the number
of columns and rows of the read file are assumed instead.
The default value for struct_el is "3x3+0x0/rect".
nb_iterations specifies the number of times the transform is
applied to the image, and defaults to 1.
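The custom shape file format described above, where each printable character marks a bright pixel, can be read as in this Python sketch (an illustration of the format, not avconv's loader):

```python
# Sketch: parse a custom shape file into a 0/1 matrix; any
# non-whitespace character is a bright pixel. Returns the grid and
# the implied cols/rows, which override the ones given on the
# command line.
def read_shape(lines):
    grid = [[1 if ch.strip() else 0 for ch in line.rstrip("\n")]
            for line in lines]
    rows = len(grid)
    cols = max((len(r) for r in grid), default=0)
    for r in grid:                      # pad short lines with dark pixels
        r.extend([0] * (cols - len(r)))
    return grid, cols, rows
```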
Some examples:
# Use the default values
ocv=dilate

# Dilate using a structuring element with a 5x5 cross, iterating two times
ocv=filter_name=dilate:filter_params=5x5+2x2/cross|2

# Read the shape from the file diamond.shape, iterating two times.
# The file diamond.shape may contain a pattern of characters like this:
#   *
#  ***
# *****
#  ***
#   *
# The specified columns and rows are ignored,
# but the anchor point coordinates are not
ocv=dilate:0x0+2x2/custom=diamond.shape|2
cvErode.
It accepts the parameters: struct_el:nb_iterations,
with the same syntax and semantics as the dilate filter.
cvSmooth.
# Draw the overlay at 10 pixels from the bottom right
# corner of the main video
overlay=x=main_w-overlay_w-10:y=main_h-overlay_h-10

# Insert a transparent PNG logo in the bottom left corner of the input
avconv -i input -i logo -filter_complex 'overlay=x=10:y=main_h-overlay_h-10' output

# Insert 2 different transparent PNG logos (second logo on bottom
# right corner)
avconv -i input -i logo1 -i logo2 -filter_complex 'overlay=x=10:y=H-h-10,overlay=x=W-w-10:y=H-h-10' output

# Add a transparent color layer on top of the main video;
# WxH specifies the size of the main input to the overlay filter
color=red@.3:WxH [over]; [in][over] overlay [out]

# Mask 10-20 seconds of a video by applying the delogo filter to a section
avconv -i test.avi -codec:v:0 wmv2 -ar 11025 -b:v 9000k -vf '[in]split[split_main][split_delogo];[split_delogo]trim=start=360:end=371,delogo=0:0:640:480[delogoed];[split_main][delogoed]overlay=eof_action=pass[out]' masked.avi
# Add paddings with the color "violet" to the input video. The output video
# size is 640x480, and the top-left corner of the input video is placed at
# column 0, row 40
pad=width=640:height=480:x=0:y=40:color=violet

# Pad the input to get an output with dimensions increased by 3/2,
# and put the input video at the center of the padded area
pad="3/2*iw:3/2*ih:(ow-iw)/2:(oh-ih)/2"

# Pad the input to get a squared output with size equal to the maximum
# value between the input width and height, and put the input video at
# the center of the padded area
pad="max(iw\,ih):ow:(ow-iw)/2:(oh-ih)/2"

# Pad the input to get a final w/h ratio of 16:9
pad="ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"

# Double the output size and put the input video in the bottom-right
# corner of the output padded area
pad="2*iw:2*ih:ow-iw:oh-ih"
format=monow, pixdesctest
# Scale the input video to a size of 200x100
scale=w=200:h=100

# Scale the input to 2x
scale=w=2*iw:h=2*ih

# The above is the same as
scale=2*in_w:2*in_h

# Scale the input to half the original size
scale=w=iw/2:h=ih/2

# Increase the width, and set the height to the same size
scale=3/2*iw:ow

# Seek Greek harmony
scale=iw:1/PHI*iw
scale=ih*PHI:ih

# Increase the height, and set the width to 3/2 of the height
scale=w=3/2*oh:h=3/5*ih

# Increase the size, making the size a multiple of the chroma
scale="trunc(3/2*iw/hsub)*hsub:trunc(3/2*ih/vsub)*vsub"

# Increase the width to a maximum of 500 pixels,
# keeping the same aspect ratio as the input
scale=w='min(500\, iw*3/2):h=-1'
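The "multiple of the chroma" example above scales by 3/2 and then truncates each dimension to a multiple of the horizontal and vertical chroma subsampling factors (hsub/vsub, both 2 for yuv420p). The arithmetic, sketched in Python:

```python
# Sketch of trunc(3/2*iw/hsub)*hsub : trunc(3/2*ih/vsub)*vsub, the
# chroma-aligned upscale from the scale example above.
def scale_chroma_aligned(iw, ih, hsub=2, vsub=2):
    w = int(3 / 2 * iw / hsub) * hsub
    h = int(3 / 2 * ih / vsub) * vsub
    return w, h
```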
# Select all the frames in input
select

# The above is the same as
select=expr=1

# Skip all frames
select=expr=0

# Select only I-frames
select='expr=eq(pict_type\,I)'

# Select one frame per 100
select='not(mod(n\,100))'

# Select only frames contained in the 10-20 time interval
select='gte(t\,10)*lte(t\,20)'

# Select only I-frames contained in the 10-20 time interval
select='gte(t\,10)*lte(t\,20)*eq(pict_type\,I)'

# Select frames with a minimum distance of 10 seconds
select='isnan(prev_selected_t)+gte(t-prev_selected_t\,10)'
setdar=dar=16/9

# The above is equivalent to
setdar=dar=1.77777
# Start counting the PTS from zero
setpts=expr=PTS-STARTPTS

# Fast motion
setpts=expr=0.5*PTS

# Slow motion
setpts=2.0*PTS

# Fixed rate 25 fps
setpts=N/(25*TB)

# Fixed rate 25 fps with some jitter
setpts='1/(25*TB) * (N + 0.05 * sin(N*2*PI/25))'

# Generate timestamps from a "live source" and rebase onto the current timebase
setpts='(RTCTIME - RTCSTART) / (TB * 1000000)'
setsar=sar=10/11
# Set the timebase to 1/25
settb=expr=1/25

# Set the timebase to 1/10
settb=expr=0.1

# Set the timebase to 1001/1000
settb=1+0.001

# Set the timebase to 2*intb
settb=2*intb

# Set the default timebase value
settb=AVTB
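Timebases are exact rational numbers, so decimal settb expressions like 0.1 or 1+0.001 reduce to 1/10 and 1001/1000 respectively. This can be checked with Python's Fraction (an illustration of the arithmetic, not avconv code):

```python
from fractions import Fraction

# Sketch: reduce a settb-style value to an exact rational timebase.
def as_timebase(value):
    return Fraction(value).limit_denominator(1000000)
```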
AVPictureType enum and of
the av_get_picture_type_char function defined in
‘libavutil/avutil.h’.
‘checksum’
The Adler-32 checksum of all the planes of the input frame.
‘plane_checksum’
The Adler-32 checksum of each plane of the input frame, expressed in the form
"[c0 c1 c2 c3]".
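One consistent way to get both figures is a per-plane Adler-32 plus a running Adler-32 carried across the planes; the sketch below uses Python's zlib.adler32 for this (an illustration of the checksum, not avconv's implementation):

```python
import zlib

# Sketch: per-plane Adler-32 checksums plus a whole-frame checksum
# computed by continuing the running sum across planes (Adler-32's
# initial value is 1, and zlib.adler32 accepts a running value).
def frame_checksums(planes):
    plane_sums = [zlib.adler32(p) for p in planes]
    total = 1
    for p in planes:
        total = zlib.adler32(p, total)
    return total, plane_sums
```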
avconv -i INPUT -vf shuffleplanes=0:2:1:3 OUTPUT
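The mapping in that example reads as "output plane i comes from input plane map[i]", so 0:2:1:3 swaps the second and third planes. A one-line Python sketch of the remapping:

```python
# Sketch of shuffleplanes: output plane i is input plane mapping[i].
def shuffle_planes(planes, mapping):
    return [planes[m] for m in mapping]
```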
avconv -i INPUT -filter_complex split=5 OUTPUT
L.R L.l . . -> . . l.r R.r
L.R l.L . . -> . . l.r r.R
L.R R.r . . -> . . l.r L.l
L.R r.R . . -> . . l.r l.L
avconv -i INPUT -vf trim=60:120

avconv -i INPUT -vf trim=duration=1
# Strong luma sharpen effect parameters
unsharp=luma_msize_x=7:luma_msize_y=7:luma_amount=2.5
# A strong blur of both luma and chroma parameters
unsharp=7:7:-2:7:7:-2
# Use the default values with avconv
./avconv -i in.avi -vf "unsharp" out.avi
./avconv -i in.avi -vf "vflip" out.avi
buffer=width=320:height=240:pix_fmt=yuv410p:time_base=1/24:sar=1
"color=red@0.2:qcif:10 [color]; [in][color] overlay [out]"
avconv; the ‘-filter_complex’ option fully replaces it.
It accepts the following parameters:
‘filename’
The name of the resource to read (not necessarily a file; it can also be a
device or a stream accessed through some protocol).
‘format_name, f’
Specifies the format assumed for the movie to read, and can be either
the name of a container or an input device. If not specified, the
format is guessed from movie_name or by probing.
‘seek_point, sp’
Specifies the seek point in seconds. The frames will be output
starting from this seek point. The parameter is evaluated with
av_strtod, so the numerical value may be suffixed by an IS
postfix. The default value is "0".
‘stream_index, si’
Specifies the index of the video stream to read. If the value is -1,
the most suitable video stream will be automatically selected. The default
value is "-1".
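Because seek_point goes through av_strtod, values like "1.5K" are legal. The suffix handling can be sketched in Python (a hypothetical simplification covering only a few of the International System postfixes, not avconv's actual C code):

```python
# Hypothetical sketch of av_strtod-style suffix handling: a numeric
# value optionally carries an IS postfix scaling it by a power of ten.
_SUFFIXES = {"m": 1e-3, "K": 1e3, "M": 1e6, "G": 1e9}

def parse_seek_point(s):
    if s and s[-1] in _SUFFIXES:
        return float(s[:-1]) * _SUFFIXES[s[-1]]
    return float(s)
```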
It allows overlaying a second video on top of the main input of
a filtergraph, as shown in this graph:
input -----------> deltapts0 --> overlay --> output
                                    ^
                                    |
movie --> scale--> deltapts1 -------+
# Skip 3.2 seconds from the start of the AVI file in.avi, and overlay it
# on top of the input labelled "in"
movie=in.avi:seek_point=3.2, scale=180:-1, setpts=PTS-STARTPTS [movie];
[in] setpts=PTS-STARTPTS, [movie] overlay=16:16 [out]

# Read from a video4linux2 device, and overlay it on top of the input
# labelled "in"
movie=/dev/video0:f=video4linux2, scale=180:-1, setpts=PTS-STARTPTS [movie];
[in] setpts=PTS-STARTPTS, [movie] overlay=16:16 [out]
# Generate a frei0r partik0l source with size 200x200 and framerate 10
# which is overlaid on the overlay filter's main input
frei0r_src=size=200x200:framerate=10:filter_name=partik0l:filter_params=1234 [overlay]; [in][overlay] overlay
rgbtestsrc source generates an RGB test pattern useful for
detecting RGB vs BGR issues. You should see a red, green and blue
stripe from top to bottom.
The testsrc source generates a test video pattern, showing a
color pattern, a scrolling gradient and a timestamp. This is mainly
intended for testing purposes.
The sources accept the following parameters:
‘size, s’
Specify the size of the sourced video; it may be a string of the form
widthxheight, or the name of a size abbreviation. The
default value is "320x240".
‘rate, r’
Specify the frame rate of the sourced video, as the number of frames
generated per second. It has to be a string in the format
frame_rate_num/frame_rate_den, an integer number, a floating point
number or a valid video frame rate abbreviation. The default value is
"25".
‘sar’
Set the sample aspect ratio of the sourced video.
‘duration’
Set the video duration of the sourced video. The accepted syntax is:
[-]HH[:MM[:SS[.m...]]]
[-]S+[.m...]
av_parse_time() function.
If not specified, or the expressed duration is negative, the video is
supposed to be generated forever.
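The two duration syntaxes above can be parsed with the same loop, treating each ':'-separated field as a base-60 digit; the sketch below is a simplified stand-in for av_parse_time(), not the real parser:

```python
# Sketch: parse "[-]HH[:MM[:SS[.m...]]]" or "[-]S+[.m...]" into
# seconds by folding the colon-separated fields in base 60.
def parse_duration(s):
    sign = -1 if s.startswith("-") else 1
    total = 0.0
    for part in s.lstrip("-").split(":"):
        total = total * 60 + float(part)
    return sign * total
```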
For example the following:
testsrc=duration=5.3:size=qcif:rate=10

will generate a video with a duration of 5.3 seconds, with size
176x144 and a frame rate of 10 frames per second.
;FFMETADATA1
title=bike\\shed
;this is a comment
artist=Libav troll team

[CHAPTER]
TIMEBASE=1/1000
START=0
#chapter ends at 0:01:00
END=60000
title=chapter \#1

[STREAM]
title=multi\
line