voice-balance’, ‘voice-duration’, ‘voice-pitch’, ‘voice-range’, and ‘voice-stress’.
The CSS Speech module is a community effort, and if you would like to help with implementation or with driving the specification forward along the W3C Recommendation track, please contact the editors.
●5.1. The ‘voice-volume’ property
●5.2. The ‘voice-balance’ property
●6. Speaking properties
●6.1. The ‘speak’ property
●6.2. The ‘speak-as’ property
●7. Pause properties
●7.1. The ‘pause-before’ and ‘pause-after’ properties
●7.2. The ‘pause’ shorthand property
●7.3. Collapsing pauses
●8. Rest properties
●8.1. The ‘rest-before’ and ‘rest-after’ properties
●8.2. The ‘rest’ shorthand property
●9. Cue properties
●9.1. The ‘cue-before’ and ‘cue-after’ properties
●9.2. The ‘cue’ shorthand property
●10. Voice characteristic properties
●10.1. The ‘voice-family’ property
●10.2. The ‘voice-rate’ property
●10.3. The ‘voice-pitch’ property
●10.4. The ‘voice-range’ property
●10.5. The ‘voice-stress’ property
●11. Voice duration property
●11.1. The ‘voice-duration’ property
●12. List items and counters styles
●13. Inserted and replaced content
●14. Pronunciation, phonemes
●Appendix A — Property index
●Appendix B — Index
●Appendix C — Definitions
●Glossary
●Conformance
●CR exit criteria
●Appendix D — Acknowledgements
●Appendix E — Changes from previous draft
●Appendix F — References
●Normative references
●Other references
media attribute of the link element, or with the @media at-rule, or within an @import statement. When doing so, the styles authored within the scope of such conditional statements are ignored by user-agents that do not support this module.
span inherits the voice-family from its parent paragraph).

h1, h2, h3, h4, h5, h6 {
  voice-family: paul;
  voice-stress: moderate;
  cue-before: url(../audio/ping.wav);
  voice-volume: medium 6dB;
}
p.heidi {
  voice-family: female;
  voice-balance: left;
  voice-pitch: high;
  voice-volume: -6dB;
}
p.peter {
  voice-family: male;
  voice-balance: right;
  voice-rate: fast;
}
span.special {
  voice-volume: soft;
  pause-after: strong;
}
...
<h1>I am Paul, and I speak headings.</h1>
<p class="heidi">Hello, I am Heidi.</p>
<p class="peter">
  <span class="special">Can you hear me ?</span>
  I am Peter.
</p>
rest’, ‘cue’ and ‘pause’ properties (from the innermost to the outermost position). These can be seen as aural equivalents to ‘padding’, ‘border’ and ‘margin’, respectively. When used, the ‘:before’ and ‘:after’ pseudo-elements [CSS21] get inserted between the element's contents and the ‘rest’.
The following diagram illustrates the equivalence between properties of
the visual and aural box models, applied to the selected <element>:
5.1. The ‘voice-volume’ property

Name: voice-volume
Value: silent | [[x-soft | soft | medium | loud | x-loud] || <decibel>]
Initial: medium
Applies to: all elements
Inherited: yes
Percentages: N/A
Media: speech
Computed value: a keyword value, and optionally also a decibel offset (if not zero)
The ‘voice-volume’ property allows authors to control the amplitude of the audio waveform generated by the speech synthesizer, and is also used to adjust the relative volume level of audio cues within the aural "box" model.
Note that the functionality provided by this property is related to the volume attribute of the prosody element from the SSML markup language [SSML].
silent
Specifies that no sound is generated (the text is read "silently"). Corresponds to negative infinity in dB units.
Note that there is a difference between an element whose ‘voice-volume’ property has a value of ‘silent’, and an element whose ‘speak’ property has the value ‘none’. With the former, the selected element takes up the same time as if it were spoken, including any pause before and after the element, but no sound is generated (descendants can override the ‘voice-volume’ value and may therefore generate audio output). With the latter, the selected element is not rendered in the aural dimension and no time is allocated for playback (descendants can override the ‘speak’ value and may therefore generate audio output).
x-soft, soft, medium, loud, x-loud
This sequence of keywords corresponds to monotonically non-decreasing volume levels, mapped to implementation-dependent values (i.e. inferred by the user-agent) that meet the user's requirements in terms of perceived sound loudness. The keyword ‘x-soft’ maps to the user's minimum audible volume level, ‘x-loud’ maps to the user's maximum tolerable volume level, ‘medium’ maps to the user's preferred volume level, ‘soft’ and ‘loud’ map to intermediary values.
<decibel>
A number immediately followed by "dB" (decibel unit). This represents a change (positive or negative) relative to the given keyword value (see enumeration above), or to the default value for the root element, or otherwise to the inherited volume level (which may itself be a combination of a keyword value and of a decibel offset, in which case the decibel values are combined additively). When the inherited volume level is ‘silent’, this ‘voice-volume’ resolves to ‘silent’ too, regardless of the specified <decibel> value. Decibels represent the ratio of the squares of the new signal amplitude (a1) and the current amplitude (a0), as per the following logarithmic equation:
volume(dB) = 20 × log10(a1 / a0)
Note that -6.0dB is approximately half the amplitude of the audio signal, and +6.0dB is approximately twice the amplitude.
Note that the actual perceived volume levels depend on various factors, such as the listening environment and personal user preferences. The effective volume variation between ‘x-soft’ and ‘x-loud’ represents the dynamic range (in terms of loudness) of the speech output. Typically, this range would be compressed in a noisy context, i.e. the perceived loudness corresponding to ‘x-soft’ would effectively be closer to ‘x-loud’ than it would be in a quiet environment. There may also be situations where both ‘x-soft’ and ‘x-loud’ would map to low volume levels, such as in listening environments requiring discretion (e.g. library, night-reading).
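The keyword-plus-offset syntax described above can be sketched as follows (a hypothetical example; the selectors and class names are invented for illustration and are not part of this specification):

```css
/* Hypothetical example: keyword values with optional decibel offsets. */
body      { voice-volume: medium; }       /* the user's preferred level */
em        { voice-volume: loud; }         /* a louder keyword step */
.whisper  { voice-volume: x-soft -3dB; }  /* keyword plus a negative offset */
.redacted { voice-volume: silent; }       /* time is allocated, but no sound */
```

Since decibel offsets combine additively with the inherited level, an element with ‘voice-volume: -6dB’ nested inside ‘.whisper’ would compute to ‘x-soft -9dB’.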
5.2. The ‘voice-balance’ property

Name: voice-balance
Value: <number> | left | center | right | leftwards | rightwards
Initial: center
Applies to: all elements
Inherited: yes
Percentages: N/A
Media: speech
Computed value: the specified value resolved to a <number> between ‘-100’ and ‘100’ (inclusive)
The ‘voice-balance’ property controls the spatial distribution of audio output across a lateral sound stage: one extremity is on the left, the other extremity is on the right hand side, relative to the listener's position. Authors can specify intermediary steps between left and right extremities, to represent the audio separation along the resulting left-right axis.
Note that the functionality provided by this property has no match in the SSML markup language [SSML].
<number>
A number between ‘-100’ and ‘100’ (inclusive). Values smaller than ‘-100’ are clamped to ‘-100’. Values greater than ‘100’ are clamped to ‘100’. The value ‘-100’ represents the left side, and the value ‘100’ represents the right side. The value ‘0’ represents the center point whereby there is no discernible audio separation between left and right sides (in a stereo sound system, this corresponds to equal distribution of audio signals between left and right speakers).
left
Same as ‘-100’.
center
Same as ‘0’.
right
Same as ‘100’.
leftwards
Moves the sound to the left, by subtracting 20 from the inherited ‘voice-balance’ value, and by clamping the resulting number to ‘-100’.
rightwards
Moves the sound to the right, by adding 20 to the inherited ‘voice-balance’ value, and by clamping the resulting number to ‘100’.
User agents may be connected to different kinds of sound systems, featuring varying audio mixing capabilities. The expected behavior for mono, stereo, and surround sound systems is defined as follows:
●When user-agents produce audio via a mono-aural sound system (i.e. single-speaker setup), the ‘voice-balance’ property has no effect.
●When user-agents produce audio through a stereo sound system (e.g. two speakers, a pair of headphones), the left-right distribution of audio signals can precisely match the authored values for the ‘voice-balance’ property.
●When user-agents are capable of mixing audio signals through more than 2 channels (e.g. a 5-speaker surround sound system, including a dedicated center channel), the physical distribution of audio signals resulting from the application of the ‘voice-balance’ property should be performed so that the listener perceives sound as if it were coming from a basic stereo layout. For example, the center channel as well as the left/right speakers may be used together in order to emulate the behavior of the ‘center’ value.
Future revisions of the CSS Speech module may include support for three-dimensional audio, which would effectively enable authors to specify "azimuth" and "elevation" values. In the future, content authored using the current specification may therefore be consumed by user-agents which are compliant with the version of CSS Speech that supports three-dimensional audio. In order to prepare for this possibility, the values enabled by the current ‘voice-balance’ property are designed to remain compatible with "azimuth" angles. More precisely, the mapping between the current left-right audio axis (lateral sound stage) and the envisioned 360 degrees plane around the listener's position is defined as follows:
●The value ‘0’ maps to zero degrees (‘center’). This is in "front" of the listener, not "behind".
●The value ‘-100’ maps to -40 degrees (‘left’). Negative angles are in the counter-clockwise direction (the audio stage is seen from the top).
●The value ‘100’ maps to 40 degrees (‘right’). Positive angles are in the clockwise direction (the audio stage is seen from the top).
●Intermediary values on the scale from ‘-100’ to ‘100’ map to the angles between -40 and 40 degrees in a numerically linearly-proportional manner. For example, ‘-50’ maps to -20 degrees.
Note that sound systems may be configured by users in such a way that it would interfere with the left-right audio distribution specified by document authors. Typically, the various "surround" modes available in modern sound systems (including systems based on basic stereo speakers) tend to greatly alter the perceived spatial arrangement of audio signals. The illusion of a three-dimensional sound stage is often achieved using a combination of phase shifting, digital delay, volume control (channel mixing), and other techniques. Some users may even configure their system to "downgrade" any rendered sound to a single mono channel, in which case the effect of the ‘voice-balance’ property would obviously not be perceivable at all. The rendering fidelity of authored content is therefore dependent on such user customizations, and the ‘voice-balance’ property merely specifies the desired end-result.
Note that many speech synthesizers only generate mono sound, and therefore do not intrinsically support the ‘voice-balance’ property. The sound distribution along the left-right axis consequently occurs at the post-synthesis stage (when the speech-enabled user-agent mixes the various audio sources authored within the document).
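The value types described above can be sketched as follows (a hypothetical example; the class names are invented for illustration and are not part of this specification):

```css
/* Hypothetical example: distributing voices across the stereo stage. */
.narrator  { voice-balance: center; }     /* ‘0’: equal left/right distribution */
.speaker-a { voice-balance: -75; }        /* mostly to the left */
.speaker-b { voice-balance: right; }      /* same as ‘100’ */
.aside     { voice-balance: leftwards; }  /* inherited value minus 20, clamped to -100 */
```

For instance, an ‘.aside’ element nested inside ‘.speaker-a’ would compute to -95 (that is, -75 minus 20), while a second level of nesting would clamp at -100.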
6.1. The ‘speak’ property

Name: speak
Value: auto | none | normal
Initial: auto
Applies to: all elements
Inherited: yes
Percentages: N/A
Media: speech
Computed value: specified value
The ‘speak’ property determines whether or not to render text aurally.
Note that the functionality provided by this property has no match in the SSML markup language [SSML].
auto
Resolves to a computed value of ‘none’ when ‘display’ is ‘none’, otherwise resolves to a computed value of ‘auto’ which yields a used value of ‘normal’.
Note that the ‘none’ value of the ‘display’ property cannot be overridden by descendants of the selected element, but the ‘auto’ value of ‘speak’ can however be overridden using either of ‘none’ or ‘normal’.
none
This value causes an element (including pauses, cues, rests and actual content) to not be rendered (i.e., the element has no effect in the aural dimension).
Note that any of the descendants of the affected element are allowed to override this value, so descendants can actually take part in the aural rendering despite using ‘none’ at this level. However, the pauses, cues, and rests of the ancestor element remain "deactivated" in the aural dimension, and therefore do not contribute to the collapsing of pauses or additive behavior of adjoining rests.
normal
The element is rendered aurally (regardless of its ‘display’ value and the ‘display’ and ‘speak’ values of its ancestors).
Note that using this value can result in the element being rendered in the aural dimension even though it would not be rendered on the visual canvas.
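The interaction between ‘display’ and ‘speak’ can be sketched as follows (a hypothetical example; the class names are invented for illustration and are not part of this specification):

```css
/* Hypothetical example: decoupling aural rendering from visual rendering. */
.spoken-only { display: none; speak: normal; }  /* spoken, but not displayed */
.decorative  { speak: none; }                   /* displayed, but not spoken */
.decorative em { speak: normal; }               /* a descendant overrides ‘none’ */
```

Note that with ‘speak: auto’ (the initial value), ‘display: none’ alone would suppress both renderings; the explicit ‘speak: normal’ on ‘.spoken-only’ is what re-enables the aural one.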
6.2. The ‘speak-as’ property

Name: speak-as
Value: normal | spell-out || digits || [ literal-punctuation | no-punctuation ]
Initial: normal
Applies to: all elements
Inherited: yes
Percentages: N/A
Media: speech
Computed value: specified value
The ‘speak-as’ property determines in what manner text gets rendered aurally, based upon a basic predefined list of possible values.
Note that the functionality provided by this property is related to the say-as element from the SSML markup language [SSML], whose values are described in the [SSML-SAYAS] W3C Note.
normal
Uses language-dependent pronunciation rules for rendering the element's content. For example, punctuation is not spoken as-is, but instead rendered naturally as appropriate pauses.
spell-out
Spells the text one letter at a time (useful for acronyms and abbreviations). In languages where accented characters are rare, it is permitted to drop accents in favor of alternative unaccented spellings. As an example, in English, the word "rôle" can also be written as "role". A conforming implementation would thus be able to spell out "rôle" as "R O L E".
digits
Speaks numbers one digit at a time; for instance, "twelve" would be spoken as "one two", and "31" as "three one".
Speech synthesizers are knowledgeable about what is and what is not a number. The ‘speak-as’ property enables authors to control how the user-agent renders numbers, and may be implemented as a preprocessing step before passing the text to the actual speech synthesizer.
literal-punctuation
Punctuation such as semicolons, braces, and so on is named aloud (i.e. spoken literally) rather than rendered naturally as appropriate pauses.
no-punctuation
Punctuation is not rendered: neither spoken nor rendered as pauses.
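Per the value grammar above, ‘spell-out’, ‘digits’, and one of the punctuation keywords may be combined. A hypothetical sketch (the selectors and class names are invented for illustration and are not part of this specification):

```css
/* Hypothetical example: combining ‘speak-as’ values. */
abbr       { speak-as: spell-out; }            /* "W3C" spelled letter by letter */
.serial    { speak-as: digits; }               /* "31" spoken as "three one" */
code       { speak-as: literal-punctuation; }  /* ";" named aloud */
.id-number { speak-as: digits literal-punctuation; }
```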
7.1. The ‘pause-before’ and ‘pause-after’ properties

Name: pause-before
Value: <time> | none | x-weak | weak | medium | strong | x-strong
Initial: none
Applies to: all elements
Inherited: no
Percentages: N/A
Media: speech
Computed value: specified value

Name: pause-after
Value: <time> | none | x-weak | weak | medium | strong | x-strong
Initial: none
Applies to: all elements
Inherited: no
Percentages: N/A
Media: speech
Computed value: specified value
The ‘pause-before’ and ‘pause-after’ properties specify a prosodic boundary (silence with a specific duration) that occurs before (or after) the speech synthesis rendition of the selected element, or if any ‘cue-before’ (or ‘cue-after’) is specified, before (or after) the cue within the audio "box" model.
Note that the functionality provided by these properties is related to the break element from the SSML markup language [SSML].
<time>
Expresses the pause in absolute time units (seconds and milliseconds, e.g. "+3s", "250ms"). Only non-negative values are allowed.
none
Equivalent to 0ms (no prosodic break is produced by the speech processor).
x-weak, weak, medium, strong, and x-strong
Expresses the pause by the strength of the prosodic break in speech output. The exact time is implementation-dependent. The values indicate monotonically non-decreasing (conceptually increasing) break strength between elements.
Note that stronger content boundaries are typically accompanied by pauses. For example, the breaks between paragraphs are typically much more substantial than the breaks between words within a sentence.
This example illustrates how the default strengths of prosodic breaks
for specific elements (which are defined by the user-agent stylesheet)
can be overridden by authored styles.
p { pause: none } /* pause-before: none; pause-after: none */
7.2. The ‘pause’ shorthand property

Name: pause
Value: <‘pause-before’> <‘pause-after’>?
Initial: N/A (see individual properties)
Applies to: all elements
Inherited: no
Percentages: N/A
Media: speech
Computed value: N/A (see individual properties)
The ‘pause’ property is a shorthand property for ‘pause-before’ and ‘pause-after’. If two values are given, the first value is ‘pause-before’ and the second is ‘pause-after’. If only one value is given, it applies to both properties.
Examples of property values:
h1 { pause: 20ms; }      /* pause-before: 20ms; pause-after: 20ms */
h2 { pause: 30ms 40ms; } /* pause-before: 30ms; pause-after: 40ms */
h3 { pause-after: 10ms; } /* pause-before: unspecified; pause-after: 10ms */
7.3. Collapsing pauses

The following pauses are adjoining, and collapse into a single pause:
(1) The ‘pause-after’ of an aural "box" and the ‘pause-after’ of its last child, provided the former has no ‘rest-after’ and no ‘cue-after’.
(2) The ‘pause-before’ of an aural "box" and the ‘pause-before’ of its first child, provided the former has no ‘rest-before’ and no ‘cue-before’.
(3) The ‘pause-after’ of an aural "box" and the ‘pause-before’ of its next sibling.
(4) The ‘pause-before’ and ‘pause-after’ of an aural "box", if the "box" has a ‘voice-duration’ of "0ms" and no ‘rest-before’ or ‘rest-after’ and no ‘cue-before’ or ‘cue-after’, or if the "box" has no rendered content at all (see ‘speak’).
A collapsed pause is considered adjoining to another pause if any of its
component pauses is adjoining to that pause.
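Case (3) above, the most common one, can be sketched as follows (a hypothetical example, assuming the usual collapsing rule that the strongest of the adjoining pauses is the one rendered; the class names are invented for illustration):

```css
/* Hypothetical example: the ‘pause-after’ of one paragraph adjoins the
   ‘pause-before’ of its next sibling, so the two collapse into a single
   pause rather than being played back to back. */
p.first  { pause-after: strong; }
p.second { pause-before: weak; }  /* collapses with the ‘strong’ pause above */
```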
Note that ‘pause’ has been moved from between the element's contents and any ‘cue’ to outside the ‘cue’. This is not backwards compatible with the informative CSS2.1 Aural appendix [CSS21].
8.1. The ‘rest-before’ and ‘rest-after’ properties

Name: rest-before
Value: <time> | none | x-weak | weak | medium | strong | x-strong
Initial: none
Applies to: all elements
Inherited: no
Percentages: N/A
Media: speech
Computed value: specified value

Name: rest-after
Value: <time> | none | x-weak | weak | medium | strong | x-strong
Initial: none
Applies to: all elements
Inherited: no
Percentages: N/A
Media: speech
Computed value: specified value
The ‘rest-before’ and ‘rest-after’ properties specify a prosodic boundary (silence with a specific duration) that occurs before (or after) the speech synthesis rendition of an element within the audio "box" model.
Note that the functionality provided by these properties is related to the break element from the SSML markup language [SSML].
<time>
Expresses the rest in absolute time units (seconds and milliseconds, e.g. "+3s", "250ms"). Only non-negative values are allowed.
none
Equivalent to 0ms (no prosodic break is produced by the speech processor).
x-weak, weak, medium, strong, and x-strong
Expresses the rest by the strength of the prosodic break in speech output. The exact time is implementation-dependent. The values indicate monotonically non-decreasing (conceptually increasing) break strength between elements.
As opposed to pause properties, the rest is inserted between the element's content and any ‘cue-before’ or ‘cue-after’ content. Adjoining rests are treated additively, and do not collapse.
8.2. The ‘rest’ shorthand property

Name: rest
Value: <‘rest-before’> <‘rest-after’>?
Initial: N/A (see individual properties)
Applies to: all elements
Inherited: no
Percentages: N/A
Media: speech
Computed value: N/A (see individual properties)
The ‘rest’ property is a shorthand for ‘rest-before’ and ‘rest-after’. If two values are given, the first value is ‘rest-before’ and the second is ‘rest-after’. If only one value is given, it applies to both properties.
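By analogy with the ‘pause’ shorthand examples earlier, the ‘rest’ shorthand can be sketched as follows (hypothetical values, not taken from the specification):

```css
/* Hypothetical example: shorthand vs longhand rests. */
h1 { rest: 50ms; }        /* rest-before: 50ms; rest-after: 50ms */
h2 { rest: strong weak; } /* rest-before: strong; rest-after: weak */
```

Unlike pauses, these rests sit inside any cues and are additive with adjoining rests rather than collapsing.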
9.1. The ‘cue-before’ and ‘cue-after’ properties

Name: cue-before
Value: <uri> <decibel>? | none
Initial: none
Applies to: all elements
Inherited: no
Percentages: N/A
Media: speech
Computed value: specified value

Name: cue-after
Value: <uri> <decibel>? | none
Initial: none
Applies to: all elements
Inherited: no
Percentages: N/A
Media: speech
Computed value: specified value
The ‘cue-before’ and ‘cue-after’ properties specify auditory icons (i.e. pre-recorded / pre-generated sound clips) to be played before (or after) the selected element within the audio "box" model.
Note that the functionality provided by these properties is related to the audio element from the SSML markup language [SSML].
<uri>
The URI designates an auditory icon resource. When a user agent is not able to render the specified auditory icon (e.g. missing file resource, or unsupported audio codec), it is recommended to produce an alternative cue, such as a bell sound.
none
Specifies that no auditory icon is used.
<decibel>
A number immediately followed by "dB" (decibel unit). This represents a change (positive or negative) relative to the computed value of the ‘voice-volume’ property within the aural "box" model of the selected element. Decibels express the ratio of the squares of the new signal amplitude (a1) and the current amplitude (a0), as per the following logarithmic equation: volume(dB) = 20 × log10(a1 / a0)
When the ‘voice-volume’ property is set to ‘silent’, the audio cue is also set to ‘silent’ (regardless of the specified <decibel> value). Otherwise (when not ‘silent’), ‘voice-volume’ values are always specified relatively to the volume level keywords, which map to a user-configured scale of "preferred" loudness settings (see the definition of ‘voice-volume’). If the inherited ‘voice-volume’ value already contains a decibel offset, the dB offset specific to the audio cue is combined additively.
The desired effect of an audio cue set at +0dB is that the volume level during playback of the pre-recorded / pre-generated audio signal is effectively the same as the loudness of live (i.e. real-time) speech synthesis rendition. In order to achieve this effect, speech processors must be capable of directly controlling the waveform amplitude of generated text-to-speech audio, user agents must be able to adjust the volume output of audio cues (i.e. amplify or attenuate audio signals based on the intrinsic waveform amplitude of digitized sound clips), and last but not least, authors must ensure that the "normal" volume level of pre-recorded audio cues (on average, as there may be discrete loudness variations due to changes in the audio stream, such as intonation, stress, etc.) matches that of a "typical" TTS voice output (based on the ‘voice-family’ intended for use), given standard listening conditions (i.e. default system volume levels, centered equalization across the frequency spectrum). This latter prerequisite sets a baseline that enables a user agent to align the volume outputs of both TTS and cue audio streams within the same aural "box" model.
Due to the complex relationship between perceived audio characteristics and the processing applied to the digitized audio signal, we simplify the definition of "normal" volume levels by referring to a canonical recording scenario, whereby the attenuation is typically indicated in decibels, ranging from 0dB (maximum audio input, near the clipping threshold) to -60dB (total silence). In this common context, a "standard" audio clip would oscillate between these values: the loudest peak levels would be close to -3dB (to avoid distortion), and the relevant audible passages would have average (RMS) volume levels as high as possible (i.e. not too quiet, to avoid background noise during amplification). This would roughly provide an audio experience that could be seamlessly combined with text-to-speech output (i.e. there would be no discernible difference in volume levels when switching from pre-recorded audio to speech synthesis). Although there exists no industry-wide standard to support such a convention, TTS engines usually generate comparably-loud audio signals when no gain or attenuation is specified. For voice and soft music, -15dB RMS is fairly standard.
Note that -6.0dB is approximately half the amplitude of the audio signal, and +6.0dB is approximately twice the amplitude.
Note that there is a difference between an audio cue whose volume is set to ‘silent’ and one whose value is ‘none’. In the former case, the audio cue takes up the same time as if it had been played, but no sound is generated. In the latter case, there is no manifestation of the audio cue at all (i.e. no time is allocated for the cue in the aural dimension).
Examples of property values:
a { cue-before: url(/audio/bell.aiff) -3dB; cue-after: url(dong.wav); }
h1 { cue-before: url(../clips-1/pop.au) +6dB; cue-after: url(../clips-2/pop.au) 6dB; }
div.caution { cue-before: url(./audio/caution.wav) +8dB; }
9.2. The ‘cue’ shorthand property

Name: cue
Value: <‘cue-before’> <‘cue-after’>?
Initial: N/A (see individual properties)
Applies to: all elements
Inherited: no
Percentages: N/A
Media: speech
Computed value: N/A (see individual properties)
The ‘cue’ property is a shorthand for ‘cue-before’ and ‘cue-after’. If two values are given, the first value is ‘cue-before’ and the second is ‘cue-after’. If only one value is given, it applies to both properties.
Example of shorthand notation:
h1 { cue-before: url(pop.au); cue-after: url(pop.au); }
/* ...is equivalent to: */
h1 { cue: url(pop.au); }
10.1. The ‘voice-family’ property

Name: voice-family
Value: [[<name> | <generic-voice>],]* [<name> | <generic-voice>] | preserve
Initial: implementation-dependent
Applies to: all elements
Inherited: yes
Percentages: N/A
Media: speech
Computed value: specified value
The ‘voice-family’ property specifies a prioritized list of component values that are separated by commas to indicate that they are alternatives (this is analogous to ‘font-family’ in visual style sheets). Each component value potentially designates a speech synthesis voice instance, by specifying match criteria (see the voice selection section on this topic).
<generic-voice> = [<age>? <gender> <integer>?]
Note that the functionality provided by this property is related to the voice element from the SSML markup language [SSML].
<name>
Values are specific voice instances (e.g., Mike, comedian, mary, carlos2, "valley girl"). Voice names must either be given quoted as strings, or unquoted as a sequence of one or more identifiers.
Note that as a result, most punctuation characters, as well as digits at the start of each token, must be escaped in unquoted voice names.
If a sequence of identifiers is given as a voice name, the computed value is the name converted to a string by joining all the identifiers in the sequence by single spaces.
Voice names that happen to be the same as the gender keywords (‘male’, ‘female’ and ‘neutral’) or that happen to match the keywords ‘inherit’ or ‘preserve’ must be quoted to disambiguate them from these keywords. The keywords ‘initial’ and ‘default’ are reserved for future use and must also be quoted when used as voice names.
Note that in [SSML], voice names are space-separated and cannot contain whitespace characters.
It is recommended to quote voice names that contain white space, digits, or punctuation characters other than hyphens, even if these voice names are valid in unquoted form, in order to improve code clarity. For example:
voice-family: "john doe", "Henry the-8th";
<age>
Possible values are ‘child’, ‘young’ and ‘old’, indicating the preferred age category to match during voice selection. The mapping with [SSML] ages is defined as follows: ‘child’ = 6 y/o, ‘young’ = 24 y/o, ‘old’ = 75 y/o (note that more flexible age ranges may be used by the processor-dependent voice-matching algorithm).
Note that the interpretation of the relationship between a person's age and a recognizable type of voice cannot realistically be defined in a universal manner, as it effectively depends on numerous criteria (cultural, linguistic, biological, etc.). The values provided by this specification therefore represent a simplified model that can be reasonably applied to a broad variety of speech contexts, albeit at the cost of a certain degree of approximation. Future versions of this specification may refine the level of precision of the voice-matching algorithm, as speech processor implementations become more standardized.
<gender>
One of the keywords ‘male’, ‘female’, or ‘neutral’, specifying a male, female, or neutral voice, respectively.
<integer>
An integer indicating the preferred variant (e.g. "the second male child voice"). Only positive integers (i.e. excluding zero) are allowed. The value "1" refers to the first of all matching voices.
preserve
Indicates that the ‘voice-family’ value gets inherited and used regardless of any potential language change within the content markup (see the section below about voice selection and language handling). This value behaves as ‘inherit’ when applied to the root element.
Note that descendants of the selected element automatically inherit the ‘preserve’ value, unless it is explicitly overridden by other ‘voice-family’ values (e.g. name, gender, age).
Examples of invalid declarations:
voice-family: john/doe;   /* forward slash character should be escaped */
voice-family: john "doe"; /* identifier sequence cannot contain strings */
voice-family: john!;      /* exclamation mark should be escaped */
voice-family: john@doe;   /* "at" character should be escaped */
voice-family: #john;      /* identifier cannot start with hash character */
voice-family: john 1st;   /* identifier cannot start with digit */
The ‘voice-family’ property is used to guide the selection of the speech synthesis voice instance. As part of this selection process, speech-capable user agents must also take into account the language of the selected element within the markup content. The "name", "gender", "age", and preferred "variant" (index) are voice selection hints that get carried down the content hierarchy as the ‘voice-family’ property value gets inherited by descendant elements. At any point within the content structure, the language takes precedence (i.e. has a higher priority) over the specified CSS voice characteristics.
The following list outlines the voice selection algorithm (note that the definition of "language" is loose here, in order to cater for dialectal variations):
(一)If only a single voice instance is available for the language of the
selected content, then this voice must be used, regardless of the
specified CSS voice characteristics.
(二)If several voice instances are available for the language of the
selected content, then the chosen voice is the one that most closely
matches the specified name, or gender, age, and preferred voice variant.
The actual definition of "best match" is processor-dependent. For
example, in a system that only has male and female adult voices
available, a reasonable match for "voice-family: young male" may well be
a higher-pitched female voice, as this tone of voice would be close to
that of a young boy. If no voice instance matches the characteristics
provided by any of the ‘voice-family’ component values, the first
available voice instance (amongst those suitable for the language of the
selected content) must be used.
3. If no voice is available for the language of the selected content, it
is recommended that user-agents let the user know about the lack of
appropriate TTS voice.
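The fallback behavior described by this algorithm can be sketched with a comma-separated ‘voice-family’ list (the voice name "allison" below is purely illustrative, not part of this specification):

```css
p {
  /* If a voice named "allison" exists for the language of the content,
     it is chosen; otherwise the processor picks the closest match to a
     young female voice; failing that, any female voice; and as a last
     resort, the first voice available for the language. */
  voice-family: allison, young female, female;
}
```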
The speech synthesizer voice must be re-evaluated (i.e. the selection
process must take place once again) whenever any of the CSS voice
characteristics change within the content flow. The voice must also be
re-calculated whenever the content language changes, unless the
‘preserve’ keyword is used (this
may be useful in cases where embedded foreign language text can be spoken
using a voice not designed for this language, as demonstrated by the
example below).
Note that dynamically computing a voice may lead to
unexpected lag, so user-agents should try to resolve concrete voice
instances in the document tree before the playback starts.
Examples of property values:
h1         { voice-family: announcer, old male; }
p.romeo    { voice-family: romeo, young male; }
p.juliet   { voice-family: juliet, young female; }
p.mercutio { voice-family: young male; }
p.tybalt   { voice-family: young male; }
p.nurse    { voice-family: amelie; }
...
<p class="romeo" xml:lang="en-US">
  The French text below will be spoken with an English voice:
  <span style="voice-family: preserve;" xml:lang="fr-FR">Bonjour monsieur !</span>
  The English text below will be spoken with a voice different than that
  corresponding to the class "romeo" (which is inherited from the "p"
  parent element):
  <span style="voice-family: female;">Hello sir!</span>
</p>
10.2. The ‘voice-rate’ property
Name: voice-rate
Value: [normal | x-slow | slow | medium | fast | x-fast] || <percentage>
Initial: normal
Applies to: all elements
Inherited: yes
Percentages: refer to default value
Media: speech
Computed value: a keyword value, and optionally also a percentage relative to the keyword (if not 100%)
The ‘voice-rate’ property manipulates the rate
of generated synthetic speech in terms of words per minute.
Note that the functionality provided by this property is
related to the rate attribute of the prosody element from the SSML markup
language [SSML].
normal
Represents the default rate produced by the speech synthesizer for the
currently active voice. This is processor-specific and depends on the
language, dialect and on the "personality" of the voice.
x-slow, slow,
medium, fast and
x-fast
A sequence of monotonically non-decreasing speaking rates that are
implementation- and voice-specific. For example, typical values for the
English language are (in words per minute) x-slow = 80, slow = 120,
medium = between 180 and 200, fast = 500.
<percentage>
Only non-negative percentage values are
allowed. This represents a change relative to the given keyword value
(see enumeration above), or to the default value for the root element,
or otherwise to the inherited speaking rate (which may itself be a
combination of a keyword value and of a percentage, in which case
percentages are combined multiplicatively). For example, 50% means that
the speaking rate gets multiplied by 0.5 (half the value).
Examples of inherited values:
<body>
  <e1>
    <e2>
      <e3> ... </e3>
    </e2>
  </e1>
</body>

body { voice-rate: inherit; }
/* the initial value is 'normal' (the actual speaking rate value depends
   on the active voice) */

e1 { voice-rate: +50%; }
/* the computed value is ['normal' and 50%], which will resolve to the
   rate corresponding to 'normal' multiplied by 0.5 (half the speaking rate) */

e2 { voice-rate: fast 120%; }
/* the computed value is ['fast' and 120%], which will resolve to the
   rate corresponding to 'fast' multiplied by 1.2 (1.2 times the speaking rate) */

e3 {
  voice-rate: normal;
  /* "resets" the speaking rate to the intrinsic voice value; the
     computed value is 'normal' (see comment below for actual value) */
  voice-family: "another-voice";
}
/* because the voice is different, the calculated speaking rate may vary
   compared to "body" (even though the computed 'voice-rate' value is
   the same) */
10.3. The ‘voice-pitch’ property
Name: voice-pitch
Value: <frequency> && absolute | [[x-low | low | medium | high | x-high] || [<frequency> | <semitones> | <percentage>]]
Initial: medium
Applies to: all elements
Inherited: yes
Percentages: refer to inherited value
Media: speech
Computed value: one of the predefined pitch keywords if only the keyword is specified by itself, otherwise an absolute frequency calculated by converting the keyword value (if any) to a fixed frequency based on the current voice-family and by applying the specified relative offset (if any)
The ‘voice-pitch’ property specifies the
"baseline" pitch of the generated speech output, which depends on the used
‘voice-family’ instance, and varies across
speech synthesis processors (it approximately corresponds to the average
pitch of the output). For example, the common pitch for a male voice is
around 120Hz, whereas it is around 210Hz for a female voice.
Note that the functionality provided by this property is
related to the pitch attribute of the prosody element from the SSML markup
language [SSML].
<frequency>
A value in frequency units (Hertz or
kiloHertz, e.g. "100Hz", "+2kHz"). Values are restricted to positive
numbers when the ‘absolute’ keyword is specified. Otherwise (when the
‘absolute’ keyword is not specified), a
negative value represents a decrement, and a positive value represents
an increment, relative to the inherited value. For example, "2kHz" is a
positive offset (strictly equivalent to "+2kHz"), and "+2kHz absolute"
is an absolute frequency (strictly equivalent to "2kHz absolute").
absolute
If specified, this keyword indicates that the specified frequency
represents an absolute value. If a negative frequency is specified, the
computed frequency will be zero.
<semitones>
Specifies a relative change (decrement or increment) to the inherited
value. The syntax of allowed values is a <number> followed immediately by "st"
(semitones). A semitone interval corresponds to the step between each
note on an equal temperament chromatic scale. A semitone can therefore
be quantified as the difference between two consecutive pitch
frequencies on such a scale. The ratio between two consecutive frequencies
separated by exactly one semitone is the twelfth root of two
(approximately 1.05946). As a
result, the value in Hertz corresponding to a semitone offset is
relative to the initial frequency the offset is applied to (in other
words, a semitone doesn't correspond to a fixed numerical value in
Hertz).
<percentage>
Positive and negative percentage values
are allowed, to represent an increment or decrement (respectively)
relative to the inherited value. Computed values are calculated by
adding (or subtracting) the specified fraction of the inherited value,
to (from) the inherited value. For example, 50% (which is equivalent to
+50%) with an inherited value of 200Hz results in 200 + (200 × 0.5)
= 300Hz. Conversely, -50% results in 200 - (200 × 0.5) = 100Hz.
x-low, low, medium,
high, x-high
A sequence of monotonically non-decreasing pitch levels that are
implementation- and voice-specific. When the computed value for a given
element is only a keyword (i.e. no relative offset is specified), then
the corresponding absolute frequency will be re-evaluated on a voice
change. Conversely, the application of a relative offset requires the
calculation of the resulting frequency based on the current voice at the
point at which the relative offset is specified, so the computed
frequency will inherit absolutely regardless of any voice change further
down the style cascade. Authors should therefore only use keyword values
in cases where they wish that voice changes trigger the re-evaluation of
the conversion from a keyword to a concrete, voice-dependent frequency.
Computed absolute frequencies that are negative are clamped to zero
Hertz. Speech-capable user agents are likely to support a specific range
of values rather than the full range of possible calculated numerical
values for frequencies. The actual values in user agents may therefore be
clamped to implementation-dependent minimum and maximum boundaries. For
example: although the 0Hz frequency can be legitimately calculated, it may
be clamped to a more meaningful value in the context of the speech
synthesizer.
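A short sketch of the semitone arithmetic described above (the 200Hz baseline is an assumed inherited value, not one mandated by this specification):

```css
/* Assuming an inherited pitch of 200Hz:
   1st   => 200 × 2^(1/12)   ≈ 211.89Hz
   -12st => 200 × 2^(-12/12) = 100Hz (exactly one octave down) */
h2 { voice-pitch: 1st; }
h3 { voice-pitch: -12st; }
```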
Examples of property values:
h1 { voice-pitch: 250Hz; }          /* positive offset relative to the inherited absolute frequency */
h1 { voice-pitch: +250Hz; }         /* identical to the line above */
h2 { voice-pitch: +30Hz absolute; } /* not an increment */
h2 { voice-pitch: absolute 30Hz; }  /* identical to the line above */
h3 { voice-pitch: -20Hz; }          /* negative offset (decrement) relative to the inherited absolute frequency */
h4 { voice-pitch: -20Hz absolute; } /* illegal syntax => value ignored ("absolute" keyword not allowed with negative frequency) */
h5 { voice-pitch: -3.5st; }         /* semitones, negative offset */
h6 { voice-pitch: 25%; }            /* this means "add a quarter of the inherited value, to the inherited value" */
h6 { voice-pitch: +25%; }           /* identical to the line above */
10.4. The ‘voice-range’ property
Name: voice-range
Value: <frequency> && absolute | [[x-low | low | medium | high | x-high] || [<frequency> | <semitones> | <percentage>]]
Initial: medium
Applies to: all elements
Inherited: yes
Percentages: refer to inherited value
Media: speech
Computed value: one of the predefined pitch keywords if only the keyword is specified by itself, otherwise an absolute frequency calculated by converting the keyword value (if any) to a fixed frequency based on the current voice-family and by applying the specified relative offset (if any)
The ‘voice-range’ property specifies the
variability in the "baseline" pitch, i.e. how much the fundamental
frequency may deviate from the average pitch of the speech output. The
dynamic pitch range of the generated speech generally increases for a
highly animated voice, for example when variations in inflection are used
to convey meaning and emphasis in speech. Typically, a low range produces
a flat, monotonic voice, whereas a high range produces an animated voice.
Note that the functionality provided by this property is
related to the range attribute of the prosody element from the SSML markup
language [SSML].
<frequency>
A value in frequency units (Hertz or
kiloHertz, e.g. "100Hz", "+2kHz"). Values are restricted to positive
numbers when the ‘absolute’ keyword is specified. Otherwise (when the
‘absolute’ keyword is not specified), a
negative value represents a decrement, and a positive value represents
an increment, relative to the inherited value. For example, "2kHz" is a
positive offset (strictly equivalent to "+2kHz"), and "+2kHz absolute"
is an absolute frequency (strictly equivalent to "2kHz absolute").
absolute
If specified, this keyword indicates that the specified frequency
represents an absolute value. If a negative frequency is specified, the
computed frequency will be zero.
<semitones>
Specifies a relative change (decrement or increment) to the inherited
value. The syntax of allowed values is a <number> followed immediately by "st"
(semitones). A semitone interval corresponds to the step between each
note on an equal temperament chromatic scale. A semitone can therefore
be quantified as the difference between two consecutive pitch
frequencies on such a scale. The ratio between two consecutive frequencies
separated by exactly one semitone is the twelfth root of two
(approximately 1.05946). As a
result, the value in Hertz corresponding to a semitone offset is
relative to the initial frequency the offset is applied to (in other
words, a semitone doesn't correspond to a fixed numerical value in
Hertz).
<percentage>
Positive and negative percentage values
are allowed, to represent an increment or decrement (respectively)
relative to the inherited value. Computed values are calculated by
adding (or subtracting) the specified fraction of the inherited value,
to (from) the inherited value. For example, 50% (which is equivalent to
+50%) with an inherited value of 200Hz results in 200 + (200 × 0.5)
= 300Hz. Conversely, -50% results in 200 - (200 × 0.5) = 100Hz.
x-low, low, medium,
high, x-high
A sequence of monotonically non-decreasing pitch levels that are
implementation- and voice-specific. When the computed value for a given
element is only a keyword (i.e. no relative offset is specified), then
the corresponding absolute frequency will be re-evaluated on a voice
change. Conversely, the application of a relative offset requires the
calculation of the resulting frequency based on the current voice at the
point at which the relative offset is specified, so the computed
frequency will inherit absolutely regardless of any voice change further
down the style cascade. Authors should therefore only use keyword values
in cases where they wish that voice changes trigger the re-evaluation of
the conversion from a keyword to a concrete, voice-dependent frequency.
Computed absolute frequencies that are negative are clamped to zero
Hertz. Speech-capable user agents are likely to support a specific range
of values rather than the full range of possible calculated numerical
values for frequencies. The actual values in user agents may therefore be
clamped to implementation-dependent minimum and maximum boundaries. For
example: although the 0Hz frequency can be legitimately calculated, it may
be clamped to a more meaningful value in the context of the speech
synthesizer.
Examples of inherited values:
<body>
  <e1>
    <e2>
      <e3>
        <e4>
          <e5>
            <e6> ... </e6>
          </e5>
        </e4>
      </e3>
    </e2>
  </e1>
</body>

body { voice-range: inherit; }
/* the initial value is 'medium' (the actual frequency value depends on
   the current voice) */

e1 { voice-range: +25%; }
/* the computed value is ['medium' + 25%], which resolves to the
   frequency corresponding to 'medium' plus 0.25 times the frequency
   corresponding to 'medium' */

e2 { voice-range: +10Hz; }
/* the computed value is [FREQ + 10Hz], where "FREQ" is the absolute
   frequency calculated in the "e1" rule above */

e3 {
  voice-range: inherit; /* this could be omitted, but we explicitly specify it for clarity purposes */
  voice-family: "another-voice";
}
/* this voice change would have resulted in the re-evaluation of the
   initial 'medium' keyword inherited by the "body" element (i.e.
   conversion from a voice-dependent keyword value to a concrete,
   absolute frequency), but because relative offsets were applied down
   the style cascade, the inherited value is actually the frequency
   calculated at the "e2" rule above */

e4 { voice-range: 200Hz absolute; }
/* override with an absolute frequency which doesn't depend on the
   current voice */

e5 { voice-range: 2st; }
/* the computed value is an absolute frequency, which is the result of
   the calculation: 200Hz + two semitones (reminder: the actual
   frequency corresponding to a semitone depends on the base value to
   which it applies) */

e6 {
  voice-range: inherit; /* this could be omitted, but we explicitly specify it for clarity purposes */
  voice-family: "yet-another-voice";
}
/* despite the voice change, the computed value is the same as for "e5"
   (i.e. an absolute frequency value, independent from the current voice) */
10.5. The ‘voice-stress’ property
Name: voice-stress
Value: normal | strong | moderate | none | reduced
Initial: normal
Applies to: all elements
Inherited: yes
Percentages: N/A
Media: speech
Computed value: specified value
The ‘voice-stress’ property manipulates the
strength of emphasis, which is normally applied using a combination of
pitch change, timing changes, loudness and other acoustic differences.
The precise meaning of the values therefore depends on the language
being spoken.
Note that the functionality provided by this property is
related to the emphasis element from the SSML markup language [SSML].
normal
Represents the default emphasis produced by the speech synthesizer.
none
Prevents the synthesizer from emphasizing text it would normally
emphasize.
moderate and strong
These values are monotonically non-decreasing in strength. Their
application results in more emphasis than what the speech synthesizer
would normally produce (i.e. more than the value corresponding to
‘normal’).
reduced
Effectively the opposite of emphasizing a word.
Examples of property values, with HTML sample:
span.default-emphasis { voice-stress: normal; }
span.lowered-emphasis { voice-stress: reduced; }
span.removed-emphasis { voice-stress: none; }
span.normal-emphasis  { voice-stress: moderate; }
span.huge-emphasis    { voice-stress: strong; }
...
<p>This is a big car.</p>
<!-- The speech output from the line above is identical to the line below: -->
<p>This is a <span class="default-emphasis">big</span> car.</p>
<p>This car is <span class="lowered-emphasis">massive</span>!</p>
<!-- The "span" below is totally de-emphasized, whereas the emphasis in
     the line above is only reduced: -->
<p>This car is <span class="removed-emphasis">massive</span>!</p>
<!-- The lines below demonstrate increasing levels of emphasis: -->
<p>This is a <span class="normal-emphasis">big</span> car!</p>
<p>This is a <span class="huge-emphasis">big</span> car!!!</p>
11.1. The ‘voice-duration’ property
Name: voice-duration
Value: auto | <time>
Initial: auto
Applies to: all elements
Inherited: no
Percentages: N/A
Media: speech
Computed value: specified value
The ‘voice-duration’ property specifies how
long it should take to render the selected element's content (not
including audio cues, pauses and rests). Unless the value
‘auto’ is specified, this property takes precedence over the
‘voice-rate’ property, and should be used to determine a suitable
speaking rate for the voice. An element for which the ‘voice-duration’
property value is not ‘auto’ may have descendants for which the
‘voice-duration’ and ‘voice-rate’ properties are specified, but these
must be ignored. In other words, when a ‘<time>’ is specified for the
‘voice-duration’ of a selected element, it applies to the entire
element subtree (children cannot override the property).
Note that the functionality provided by this property is
related to the duration attribute of the prosody element from the SSML
markup language [SSML].
auto
Resolves to a used value corresponding to the duration of the speech
synthesis when using the inherited ‘voice-rate’.
<time>
Specifies a value in absolute time units
(seconds and milliseconds, e.g. "+3s", "250ms"). Only non-negative
values are allowed.
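A brief sketch of the precedence rule described above (the class name is illustrative, not taken from this specification):

```css
/* The element's content must be rendered in exactly 5 seconds,
   regardless of the inherited 'voice-rate'. */
p.timed { voice-duration: 5s; }

/* Ignored: descendants cannot override a non-'auto' 'voice-duration'
   set on an ancestor. */
p.timed span { voice-rate: x-slow; }
```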
The ‘list-style-type’ property of [CSS21] specifies three
types of list item markers: glyphs, numbering systems, and alphabetic
systems. The values allowed for this property are also used for the
counter() function of the ‘content’ property. The CSS Speech module
defines how to render these styles in the aural dimension, using speech
synthesis. The ‘list-style-image’ property of [CSS21] is ignored, and
instead the ‘list-style-type’ is used.
Note that the speech rendering of new features from the CSS
Lists and Counters Module Level 3 [CSS3LIST] is not covered in this
level of CSS Speech, but may be defined in a future specification.
disc, circle, square
For these list item styles, the user-agent defines (possibly based on
user preferences) what equivalent phrase is spoken or what audio cue is
played. List items with graphical bullets are therefore announced
appropriately in an implementation-dependent manner.
decimal, decimal-leading-zero, lower-roman, upper-roman,
georgian, armenian
For these list item styles, corresponding numbers are spoken as-is by
the speech synthesizer, and may be complemented with additional audio
cues or speech phrases in the document's language (i.e. with the same
TTS voice used to speak the list item content) in order to indicate the
presence of list items. For example, when using the English language,
the list item counter could be prefixed with the word "Item", which
would result in list items being announced with "Item one", "Item two",
etc.
lower-latin, lower-alpha, upper-latin, upper-alpha,
lower-greek
These list item styles are spelled out letter-by-letter by the speech
synthesizer, in the document language (i.e. with the same TTS voice used
to speak the list item content). For example, ‘lower-greek’ in English
would be read out as "alpha", "beta", "gamma", etc. Conversely,
‘upper-latin’ in French would be read out as /a/, /be/, /se/, etc.
(phonetic notation).
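The behavior described above can be illustrated with a short sketch (the class names are illustrative, and the spoken phrasing in the comments is implementation-dependent):

```css
/* Counters may be announced e.g. as "Item one", "Item two", ... */
ol.numbered { list-style-type: decimal; }

/* Spelled out letter-by-letter: "alpha", "beta", "gamma", ...
   when the document language is English. */
ol.greek { list-style-type: lower-greek; }
```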
Note that it is common for user-agents such as screen readers
to announce the nesting depth of list items, or more generally, to
indicate additional structural information pertaining to complex
hierarchical content. The verbosity of these additional audio cues and/or
speech output can usually be controlled by users, and contribute to
increasing usability. These navigation aids are implementation-dependent,
but it is recommended that user-agents supporting the CSS Speech module
ensure that these additional audio cues and speech output don't generate
redundancies or create inconsistencies (for example: duplicated or
different list item numbering scheme).
The ‘content’ property can be used to replace one string by another.
The functionality provided by this property is related to the alias
attribute of the sub element from the SSML markup language [SSML].
In this example, the abbreviation is rendered using the content of the
title attribute instead of the element's content.
/* This replaces the content of the selected element by the
   string "World Wide Web Consortium". */
abbr { content: attr(title); }
...
<abbr title="World Wide Web Consortium">W3C</abbr>

In a similar way, text strings in a document can be replaced by a
previously recorded version. In this example - assuming the format is
supported, the file is available and the UA is configured to do so - a
recording of Sir John Gielgud's declamation of the famous monologue is
played. Otherwise the UA falls back to render the text using
synthesized speech.
.hamlet { content: url(./audio/gielgud.wav); }
...
<div class="hamlet">
  To be, or not to be: that is the question:
</div>

Furthermore, authors (or users via a user stylesheet) may add some
information to ease the understanding of structures during non-visual
interaction with the document. They can do so by using the ‘::before’
and ‘::after’ pseudo-elements. Note that different stylesheets can be
used to define the level of verbosity for additional information spoken
by screen readers.
This example inserts the string "Start list: " before a list and the
string "List item: " before the content of each list item. Likewise, the
string "List end. " gets inserted after the list to inform the user that
the list speech output is over.
ul::before { content: "Start list: "; }
ul::after  { content: "List end. "; }
li::before { content: "List item: "; }

Detailed information can be found in the CSS3 Generated and Replaced
Content module [CSS3GENCON].
The rel value allows importing pronunciation lexicons in HTML
documents using the link element (similar to how CSS stylesheets can be
included). The W3C PLS (Pronunciation Lexicon Specification)
[PRONUNCIATION-LEXICON] is one format that can be used to describe such
a lexicon.
Additionally, an attribute-based mechanism can be used within the
markup to author text-pronunciation associations. At the time of
writing, such a mechanism isn't formally defined in the W3C HTML
standard(s). However, the EPUB 3.0 draft specification allows (x)HTML5
documents to contain attributes derived from the [SSML] specification,
which describe how to pronounce text based on a particular phonetic
alphabet.
Property | Values | Initial | Applies to | Inh. | Percentages | Media |
---|---|---|---|---|---|---|
cue | <‘cue-before’> <‘cue-after’>? | N/A (see individual properties) | all elements | no | N/A | speech |
cue-after | <uri> <decibel>? | none | none | all elements | no | N/A | speech |
cue-before | <uri> <decibel>? | none | none | all elements | no | N/A | speech |
pause | <‘pause-before’> <‘pause-after’>? | N/A (see individual properties) | all elements | no | N/A | speech |
pause-after | <time> | none | x-weak | weak | medium | strong | x-strong | none | all elements | no | N/A | speech |
pause-before | <time> | none | x-weak | weak | medium | strong | x-strong | none | all elements | no | N/A | speech |
rest | <‘rest-before’> <‘rest-after’>? | N/A (see individual properties) | all elements | no | N/A | speech |
rest-after | <time> | none | x-weak | weak | medium | strong | x-strong | none | all elements | no | N/A | speech |
rest-before | <time> | none | x-weak | weak | medium | strong | x-strong | none | all elements | no | N/A | speech |
speak | auto | none | normal | auto | all elements | yes | N/A | speech |
speak-as | normal | spell-out || digits || [ literal-punctuation | no-punctuation ] | normal | all elements | yes | N/A | speech |
voice-balance | <number> | left | center | right | leftwards | rightwards | center | all elements | yes | N/A | speech |
voice-duration | auto | <time> | auto | all elements | no | N/A | speech |
voice-family | [[<name> | <generic-voice>],]* [<name> | <generic-voice>] | preserve | implementation-dependent | all elements | yes | N/A | speech |
voice-pitch | <frequency> && absolute | [[x-low | low | medium | high | x-high] || [<frequency> | <semitones> | <percentage>]] | medium | all elements | yes | refer to inherited value | speech |
voice-range | <frequency> && absolute | [[x-low | low | medium | high | x-high] || [<frequency> | <semitones> | <percentage>]] | medium | all elements | yes | refer to inherited value | speech |
voice-rate | [normal | x-slow | slow | medium | fast | x-fast] || <percentage> | normal | all elements | yes | refer to default value | speech |
voice-stress | normal | strong | moderate | none | reduced | normal | all elements | yes | N/A | speech |
voice-volume | silent | [[x-soft | soft | medium | loud | x-loud] || <decibel>] | medium | all elements | yes | N/A | speech |
class="example", like this:
This is an example of an informative example.
Informative notes begin with the word "Note" and are set apart from the
normative text with class="note", like this:
, like this:
Note, this is an informative note.
Conformance to the CSS3 Speech module is defined for three classes:
style sheet
A CSS style sheet.
renderer
A UA that interprets the semantics of a style sheet and renders documents that use them.
authoring tool
A UA that writes a style sheet.
A style sheet is conformant to the CSS3 Speech module if all of its
declarations that use properties defined in this module have values that
are valid according to the generic CSS grammar and the individual grammars
of each property as given in this module.
A renderer is conformant to the CSS3 Speech module if, in addition to
interpreting the style sheet as defined by the appropriate specifications,
it supports all the properties defined by CSS3 Speech module by parsing
them correctly and rendering the document accordingly. However the
inability of a UA to correctly render a document due to limitations of the
device does not make the UA non-conformant. (For example, a UA is not
required to render color on a monochrome monitor.)
An authoring tool is conformant to CSS3 Speech module if it writes
syntactically correct style sheets, according to the generic CSS grammar
and the individual grammars of each property in this module.
●Renamed ‘voice-pitch-range’ to ‘voice-range’, which is compatible with
SSML's notation, and removes the possibility to interpret this property
as being a subset of ‘voice-pitch’.
●Fixed "computed value" for ‘voice-pitch’ and ‘voice-range’
properties, and added the possibility to combine a keyword with a
relative change.
●Removed the "phonemes" property (and its associated "@alphabet"
at-rule).
●Renamed ‘speakability’ to ‘speak’, and ‘speak’ to ‘speak-as’.
Reorganized the ‘speak-as’ values
to allow mixing different types.
●Added support for lists and counters (item styles, numbering, etc.).
●Adjusted the [initial] value for shorthand properties, to be
consistent with other CSS specifications (i.e. "see individual
properties"), and removed the erroneous "inherit" value.
●Fixed ‘voice-volume’ by conforming to SSML 1.1
(dB scale, etc.).
●Fixed the [initial] values for ‘pause’ and ‘rest’, which should be zero (were
"implementation-dependent").
●Corrected the [initial] values for ‘voice-range’ and ‘voice-pitch’ to
"medium".
●Added an "auto" value to ‘voice-duration’, which is the [initial]
property value as well.
●Handling of ‘voice-balance’ values outside of the
allowed range (clamping).
●Fixed ‘voice-balance’ prose to better explain
the relationship between author intent (stereo sound distribution) and
actual user sound system setup (mono, stereo, or surround speaker layout
/ mixing capabilities).
●Added prose for ‘voice-balance’ to describe the mapping
between stereo left-right sound axis and three-dimensional sound stage
(azimuth support in future versions of CSS-Speech).
●Fixed the "computed value" for ‘voice-balance’.
●Added the ‘normal’ value for
voice-rate ("default" in SSML 1.1).
●Fixed the "computed value" for voice-rate, and added the possibility
to combine keywords and percentages (to be consistent with
‘voice-volume’). Added an example to
illustrate inheritance and value resolution.
●Renamed voice-family fields to be consistent with SSML.
●Improved the ‘voice-family’ selection algorithm to
cater for language changes.
●Separated definition of semitones (pitch properties).
●More consistent behavior when audio cue URI fails (for whatever
reason).
●Enabled voice-family names to contain spaces, matching
‘font-family’ syntax which is based on quoted
strings and concatenated identifiers.
●Added a new section to define the relationship of this specification
with CSS2.1.
●Added the missing "Computed value" line to each property definition.
●Cleaned-up the list of module dependencies, and removed redundant
"module dependencies" section.
●Voice age keywords now mapped to SSML ages.
●Improved the pause collapsing prose, removed redundant paragraphs.
●Added the missing ‘normal’ value for ‘voice-stress’.
●Separated the ‘absolute’ keyword for ‘voice-pitch’ and ‘voice-range’.
●Improved document structure by adding sub-sections.
●Removed the implicit ‘inherit’ value for all properties.
●Fixed typos and made other minor edits.