True or False?

Paragon

Interstellar traveler
I read in a Mac magazine that in DVD Studio Pro you can use a variable bit rate for your movies. They say that if you have a still picture, like scenery, the bit rate is low, and that it's high for fast-moving pictures. I thought it was the other way around, or am I wrong?
 
I think you're wrong. From what I know about video editing (which is about this -> <- much, and that isn't a lot, btw ;) ), the still images would get a lower bit rate.

For a still image you can tell the display to hold the same image. For an image which changes only a little, you tell the display "Hold the image the same but change this area to display that". For a moving image (assuming that the full image is moving, not just part) you have to transmit to the display the entire viewable area. Note that I refer to 'display', but in reality I mean the 'device that does the decoding of the digital signal and figures out how to display it on the viewing device'. Display is just shorter. :)

I'm not sure if this is how DVDs actually work, but it's how I was taught to do computer animation. Why transmit the whole darn image again if only a section is going to change?
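
Here's a rough sketch of that "only send what changed" idea in Python. The pixel grids, values, and the diff_frame helper are made up purely for illustration; real DVD/MPEG encoders are far more involved than this:

```python
# Toy illustration: transmit only the pixels that changed since the last frame.
# Frames are tiny grids of pixel values; everything here is hypothetical.

def diff_frame(prev, curr):
    """Return only the (row, col, new_value) triples that changed between frames."""
    changes = []
    for r, (prev_row, curr_row) in enumerate(zip(prev, curr)):
        for c, (old, new) in enumerate(zip(prev_row, curr_row)):
            if old != new:
                changes.append((r, c, new))
    return changes

frame_a = [[0, 0, 0],
           [0, 5, 0],
           [0, 0, 0]]

frame_b = [[0, 0, 0],
           [0, 7, 0],   # only one pixel changed
           [0, 0, 0]]

frame_c = [[9, 9, 9],
           [9, 9, 9],   # every pixel changed
           [9, 9, 9]]

print(len(diff_frame(frame_a, frame_b)))  # 1 change  -> few bits to send
print(len(diff_frame(frame_b, frame_c)))  # 9 changes -> many bits to send
```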
 
Hrm. I don't know much. But I was under the impression that it's the other way around. Slow-moving scenery and such gets high rates, because you're more likely to notice artifacts and poor quality than when stuff is going crazy and action-oriented, during which time you get the low bit rates. Kind of like VBR in MP3s: it's high when the music is calmer, like classical, and lower when it's speedy death metal, because you're less likely to notice. See my thread on VBR for more info - I asked the same question, and what I wrote above was what I learned!

:-)
 
Most codecs (coder-decoders; examples are MPEG, Sorenson 3, Cinepak, RGB planar, etc.) use some sort of psychovisual algorithm to determine which content needs to change from frame to frame and which doesn't. An explosion sequence, for example, in which each pixel of the frame changes color to a great degree, demands a higher bit rate; a still image that stays on screen for, say, five seconds (120 frames on film, 150 DV frames, 149.85 NTSC frames, etc.) will not change at all. The psychovisual components of the codec pick up on this and lower the bit rate accordingly.
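
To make the "more change means more bits" point concrete, here's a toy sketch. The numbers and the allocate_bitrate helper are hypothetical, not anything a real codec does literally:

```python
# Toy illustration of "more change between frames -> higher bit rate".

def frame_difference(prev, curr):
    """Fraction of pixels that changed (0.0 = still image, 1.0 = everything changed)."""
    total = sum(len(row) for row in curr)
    changed = sum(1 for prev_row, curr_row in zip(prev, curr)
                    for old, new in zip(prev_row, curr_row) if old != new)
    return changed / total

def allocate_bitrate(change, floor_kbps=2000, ceiling_kbps=8000):
    """Scale the bit budget between a floor and a ceiling (made-up numbers)."""
    return floor_kbps + change * (ceiling_kbps - floor_kbps)

still     = [[1, 1], [1, 1]]
explosion = [[9, 3], [7, 2]]

print(allocate_bitrate(frame_difference(still, still)))      # 2000.0 -> low bit rate
print(allocate_bitrate(frame_difference(still, explosion)))  # 8000.0 -> high bit rate
```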
 
Easy, unlearnthetruth--audio and video codecs work differently. Video needs to trick the eye, which is easier than tricking the ear. Advanced audio codecs allow the bit rate to change throughout a track, so, for example, the thirty seconds of shouting that occurs before the music at a rock concert can be encoded at, say, 8 bits per sample, 11k samples per second, on only one channel, while the actual music might be recorded at 8 bits per sample, 22k samples per second, on two (i.e., stereo) channels. Logically, the intro should be part of the same track as the song, but it can be VBR'd to reduce file size.
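
If you do the arithmetic on those two settings as plain uncompressed samples (using the rounded 11k/22k figures; the usual consumer rates are 11025 and 22050 Hz), the music portion costs roughly four times the bits of the intro:

```python
# Quick arithmetic for the raw sample rates implied by the settings above.

def pcm_bitrate(bits_per_sample, samples_per_second, channels):
    """Uncompressed bits per second for the given settings."""
    return bits_per_sample * samples_per_second * channels

intro = pcm_bitrate(8, 11_000, 1)   #  88,000 bps, roughly  86 kbps
music = pcm_bitrate(8, 22_000, 2)   # 352,000 bps, roughly 344 kbps

print(intro, music, music / intro)  # the music needs 4x the bits of the intro
```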
 
Chenly: So what you are saying is that in video you need a low bit rate for slow pictures, but in music it's the other way around: you need a low bit rate for "noisy" music?
 
You can use an extremely low bit rate for still images in video sequences. Audio is more subjective; I would never rip, say, Beethoven into MP3 if I could possibly get around it, but Eminem would be fine. If I were to put ol' Ludwig into MP3, it would be at 320 kbps and full stereo (as long as the original recording was stereo); Eminem would probably sound fine at 160 kbps in stereo, but that's just my opinion. Again, since audio codecs cannot fool the ears the way video codecs fool the eyes, it's one big judgment call. Essentially, there are tricks available for video encoding, but no freebies at all in audio: a higher bit rate will ALWAYS sound better than a lower one, and the challenge for the audio engineer is to find the sweet spot between small file size and high sound quality for a given recording.
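
For a sense of what those bit rates mean on disk, here's the quick math for a hypothetical four-minute track at a constant bit rate (a VBR rip would land somewhere in between, depending on the material):

```python
# Rough file-size math for the bit rates mentioned above; track length is made up.

def mp3_size_mb(kbps, seconds):
    """Approximate file size in megabytes for a constant-bit-rate stream."""
    return kbps * 1000 * seconds / 8 / 1_000_000

track_length = 4 * 60  # a hypothetical 4-minute track

print(mp3_size_mb(320, track_length))  # ~9.6 MB for the Beethoven treatment
print(mp3_size_mb(160, track_length))  # ~4.8 MB at the lower rate
```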
 