Apple Video System

I think CelGen meant the media captured was interlaced up until RGB component output devices came along. AFAIK the digitized images displayed in the AVP window were progressive scan, though I could be wrong; there's an oddity about that window. When you do a screen cap with an active AVP window, it shows up as a blank frame within the desktop image.

As for the interpolated/pixel-doubled full-screen image: my kid's media station's 17" Trinitron seemed crisper than the same VHS/cable image on the 15" Trinitron TV in the master bedroom. I never did a side-by-side comparison, different room and all, but it was good enough that the tuner card and a 21" Radius Trinitron display became my VHS/DVD home theater setup (I went without a TV from Y2K until I got a Roku last month). Component output from the VCR and DVD player looked fine even on the 32" 720p flat screen I picked up seven years ago.

Forget any kind of serious VidCap. Clips, and especially stills, from a VidCam were tons of fun in the mid-nineties on a consumer-grade Mac with an economy digitizer. That experience is what you should be looking for from the card in this day and age.

 
Just because the NTSC standard says something doesn't mean VHS is up to it.

Regular VHS was pretty abysmal, really; your recording would be about 250 lines of horizontal resolution for both VHS and Beta.

S-VHS is said to do about 400 lines.

As far as I could tell, the cards in the Performas were really meant for pulling extremely short clips off of cameras and integrating them with HyperCard projects, or re-sequencing them in Premiere or the home-focused Avid product.

If you had Premiere and a really good deck (one with serial control) you could do "sort of non-real-time" video capture, where it would pull in something like five to ten seconds of video at 30fps, put it on the disk, rewind, rinse, and repeat. It's exactly as painful as it sounds, and it's probably why big expensive capture cards with dedicated compression chips existed for Macs where you could DMA the video onto a beefy SCSI card.
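For what it's worth, here's a rough Python sketch of what that deck-controlled batch capture amounts to. The deck and digitizer objects and their methods are hypothetical placeholders for illustration, not Premiere's or any driver's actual API; it's just the loop structure.

```python
# Sketch of "sort of non-real-time" batch capture with a serial-controlled
# deck. Everything here (deck, digitizer, their methods) is a hypothetical
# placeholder, not a real API.

FPS = 30
CHUNK_SECONDS = 5   # how much video the digitizer can take in per pass


def batch_capture(deck, digitizer, in_point, out_point, out_file):
    """Capture tape from in_point to out_point (in seconds) in short passes."""
    position = in_point
    while position < out_point:
        chunk = min(CHUNK_SECONDS, out_point - position)
        deck.cue(position)                    # serial deck control: seek the tape
        deck.play()
        frames = digitizer.grab(chunk * FPS)  # grab a few seconds at 30 fps
        deck.stop()
        out_file.write(frames)                # flush to disk while the deck is idle
        position += chunk                     # then cue back up and repeat
```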

Desktop video didn't really become good (at least for big projects) until the Power Macintosh G3, and I'm tempted to reserve "good" for the blue-and-white, when FireWire and big/fast IDE disks meant you could pull down a whole DV tape in real time without losing any frames.
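As a rough illustration of why the disks mattered: DV runs at a fixed data rate, about 3.6 MB/s on disk once audio and subcode are included (that figure is the commonly quoted approximation, not something from this thread), so the sums are easy to do.

```python
# Back-of-the-envelope DV storage math: the DV stream is a fixed
# 25 Mbit/s of video, roughly 3.6 MB/s on disk with audio and subcode.

DV_MB_PER_SEC = 3.6  # approximate sustained write rate needed

for minutes in (5, 20, 60):
    gigabytes = DV_MB_PER_SEC * 60 * minutes / 1024
    print(f"{minutes:>2} min of DV ≈ {gigabytes:.1f} GB")

# A full 60-minute tape is ~13 GB, which is why big/fast IDE drives in the
# blue-and-white G3 era made whole-tape captures practical.
```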

It's fun to capture a little bit anyway, though. It's one of the things I'm eventually going to be set up to do with my 840.

 
Media 100 NuBus and Radius VideoVision NuBus were pretty good well before the G3 came out, and Avid Media Composer systems on Q950s were editing whole movies before the G3, too.

By the time FireWire cameras were out, home video editing was cheap, since you didn't need special hardware for capturing (though you still needed special hardware for real-time effects, like the Matrox RTMac, for example). These days everything is done in software, with RAID arrays for storage.

 
One thing that's slightly confusing when discussing video resolution in the TV/video world is that there are two different usages of the word "lines".

The first refers to scan lines: i.e., how many horizontal lines the beam draws per frame or field.

The other common usage - often cited as "TV lines" - refers to the horizontal resolution of the system: i.e., how many discrete pixels per scan line you can expect to be able to distinguish.

It's often difficult to make out which is being cited unless you look closely at the context. A CRT broadcast monitor for, say, NTSC will have a set number of scan lines (400-something visible, 480 including the overscan), but the second spec, "TV lines", tells you what horizontal resolution it's capable of displaying.

I guess the second usage comes from the use of standard test-pattern cards with different-sized line grids on them, which let you see how much fine detail the system can resolve.
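One handy consequence: since "TV lines" are quoted per picture height, you can convert them into a rough horizontal pixel count across a 4:3 frame by scaling by the aspect ratio. A quick sketch using the VHS and S-VHS figures mentioned earlier in the thread (the ~330 TVL broadcast number is just the commonly quoted reference point, not from this thread):

```python
# Convert "TV lines" (quoted per picture height) into an approximate
# horizontal pixel count across the full 4:3 picture width.

ASPECT = 4 / 3

formats = {
    "VHS / Beta": 250,      # figure cited earlier in the thread
    "Broadcast NTSC": 330,  # commonly quoted reference point
    "S-VHS": 400,           # figure cited earlier in the thread
}

for name, tvl in formats.items():
    print(f"{name:>14}: {tvl} TVL ≈ {round(tvl * ASPECT)} px across the width")
```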

 