October 27, 2014

5 Things Every Photographer Learning Video Should Know about the Panasonic GH4 and 4K



by Ron Dawson

When I first sat down to write this review, I was in a quandary. What could I contribute to the Panasonic GH4 conversation that has been going on since the beginning of this year? My research was quickly swept into an acronym, spec, and nomenclature-heavy stream of information. Then I stumbled on a forum discussion about the claim that the 4K 4:2:0 8-bit video captured by the GH4 could be converted to 2K 10-bit 4:4:4 color space when transcoded to CineForm or ProRes.

If your eyes glazed over reading that last sentence, I’m not surprised. I consider myself a technically capable and informed filmmaker, and I’ve been doing it professionally now for more than a dozen years. I have instructed on the topic for a number of media outlets and national seminars, and even I felt tech-timidated. Photographers who are just learning video don’t need to be dunked into the deep end head-first like that.

Here are five short GH4/4K nuggets that will give you enough background to follow the reviews out there and the foundation to make an informed decision when considering this camera and other 4K technology.

1. Two flavors of 4K. There’s true 4K, specifically 4,096x2,160-pixel resolution (Cinema 4K), and then there’s Ultra HD (aka UHD), which is 3,840x2,160. UHD is four times the resolution of the standard HD spec, 1,920x1,080 (twice the width and twice the height). Most consumer 4K TVs are UHD.
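If you like to see the arithmetic behind that "four times" figure, here's a quick sketch (Python, purely illustrative):

```python
# Pixel counts for the common resolutions (width x height).
FORMATS = {
    "HD 1080p":  (1920, 1080),
    "UHD":       (3840, 2160),
    "Cinema 4K": (4096, 2160),
}

for name, (w, h) in FORMATS.items():
    print(f"{name}: {w * h:,} pixels")

# UHD has exactly four times the pixels of 1080p.
assert 3840 * 2160 == 4 * (1920 * 1080)
```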

2. Micro Four Thirds sensor size. The GH4 has a Micro Four Thirds (MFT) sensor, with a viewing area of 17.3x13.0mm (21.6mm diagonal). This is important to know because of the crop factor (just over 2X). Here’s how it compares to other DSLRs.


Now before you write off this camera because of its small sensor, consider that an MFT sensor is larger than Super 16mm film, a format that was used to shoot part of Darren Aronofsky’s Oscar-nominated feature “Black Swan.” Many other well-known filmmakers have shot classic features on 16mm (e.g., Spike Lee’s “She’s Gotta Have It,” Shane Carruth’s “Primer,” Robert Rodriguez’s “El Mariachi,” and Kevin Smith’s “Clerks”). My point: don’t let sensor size alone keep you from choosing a camera, unless some specific characteristic of a smaller (or larger) sensor truly matters for your work.


For instance, larger sensors can produce a shallower depth of field at equivalent framing and aperture, so if for any reason you need super-shallow DoF, a smaller sensor could be a limitation.

Another point worth noting about MFT cameras is the number of lenses and lens adapters already out there that fit them. The Metabones Speed Booster is a popular adapter that lets you mount full-frame lenses. It won’t give you a full-frame field of view, but you’ll get one closer to APS-C (1.6X crop).

3. Recording format and quality. The biggest appeal of this camera is its ability to record 4K (both Cinema and UHD) directly to an SD card. When I first used the camera, I had a heck of a time figuring out how to do that. It turns out there are two menus you need to set: Recording Format (AVCHD, MP4, MP4 LPCM, or MOV) and Recording Quality. The Quality menu is where you select 4K.



SIDEBAR: Format and quality primer

Here’s where I’d like to provide some filmmaker insight that may cause some of the aforementioned head-spinning. You’ll notice that the GH4 has literally dozens of format and quality settings:

201410we_gh4 specs.png

Where on earth do you start? Here’s a quick primer:

Mbps is megabits per second. The higher the number, the better the quality of the video. For perspective, the Canon EOS 5D Mark II’s H.264 video is approximately 45Mbps, and the Nikon D7100 is in the neighborhood of 24Mbps. Traditional HD camcorders produced in the early to mid-2000s were also in the mid-20s.

All-I is “intra-frame” compression and IPB is “inter-frame” compression. The former compresses each frame individually; IPB looks at the frames before and after and bases its compression on changes in the image. Theoretically, All-I gives you better quality, but it produces much larger files, which is why you’ll notice that none of the 4K settings are All-I. If you’re concerned about file sizes (i.e., hard drive space or SD card capacity), stick with the IPB formats; just be aware that IPB can be more demanding to decode during editing if your computer is older.

The formats are AVCHD, MP4, and MOV. These are all what’s called “wrappers.” In the world of video there are codecs (how the image is compressed) and wrappers (the container the compressed video is placed in). The GH4 uses the aforementioned wrappers for its H.264-compressed video. (Note: the MP4 wrapper should NOT be confused with the MPEG-4 codec.) You can have any number of different codecs inside any one kind of wrapper. On the GH4, you’ll find the 4K quality settings in the MP4 and MOV formats. AVCHD is a format common in consumer camcorders and lower-end cinema cameras like Canon’s C100. It provides relatively high-quality footage at a low Mbps compression rate. Editing AVCHD footage can be tricky, though, depending on which editing software you use. It’s not as easy as just dropping clips into a folder: AVCHD files are self-contained in a “package.” Just about all current versions of the major editing programs can decode this package and extract the videos you need, but each handles it differently. Know how your editing program works before choosing AVCHD.

LPCM and AAC are different audio compression formats. Frankly, I wouldn’t base your choice of format on this. Even though the camera lets you record (and monitor) audio, if you need audio I still highly recommend using a separate digital audio recorder. One note, though: if you decide to go with MP4, the LPCM option is best suited for video you want to edit later.


4. High-speed recording. For my money, in addition to the 4K recording capability, another huge benefit of this camera is the ability to shoot slow motion IN CAMERA, at full 1080p resolution. As a quick review: most of you will be shooting at either 29.97 frames per second (aka 30p), 23.98 fps (aka 24p), or 25 fps if you’re in a PAL country. You can always slow down video in your editing software, but this reduces the quality and can make it look muddy. To achieve true slow motion with better quality, you need to shoot at a frame rate higher than the rate at which you’re editing. Most traditional DSLRs can shoot up to 60 fps, which in a 24 fps project yields 40% slow motion (24/60 = 40%). However, they have to drop their resolution to 1,280x720 (aka 720p) to do it.

The GH4 allows you to record up to 96 fps in camera using the Variable Frame Rate (VFR) function. In fact, using this function, you can step your frame rate anywhere from 2 fps (which will give you a sped up video equivalent to 1200% speed at 24p), up to 96 fps, giving you 25% slow motion at 24 fps. All at full 1,920x1,080 resolution.
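The arithmetic behind those percentages is simple. Here's a small illustrative sketch (Python, just for the math; nothing the camera runs):

```python
# Apparent playback speed when footage shot at one frame rate is
# conformed to a project frame rate: speed = project_fps / shot_fps.
def playback_speed(shot_fps: float, project_fps: float = 24.0) -> float:
    return project_fps / shot_fps

print(f"{playback_speed(60):.0%}")  # 60 fps in a 24p project: 40% slow motion
print(f"{playback_speed(96):.0%}")  # 96 fps: 25% (quarter speed)
print(f"{playback_speed(2):.0%}")   # 2 fps: 1200% (12x speed-up)
```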


To use VFR, first set the Recording Format to MP4 (LPCM) or MOV. Once you’ve set the format, change the Quality setting to 1080p at either 29.97 or 23.98 fps. Then exit back to the Motion Picture menu and turn VFR on.


What’s very cool about this feature is that the GH4 shoots at the target 24 fps but takes the VFR setting into account, in effect giving you slow motion in camera. Usually you have to shoot at the higher frame rate, then change that frame rate to your project frame rate in your editing software, a process known as “conforming.” In short, with the GH4’s VFR feature you don’t have to conform your footage; it comes into your editing software already slow.

It’s worth mentioning that you can still shoot regular high-speed rates (e.g., 60 fps) at full 1080p, then conform later if you like. Why might you do that? Because the VFR functions are only available at the 100Mbps compression level (remember, the higher the Mbps, the better the quality). In MOV mode you can shoot up to 60 fps (technically 59.94) at 200Mbps, All-I. If you need that extra quality and don’t need anything slower than 40% slow motion, you might choose this in lieu of VFR.

5. Downscaling 4K to 1080p. As amazing as it may be to shoot 4K in camera, the truth is, most people do not have the ability to view videos in 4K. Unless you’re shooting something to be shown in a movie theatre, a full-sized 4K video will be useless to your client. But, fear not. There are two very significant reasons why shooting in 4K is better, even if your final output is traditional 1,920x1,080. And both are related to downscaling the video.

If you edit a 4K video in a 1080p project, you have four times the viewing area (twice the width and twice the height). That gives you the ability to “push in” for close-ups or reposition your image without losing quality. Here are three screen shots from a video I shot at UHD to illustrate:

A 4K UHD image set to 50% of the video size (which fits perfectly into a 1080p project)

uhd in 1080 timeline - 50.png

Here’s the same shot, but with the video size adjusted to 75%

uhd in 1080 timeline - 75.png

And here’s the shot at 100%, giving me a nice close-up of the subject.

uhd in 1080 timeline - 100.png
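The scaling math in those screenshots can be sketched in a few lines (a toy illustration; the numbers are simply the UHD and 1080p frame sizes, not output from any editing program):

```python
# How far you can "punch in" on a UHD clip in a 1080p timeline without
# dropping below native resolution. The area is 4x, but the usable
# punch-in is linear: 2x in each dimension.
SOURCE   = (3840, 2160)   # UHD frame
TIMELINE = (1920, 1080)   # 1080p project

fit_scale    = TIMELINE[0] / SOURCE[0]   # 0.5 -> 50% scale fits the whole frame
max_punch_in = SOURCE[0] / TIMELINE[0]   # 2.0 -> up to a 2x close-up

print(f"fit scale: {fit_scale:.0%}, max punch-in: {max_punch_in:g}x")
```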

It’s not uncommon for commercial video jobs to set up two cameras, one for the wide shot and a second for the close-up. If you shoot in 4K, you can get both in one shot. Or do as I did and use the second camera for a super-wide shot. (My second camera was a Canon C100 shooting at 1080p, ungraded, using the Wide Dynamic Range profile.)


Just think about all the editing options afforded you by having one of your angles shot at 4K resolution.

The second benefit of shooting in 4K actually relates to the head-spinning experience I mentioned at the beginning of this article: the ability to convert the 4:2:0 8-bit 4K video to 10-bit 4:4:4.

As I promised, my goal with this piece is to keep you from being overwhelmed by technical jargon you may not know if you’re not a classically trained cinematographer (or a color scientist or mathematician). So I’ll keep this as simple as possible.

Most DSLRs record 4:2:0 video. The numbers express the ratio of luma (brightness) samples to chroma (color) samples (see Wikipedia: chroma subsampling). Again, the higher the numbers, the better (4 being the highest). The other subsampling schemes popular in the video world are 4:2:2 (as in ProRes 422) and 4:4:4 (or even 4:4:4:4, with the fourth 4 representing an alpha channel).

Furthermore, the color depth of the GH4’s video is only 8-bit (as opposed to 10-bit). Many Professional Photographer readers are well familiar with bit depth. Without getting into all the math, 10-bit records 1,024 tonal levels per channel versus 256 for 8-bit.

The theory is that if you transcode (i.e. convert) 4K footage to a ProRes 444 or CineForm 444 codec, the 4X resolution, when compressed down to 1080p, actually yields a richer color space. The extra pixels, in essence, increase your chroma values (the 2 and the 0), so that 4:2:0 becomes 4:4:4 (FYI: CineForm is created by GoPro and is most common on Windows machines whereas ProRes is common on Macs). The math for this actually works out. However, there is still debate on whether you actually get a 10-bit image from an 8-bit video. But it doesn’t matter. The 4:4:4 color space will give you a higher quality 1080p image than if you shot the video at 1080p. This will allow for better color grading or motion graphics work.
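Here’s a toy sketch of why the chroma argument works. In 4:2:0 video, one chroma sample covers a 2x2 block of luma pixels; halve the frame in each dimension and every output pixel ends up with its own chroma sample. (This is a simplified model for illustration, not actual codec code.)

```python
# A tiny stand-in frame: luma sampled per pixel, chroma per 2x2 block (4:2:0).
W, H = 8, 4  # toy dimensions (must be even)
luma   = [[(x + y) % 256 for x in range(W)] for y in range(H)]
chroma = [[(x * 7) % 256 for x in range(W // 2)] for y in range(H // 2)]

# Downscale 2x: average each 2x2 block of luma into one output pixel.
out_luma = [[(luma[2*y][2*x] + luma[2*y][2*x+1] +
              luma[2*y+1][2*x] + luma[2*y+1][2*x+1]) // 4
             for x in range(W // 2)] for y in range(H // 2)]

# The chroma plane already holds exactly one sample per output pixel,
# so the downscaled frame is effectively 4:4:4 -- no chroma interpolation.
out_chroma = chroma
assert len(out_luma) == len(out_chroma) and len(out_luma[0]) == len(out_chroma[0])
```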

I ran a test where I compared 4K footage (transcoded to ProRes 4444 at 1,920x1,080) to 4K 4:2:0 footage dropped in a 1080p timeline. I then applied a Curves filter to it and adjusted some of the color values. The transcoded footage is on the left. The 4K footage (dropped in the 1080p timeline) is on the right.


If you click to view the full-size image at 100%, you should be able to see a noticeable quality difference.

(You can download the 200% image on my blog and check it out at http://j.mp/ddmag-gh4test1.)

You’ll need to use a program like MPEG Streamclip (squared5.com) or Apple Compressor, or GoPro’s CineForm to convert the footage. But it’s well worth having that extra color quality, particularly if you plan to do a lot of color grading and/or motion graphics work.

The Final Verdict

In my opinion, this is an amazing camera and deserves serious consideration. I definitely plan to use it for my shoots for all the reasons I mentioned above. And what I haven’t mentioned yet, which you likely already know, is that it’s only $1,700 U.S. (That boggles the mind!) Still, I strongly encourage you to rent it first. Lensprotogo.com is my go-to rental house, and they were kind enough to loan me the camera for this review. (Shipping is included in their rates, every order ships in a Pelican case, and the code x180 will give you a 10% discount.) Whoever you use, there’s no reason you can’t invest the time and money to at least try this camera out.

I encourage you to do more research. Hopefully this article will provide some insight, fill in the knowledge gaps, and make your exploration all the more effective.


Looking for that "Yum!" Factor

By Jim Scherer

Try this exercise: Close your eyes and imagine something really, really delicious, something you crave. Write down the specific attributes of that mental image. Then cross out everything on your list that relies on senses other than sight. Whatever you’re left with, those are the things a food photographer has to work with in making an effective image.

What’s on your list? I come up with color, texture, glisten, moisture, and so on. Those are the obvious ones, but there are others. Point of view (where is your eye?), scale (how close are you?), composition (does your eye know where to look?).  What about implied motion—like a drop about to fall, or anything just on the verge of happening. Light itself can sometimes imply motion. Let’s go on … what else is on the list? Mood, which is a broad term, can definitely affect whether something is mouthwatering. Mood comes from a combination of lighting, camera point of view, color, and surroundings, surfaces, and props. 


In the photo above, bright clean light conveys a happy morning while the softened butter patty says the pancakes are still hot. Each blueberry looks perfectly round, so the viewer knows the skin will pop in the mouth and burst with sweet juice. Even the syrup is ladled on in just the right amount, not too much and not too little. ©Jim Scherer

And of course, there is styling. The presentation of the food, often done by a food stylist,  is a huge factor in appetite appeal. Think of a muffin whole, compared to a muffin broken open with butter melting and some crumbs on the plate. This line of thought leads to lots of new things for our list … seeing inside something, seeing the bits and ingredients, and presenting a plate with the invitation to dig in.


This photo of coffee-rubbed steak speaks to getting the look of authenticity right. The amount of juice and rub on the cutting board matches the cut pieces of steak, and the antique cutlery adds mood to the image. The overhead angle is bold.  ©Jim Scherer

Here’s yet another aspect of getting appetite appeal—authenticity. Does what you’re looking at look fake? Does it look too perfect? Is it a Disney World simulation of some ideal? These are all unappetizing. Looking real means seeing the imperfections, the personality, and celebrating the fact that a dish looks different every time it comes out of the oven.

As you continue developing your food photography, begin considering all these factors. That’s what will make your viewers say, “Yum”!

When it comes to acquiring the skills to capture food photography that gets you noticed, you must start with the basics and build from there. Here are 13 simple steps to becoming a better food photographer.

13 Ways to Improve Your Food Photos  

  1. Use a tripod or camera support when possible
  2. Adjust (or correct) your white balance accordingly
  3. Avoid on-camera flash at all costs!
  4. Simplify your composition and decide where you want the eye to look
  5. Come in closer, and sometimes lower!
  6. Pay attention to your background
  7. Learn to use your camera in manual mode
  8. Shoot raw, and learn how to optimize each file for web use
  9. Shoot tethered, unless you are roaming around a market or other location
  10. Try using some silver and white reflector boards, as well as black cards, to modify and shape your light
  11. Buy a simple light source to supplement your window light
  12. Keep shooting, and be sure to take shots from alternate angles, so you can self-critique afterward and learn from your mistakes and successes
  13. Take a workshop


Jim Scherer has been the photographer of record for the food pages of the Boston Globe Sunday Magazine for the past 32 years, in addition to scores of other commercial projects. His work was recognized in the American Society of Media Photographers’ Best of 2014. See more of his work at jimscherer.com.

October 24, 2014

Excerpt: "Shoot Macro: Techniques for Photography Up Close"

This is an excerpt from "Shoot Macro: Techniques for Photography Up Close" (Amherst Media), $27.95

Acclaimed photographer and Professional Photographer contributor Stan Sholik takes you deep into the lighting and shooting techniques used to produce otherworldly images of tiny subjects. Step-by-step techniques show you how to choose and use the right equipment, solve common problems, and make best use of the specialized equipment designed for this technically demanding genre.



A rose is such an over-photographed subject. A new macro photo must add a little something unique to the way we see it, or it’s not worth doing. This is my take on it from a few years ago when Lensbaby first introduced their macro lens kit.


For those unfamiliar with Lensbaby, it’s an optic system that replaces the lens on Canon and Nikon DSLRs, many Four Thirds cameras, and PL-mount video cameras. There are several lens bodies that allow you to shift the focus point while blurring other areas of the image, and a range of optics that mount into the bodies to simulate various camera lenses. There are also lens accessories, such as the macro lens kit I use as well as the newer macro converter extension tubes.


For this photo I used a Lensbaby Composer body on my Nikon with the Lensbaby double glass optic, an f/8 aperture disc, and both macro close-up lenses. The double glass optic is a well-corrected, multicoated lens that is quite sharp when the Composer is not shifted. But when you manually shift the Composer, the area of sharpness moves and blurred edges appear. The amount of sharpness and blur, as well as the exposure, is determined by the lens aperture, which is adjusted by discs that you place into the Composer.

For close-up and macro photography, +4 and +10 diopters are available in a kit. You can screw these onto most of the optics, either individually or in combination.

As always, choosing the right subject is important. I found these small roses at a market when I was shopping and bought them to photograph. While I was shooting them with my Nikon macro lens and producing results that I was happy with, I thought of the Lensbaby macro lenses that I had recently acquired.


Using available sunlight, I started playing around with the macro lenses individually and in combination. With the +4 diopter mounted on the double glass optic and the f/8 aperture disc installed, I shifted the Composer quite a way off axis to create an area of sharp focus with streaks of blur around it. This is my favorite from that day. The shot works because there is enough color contrast in the red to clearly show the blur; with a single-color rose, it doesn’t work well at all.

September 24, 2014

Music Licensing for Film and Video

By Ron Dawson

There is perhaps no topic as important and contentious in the industry as the legal use of music in the production of videos, particularly event videos. Even if you or your client buys a song on iTunes, you’re not freed from the obligation to obtain proper licensing. And using copyrighted music in a client’s personal videos does not constitute fair use.

By law, in order to use a song in a film or video you need two licenses: a master use license (controlled by the record label) and a synchronization license (controlled by the publisher). The former covers the specific recording of the song you want to use; the latter covers the underlying composition. In some cases the label and the publisher may be the same entity, but in many cases they are not.

Let’s say you want to use the 2010 Haiti charity remake of R.E.M.’s classic “Everybody Hurts” in a nonprofit video you’ve made. You’d need a synchronization license from R.E.M.’s publisher (a Warner company) and a master use license from Simon Cowell’s company, which produced that particular recording.

If a song is more than 70 years old, it may be in the public domain, but you still may need a license. For instance, say you wanted to use Chris Tomlin’s recording of “Amazing Grace.” The hymn itself is in the public domain, so no sync license is needed for the underlying composition; however, you’d still need a master use license to use Tomlin’s recording. If instead you got your 16-year-old daughter to write and sing her own arrangement, you wouldn’t need any license.

For a while the record companies did not seem to mind that there were literally hundreds (if not thousands) of professionally produced wedding videos online, all with illegal use of copyrighted music. But in late 2011 they started taking wedding videographers to court and winning large settlements, so take this very seriously.

Fortunately, there is a growing number of music licensing companies that make licensing quality music easy and affordable. Keep in mind that traditional music licenses can cost many hundreds, even tens of thousands of dollars, depending on the type of film or video, where it’s played, and how it’s distributed.

There are many quality resources out there, but a few rise to the top in terms of the variety and quality of songs in their catalogs, and in particular, their connection and understanding of DSLR filmmakers. Pay close attention to the license terms such as how long you can use a song and in how many productions.

Triple Scoop Music (triplescoopmusic.com): Triple Scoop Music’s service is tuned specifically to wedding and event photographers and videographers. Many of their songs are from Grammy Award-winning artists, and you can find high-quality songs both with and without lyrics. As of this writing, their license for personal videos such as a fusion wedding presentation is only $60, perpetual. Commercial licenses range from $99 to $299.


The Music Bed (themusicbed.com): TMB has a particularly strong connection to the filmmaking industry. They have an eclectic mix of high-quality music, including some from well-known bands like NEEDTOBREATHE. Their licenses start at $49 for single-use, perpetual wedding or portrait licenses. Corporate licenses range from $199 to $399, depending on the size of the organization.


PremiumBeat (premiumbeat.com): PremiumBeat is a popular go-to site for small companies and agencies shooting commercial work. All the songs in their curated catalog are just $39.95 for unlimited use in perpetuity. None of their songs have lyrics (aside from a few with background vocals), so they may not be the best choice if you need songs to drive emotion, but for commercial work they’re hard to beat.


Marmoset Music (marmosetmusic.com): Marmoset Music has a tool on their site that allows you to search for songs by pacing, type of project, energy level, etc. Their licenses start at $99 for wedding and portrait perpetual, single use. Corporate rates start at $199 and climb to $999, depending on company size.

marmoset filter.png

Song Freedom (songfreedom.com): Song Freedom made a name for themselves as one of the first sites to provide pop songs from artists like OneRepublic and Colbie Caillat. Their rates are $49.99 for wedding and portrait single use and $199 for commercial. Their licensing is a little confusing in that they also have a corporate rate, which to me seems the same thing as commercial. Be sure to read their FAQs on the difference.


Other popular sites worth checking are AudioJungle.net and Stock20.com.


There’s one music resource on the internet that lets you use music for free under a Creative Commons Attribution 3.0 license, so long as you put proper credits in the video: incompetech.com by Kevin MacLeod. You may not find the quality of music as high as the sites mentioned above, but it’s a great resource if you need a fun silent-movie-era song or a popular classical piece. If you have a client with a small budget (or no budget), it’s a fine option.

Know Your Codecs (and other useful technical information)

By Ron Dawson

Some seemingly minor technical video details are seldom taught in workshops, but they're definitely worth learning: they can help you decide how to compress for the web, which camera to choose, how to solve that pesky editing problem, and whether or not to get that fancy new HDTV.

Decoding Codecs

Codec is short for coder-decoder (or compressor-decompressor); a codec is the algorithm used to compress large video files into something more manageable. Some of the most widely used codecs are MPEG-4 (including .M4V and .MP4), H.264, DivX, MPEG-2 (typically used for DVDs), and Apple’s ProRes.

Apple’s ProRes is a favorite among video editors because of its quality and how easily it's handled by various non-linear editing programs (NLEs). There are five popular versions of ProRes (from lowest to highest quality): ProRes 422 Proxy, ProRes 422 LT, ProRes 422, ProRes 422 HQ, and ProRes 4444.

QuickTime (.MOV) is not a codec. It’s a video format, also called a wrapper. You could have a .MOV video format compressed with H.264, one compressed with ProRes, or one compressed with MPEG-4. They all would technically be QuickTime files, but would perform very differently in NLEs.

AVCHD is a proprietary video format created by Sony and Panasonic, originally for the consumer video market. A number of years ago professional and prosumer camcorders adopted the format as well. Sony’s FS100, the Panasonic AF100, and Canon’s C100 currently all use this format.

Transcoding is converting from one codec to another. Although most NLEs can manage most codecs, many still have a much easier time handling ProRes, so many editors transcode DSLR files from H.264 into one of the “flavors” of ProRes inside a .MOV wrapper. MPEG Streamclip is free transcoding software and one of the most popular tools for this task.
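To give a concrete sense of what a transcode amounts to, here's how you might build the command line for ffmpeg, a free command-line alternative to the tools above. (ffmpeg and its prores_ks encoder are my assumption here, not something from the GH4 or the tools named in this article.)

```python
# Build (but don't run) an ffmpeg command that transcodes an H.264 .MOV
# into a ProRes .MOV. ffmpeg's prores_ks encoder profiles:
# 0=Proxy, 1=LT, 2=422, 3=HQ, 4=4444.
def prores_command(src: str, dst: str, profile: int = 3) -> list[str]:
    return ["ffmpeg", "-i", src,
            "-c:v", "prores_ks", "-profile:v", str(profile),
            "-c:a", "pcm_s16le",  # ProRes files normally carry PCM audio
            dst]

print(" ".join(prores_command("clip.mov", "clip_prores.mov")))
```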

Fields, Frame Rates & Flavors of HD

Progressive vs. interlaced: To conserve broadcast bandwidth, traditional video was interlaced. Each frame comprises two fields of alternating horizontal scan lines; played back at 60 fields (29.97 frames) per second, they blend into a solid image, giving that stark “video” look. Progressive video records each frame as one complete frame, like traditional film, which is why it looks more cinematic.

Frames per second: The frame rate (fps) you usually see quoted isn’t the actual rate. When you hear people talk about 24 fps (sometimes shown as 23.98), in actuality it’s 23.976. Here are some other values:

25 fps (PAL) = 25

30 fps = 29.97

60 fps = 59.94
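Those NTSC-derived numbers all come from the same formula: the nominal rate times 1000/1001 (a holdover from analog color television). A quick check:

```python
# Actual NTSC frame rates are the nominal rate scaled by 1000/1001.
for nominal in (24, 30, 60):
    print(f"{nominal} fps is really {nominal * 1000 / 1001:.3f} fps")
```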

Resolution Values:

Standard Definition = 720 x 480
High Definition 720p = 1280 x 720 progressive
HD 1080i = 1920 x 1080 interlaced
HD 1080p = 1920 x 1080 progressive
2K (DCI) = 2048 x 1080
Ultra High Def, aka UHD, aka “fake 4K” = 3840 x 2160
Cinema 4K (DCI) = 4096 x 2160

There is lots of discussion and debate about whether or not it makes sense to shoot in 4K. A lot of factors go into making that decision: your intended audience, how they will view the video, the kind of story you’re telling, etc. As always, make the best decision you can afford given the resources at your disposal. Some of the most powerful and poignant videos I’ve seen on the internet were shot on a Flip Video camera.

Video SEO Myth vs. Reality

By Ron Dawson

You’ve prepped, shot, and edited your video. It’s ready for prime time. But the question is, How do I host it? YouTube? Vimeo? Do I upload it to my website? And what about video SEO? Where does that come into play? This article will answer those questions, and likely challenge you a bit, too. 

Myth #1: Good Video SEO Is Getting As Many Views as Possible

I see a lot of video producers writing blog posts and telling clients that video SEO is about getting as many views as possible for your video and racking them up on YouTube. That’s well and good, but it isn’t SEO. SEO stands for search engine optimization. It’s about optimizing search engine results for the people searching and for the sites being searched. You want the right people to find you and your business via organic search results. Getting lots of views may be a good ego boost, and it definitely can help with brand recognition, but it doesn’t necessarily translate into good SEO. Are those views leading people to your site? Are they converting into business? This is not to minimize the positive effect of lots of views. Just don’t confuse it with SEO.

Myth #2: Putting Your Videos on YouTube Increases Your Search Rankings

Your page’s ranking is based on a range of factors: relevance of content, keywords, links back from other sites, and so on. While a relevant video can help boost search results overall (rich content like video is a plus for SEO), all things being equal, a YouTube video won’t rank your page any higher than a video hosted anywhere else.

Myth #3: YouTube Is Good for SEO Because It’s the Second Largest Search Engine

This is perhaps the most frustrating myth. You often hear people proclaim that because YouTube is the second largest search engine (second only to Google), that is reason alone to put all your videos on YouTube. The problem with that thinking is that people aren’t searching for promotional videos or show reels on YouTube. They’re looking for education or entertainment. If they need a product or service they’ll start their search on Google (or Bing or Yahoo! or some other popular search engine). The goal of a good video SEO strategy is to get the search engine result to link to your page, not YouTube.

What is Good Video SEO?

The primary objective of an effective video SEO strategy is to maximize traffic to your website through the effective production and distribution of video. You accomplish this in three ways:

  • Host your video on a self-hosted, professional platform (e.g., Wistia or Vimeo Pro)
  • Use video sitemaps to register your videos with search engines
  • Post the right kind of content to YouTube to drive traffic that is already there back to your site.

Using a service like Vimeo Pro or Wistia, in conjunction with video sitemaps, will allow you to establish the web pages on which you post your videos as the canonical version of the video. Basically, all that means is that when search engines see your video, they consider those web pages as the “owners” and will link people there when those videos show up in search engine results. (How to create a video sitemap is beyond the scope of this article. Wistia will create one automatically for you. If you need to create one manually, just Google it and read Google’s support page on the topic.)
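For the curious, a minimal video-sitemap entry looks something like the following. The URLs and titles are placeholders, and this is a hedged sketch of Google's video sitemap format, generated here with a few lines of Python rather than written by hand:

```python
# A minimal Google video-sitemap entry (placeholder URLs and titles).
# The page in <loc> is what search engines treat as the canonical home
# of the video.
def sitemap_entry(page_url: str, thumb_url: str, title: str, desc: str) -> str:
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
  <url>
    <loc>{page_url}</loc>
    <video:video>
      <video:thumbnail_loc>{thumb_url}</video:thumbnail_loc>
      <video:title>{title}</video:title>
      <video:description>{desc}</video:description>
    </video:video>
  </url>
</urlset>"""

print(sitemap_entry("https://example.com/films/my-film",
                    "https://example.com/thumbs/my-film.jpg",
                    "My Film", "A short film hosted on my own site."))
```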

Using sitemaps can also increase the chance of you getting a rich snippet video thumbnail in your search results. People are more likely to click on a rich snippet than a plain link. And if you establish your site as the canonical owner of the video, search results for your video will rank higher than their YouTube counterparts. (see image).


The short film about Jerry Ghionis I produced (see top result) has a rich snippet, links directly to my site, and ranks higher than the YouTube version of the video I uploaded.

So, that’s how you drive traffic using video from search engine results. But you may be wondering: “What about all those people already on YouTube?” Let’s address that.

A More Effective YouTube Strategy

As I mentioned above, people conducting searches on YouTube are looking for educational or entertaining content. So here’s a small list of ideas of how you may use YouTube to improve your SEO:

  • A wedding videographer creates an online video podcast or show interviewing local vendors, giving wedding tips, etc.
  • A family portrait photographer starts a series giving moms tips on how to use their DSLRs to get great photos of their kids.
  • A senior portrait photographer creates a YouTube channel giving seniors advice about prom fashion, makeup tips for photo sessions, and tips for photographing groups of friends.
  • A commercial video producer creates a series of tips on how to effectively use video in your marketing strategy.
  • A creative brand or ad agency creates a series on how to effectively build a brand or use social media.

You can spend hours doing research on proper video SEO. Save yourself the time and just follow the tips in this article: you’ll improve your SEO, impress your clients, and thank me later.

PrinTao8: Take the Frustration Out of Printing

By Ellis Vener

LaserSoft Imaging is a Germany- and Florida-based company best known for its high-end SilverFast scanning software. Recently the company expanded its range of products to include printing software that dramatically simplifies the entire printing process: PrinTao 8. Unlike most printing interfaces, PrinTao 8 is blissfully straightforward to use. Its secret is that it does something really terrific: it completely bypasses the computer’s operating system and the printer’s drivers to communicate directly with the printer. Removing the operating system as middleman not only simplifies the workflow; it also keeps you away from any issues that changes to an OS might introduce and, at least to my eyes, yields purer and more vibrant color in your prints.

How it works is similar to a raster image processor (RIP). A raster graphics image is digital data stored as a bitmap of pixels, the form used both for rendering on a display and for printing from digital files. Almost all of the applications we use to process images are raster based, including Adobe Photoshop and Corel Painter. Adobe Illustrator, on the other hand, is a vector-based program.

Software-based RIPs handle color management to the nth degree, along with nesting (printing multiple images on a single sheet of paper), resizing (interpolating), rotating, and rearranging images to make the most efficient use of the space on a page, and more. But because RIP software is designed primarily for the mass printing market rather than for individual photography studios, licenses are tremendously expensive.

PrinTao 8 does almost all of the things RIP software does, but it is not a traditional RIP. Before you read further, there are two caveats: the first is that PrinTao 8 is currently available only for Apple OS X; the second is its price.

Compared to the invisible cost (because it is built into the computer’s OS and image processing software) of an OS + print driver combination, PrinTao isn’t cheap. For a desktop model Epson inkjet printer the price is $99 to $299 (depending on the model), and for various Epson Stylus Pro and Canon iPF wide-format printers it’s $399 to $699. Pricey, yes, but still nowhere near the cost of a ColorByte, Fiery, or Wasatch RIP that runs into the thousands of dollars. 

Why should you consider using it? The simple answer is that it takes the frustration out of printing. The interface could not be easier to understand and navigate. Right now it works with a handful of printers, mostly big machines designed primarily for roll printing. As of mid-July 2014 the list is split equally between Epson and Canon. On the Epson side there are the desktop Stylus Photo R2880 and R3000, the really big desktop Stylus Pro 3880, 4880, and 4900 models, and the wide-format Stylus Pro 7890, 7900, 9890, 9900, and 11880 models. No Canon desktop models are supported, but virtually the entire line of big Canon imagePROGRAF printers is. I tested it with a 12-ink, 24-inch wide-format Canon imagePROGRAF iPF6300.

Whether you do your own printing or have someone else in your studio do it for you (even a new intern), you’ll be up and running in a few minutes.

On the start page you choose the printer, paper, print quality, paper source (roll or sheet), and size. Once you have completed these five basic settings, click the button labeled “Create” and the main window opens. Custom page sizes are chosen on the Preview page: in the Print tab, specify the paper source as Cut Sheet, choose “Custom Size” as the paper size, and enter the dimensions. After you have done this once, you can skip the process next time by choosing a size from the “Recent Documents” menu or using “Use Last Settings.”

What about color management and profiling? Because PrinTao 8 bypasses the OS, it doesn’t have access to Apple’s ColorSync profile library; PrinTao profiles reside inside the program. Starting on the Start-Pilot page, after choosing the printer model, click the “Add print media” button in the “Print on” tab. For the Canon iPF printers, LaserSoft currently has profiles available for papers from Breathing Color, Canon (default), Canson, Hahnemühle, Innova, Red River, and Tecco. Choose the profile for the paper you want to use and click the install button.

[Image: PrinTao 8 start page]

Don’t see the paper you want to use? You’ll need to add a custom profile. Start by specifying an already installed base paper material that most closely matches the paper you’ll be using, and in the ICC profiles menu choose one of three quality settings: Fast, Normal, or High Quality. At this point the process gets technical: under “Advance Media Control,” specify the settings you want for the five options listed.

One of the delightful things about PrinTao 8 is that you don’t need to resize an image before sending it through the printing cycle. Import a full-resolution image, then change the print size to fit your needs. And here is the genius part: PrinTao 8 shows you, on a green, yellow, and red scale, the optimum size based on the pixel dimensions of the file and the optimum resolution range of the printer.

All printers interpolate image resolution: Epsons print at 360 dpi and Canons at 300 dpi. A printer will interpolate the data to fit any size print you want to make, but it will still print at either 300 or 360 dpi. There is, however, a range around those resolutions in which the printer does its best job of interpolating, and PrinTao indicates that optimum range with green. With the Canon iPF printers the range is 200 to 1,200 dpi. Once you get into the red zone (199 dpi and below) you’ll start to see pixelation, and in the yellow zone (1,201 dpi and above) you might start seeing unexpected, unwanted artifacts. This one feature really is genius and, for me, along with the general ease of use and really beautiful color and black-and-white printing, makes PrinTao 8 worth it.
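The arithmetic behind that color scale is simple: effective resolution is the pixel dimension of the file divided by the print dimension in inches. A small sketch, using the 200/1,200 dpi Canon iPF thresholds cited above (the function names and the 16 MP example are illustrative, not anything from PrinTao itself):

```python
# Effective print resolution and a PrinTao-style zone classification.
# Thresholds match the Canon iPF range discussed above (200-1,200 dpi);
# an Epson would center its range on its native 360 dpi instead.

def effective_dpi(pixels: int, print_inches: float) -> float:
    """Resolution the printer must interpolate from, along one dimension."""
    return pixels / print_inches

def zone(dpi: float, green_min: float = 200, green_max: float = 1200) -> str:
    if dpi < green_min:
        return "red"      # too little data: expect pixelation
    if dpi > green_max:
        return "yellow"   # heavy downsampling: possible artifacts
    return "green"        # optimum interpolation range

# A 4,608-pixel-wide image (roughly a 16 MP camera) printed 24 inches wide:
dpi = effective_dpi(4608, 24)
print(dpi, zone(dpi))  # 192.0 red -- just below the green range
```

In other words, that 16 MP file is fine for a 20-inch print (about 230 dpi, green) but falls into the red at 24 inches, which is exactly the judgment PrinTao’s scale makes for you at a glance.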

[Image: PrinTao 8 main page window]

September 2, 2014

September 2014 Issue

