Have you ever seen a live stream where the presenters don’t know which camera to look at? It looks a bit amateurish!
When you’re watching the news, the newscaster knows where to look because he or she is trained to look for a tally light above the camera that is currently live. When the tally light switches to another camera, the newscaster knows to look in that direction instead.
Unfortunately, not everyone has the practice and training of professional actors and news anchors. And without that training, keeping track of tally lights and making transitions between cameras look seamless and natural is difficult.

Presenters aren’t looking at the live camera; they’re looking off-screen at the camera to their left
Does this mean you are stuck with presentations like the example above? Ones where the presenters don’t know which camera to look at and are constantly switching between cameras, resulting in them looking out of the frame half the time?
No, you’re not!
Get better results with a single camera
You can avoid this problem, even without professional actors and tally lights.
If I were to re-create the event shown above, I would use just one camera and I would create three different shots (where the presenters are always looking at the camera):
- A wide shot of both presenters (Alex and Anthony)
- A cropped shot of Alex
- A cropped shot of Anthony
Using Pearl’s multi-source layout creator, I would put these together with an input capturing the slides from this presentation. For the wide shot, it would look like this.
Notice in this layout, both presenters are looking at the one camera in front of them. Having one camera is easier for them, because they always know where to look, and it’s also easier on the viewer, because the viewer can pay attention to the discussion without being distracted by wondering what’s so exciting off-screen!

Presenters are now both looking at the (only) one camera in the room.
Remind me why I need multiple views?
Well, you could, of course, just use this static view from the camera. There’s nothing wrong with that! But if you’re reading this post, you probably already like the idea of creating dynamic video by switching the camera view while you’re streaming and recording.
Using Pearl’s front touch screen, I can select and switch live between the layouts I created. This means that as the presentation progresses, I can choose the appropriate camera view – either the wide shot of both presenters or a cropped shot of the current speaker.
But… but… doesn’t cropping mean bad quality?
The short answer is not always.
The longer answer is that cropping and upscaling can mean bad quality. For example, cropping and upscaling causes this picture of a flower (left) to appear pixelated and a bit boxy. Depending on the amount of upscaling you’re doing, it can vary from looking OK to looking terrible. So be careful with upscaling.
Why does upscaling look bad? Upscaling is problematic because computers can’t insert more detail than was originally in the picture, so they approximate it, and it looks bad to our eyes. However, the reverse is not true. Computers have very good algorithms for reducing the size of an image. So downscaling doesn’t typically suffer from the same issues as upscaling.
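If you want to see why in miniature, here’s a toy sketch (not how Pearl’s scaler actually works) using a one-dimensional row of pixel values. Upscaling can only repeat the samples it already has, while downscaling averages real detail into fewer samples:

```python
# Toy illustration: nearest-neighbor upscaling vs. box-filter downscaling
# on a 1-D row of grayscale pixel values.

def upscale_nearest(row, factor):
    """Upscale by repeating each pixel `factor` times -- no new detail appears."""
    return [px for px in row for _ in range(factor)]

def downscale_box(row, factor):
    """Downscale by averaging each group of `factor` pixels."""
    return [sum(row[i:i + factor]) / factor
            for i in range(0, len(row), factor)]

edge = [0, 0, 0, 255, 255, 255]  # a sharp edge in the source image

# Upscaling 2x just duplicates pixels, so the edge turns "blocky":
print(upscale_nearest(edge, 2))  # [0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255]
# Downscaling 2x averages neighbors -- detail is summarized, not invented:
print(downscale_box(edge, 2))    # [0.0, 127.5, 255.0]
```

Real scalers use smarter filters than these, but the asymmetry is the same: shrinking an image discards information gracefully, while enlarging one has to guess.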
So let’s do some simple math
For my example, let’s assume I’m streaming with a frame size of 1280×720 (720p). That is to say the stream is 1280 pixels wide by 720 pixels tall. Since I’m always maintaining aspect ratio while I scale (each component is always 16×9), I’ll do the math for the width of my component parts. The relative changes in height follow naturally.
In the diagram below you can see that I’m using a multi-source layout where the slide presentation takes up about 2/3 of the width of the frame size. Two thirds of 1280 is approximately 850 pixels, leaving only about 430 (1280 minus 850) pixels of width for my camera view.
Given that the HDMI camera source is also 1280×720 (720p), without any cropping at all I’m already downscaling the camera feed to about one third of its original size.
If I crop my camera feed to show only one of the two presenters (see the example below), I have about 850 pixels of content (2/3 of 1280 pixels), which is almost twice as much as I need for my space. So I’m not going to get grainy blown up pixels, because I’m still scaling down, not up.
In fact even if my live stream was at 1080p instead of 720p, I would still end up downscaling the camera feed! [At 1920×1080 I would have 1280 pixels for my slides (two thirds of 1920 is 1280) and 640 pixels wide for the camera view (1920 minus 1280 is 640). So the 850 pixel cropped camera view would still be downscaled to fit in the 640 pixels of space.]
As you can see, when the camera takes up only part of a multi-source layout and is therefore being downscaled, cropping the camera view doesn’t impact quality.
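The arithmetic above can be written out as a quick script, using the exact values (the post rounds 853 to “about 850” and 427 to “about 430”):

```python
# Worked version of the layout math above, at 720p and 1080p.

frame_w = 1280                       # 720p stream frame width
slides_w = round(frame_w * 2 / 3)    # 853 px for the slides ("about 850")
camera_box_w = frame_w - slides_w    # 427 px left for the camera view ("about 430")

camera_src_w = 1280                  # the HDMI camera feed is also 720p
cropped_w = round(camera_src_w * 2 / 3)  # ~853 px of content after cropping to one presenter

scale = camera_box_w / cropped_w     # < 1.0 means we're still downscaling
print(f"{cropped_w} px cropped feed -> {camera_box_w} px box (scale {scale:.2f})")

# Same check if the stream were 1080p instead:
frame_w_1080 = 1920
slides_w_1080 = frame_w_1080 * 2 // 3            # 1280 px for the slides
camera_box_1080 = frame_w_1080 - slides_w_1080   # 640 px for the camera view
print(f"scale at 1080p: {camera_box_1080 / cropped_w:.2f}")  # still < 1.0
```

In both cases the scale factor comes out below 1.0, which is just another way of saying the cropped feed is still being shrunk to fit, never enlarged.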
Go forth and stream!
Now it’s your turn! Go out and experiment with this. If you have only one presenter, experiment with non-standard aspect ratios for the presenter – why does it have to be 16×9? Why not square? And even if you have multiple cameras, you can still use cropped views on some of those cameras to keep your presentation engaging.
A really simple rule of thumb: If your slides take up three quarters of the width of a multi-source layout, and your incoming camera feed is the same size as your resulting stream frame size (e.g. both are 720p or both are 1080p), you can safely crop away up to three quarters of the width of the camera view without any quality degradation. (Or, if your slides take up 2/3 the width of the stream frame, you can crop away 2/3 of the camera view; if they take up half, you can crop away half of the camera view; and so on.)
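If you’d rather check than remember, the rule of thumb reduces to one small (hypothetical) helper: the scale factor applied to the cropped camera view stays at or below 1.0 as long as the fraction you crop away doesn’t exceed the fraction of the frame the slides occupy.

```python
# Rule-of-thumb checker: is a given crop still downscaling?

def camera_scale(frame_w, camera_src_w, slide_fraction, crop_fraction):
    """Return the scale factor applied to the cropped camera view.
    <= 1.0 means downscaling (safe); > 1.0 means upscaling (risky)."""
    box_w = frame_w * (1 - slide_fraction)          # layout space left for the camera
    content_w = camera_src_w * (1 - crop_fraction)  # camera pixels remaining after the crop
    return box_w / content_w

# Slides take 3/4 of the width; crop away 3/4 of a same-size camera feed:
print(camera_scale(1280, 1280, 0.75, 0.75))  # exactly 1.0 -- right at the limit
# Crop away more than the slides' share and you start upscaling:
print(camera_scale(1280, 1280, 0.75, 0.80))  # 1.25 -- now enlarging the feed
```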
As long as you’re scaling down, or only scaling up a little bit, your quality will be fine!