The opposite use of Facebook

I’m now on the other side of media. After spending so many years working in PBS and in public media, I just began a new gig in commercial television. Consolidation, concentration, and economies of scale are what has been, and still is, going on. I guess the thought is that with overall viewership down, combining operations to reduce inefficiencies and cross-promoting programs through other outlets (transmedia) will bring the cost per viewer down to something acceptable. Social technologies are used on both the commercial and non-commercial sides to support “audience engagement,” promote “the conversation,” and harvest sources. In one instance, Facebook “Likes” and Twitter “Follows” further qualify traditional ratings; in another, they drive clicks to web properties. To what end? Beats me – could the rationale be to keep increasing the pennies generated by “other media”?

All this “why” aside, broadcasters are driving viewers to their Facebook pages and Twitter streams. That’s the goal. Up the likes. Up the followers. Pour the bread into the webpage and hope for a better tomorrow. I kinda understand this and have learned a good many techniques and underlying principles to help reach this goal. The idea is that development on the net will pay off eventually, benefiting core viewers the most for now and the brand later. But here is a different objective: use this stuff to increase the ratings. To drive more viewers to the show. Huh? Right?

OK. Let’s give it a shot. Why tune in? How do we make appointment TV more relevant? It’s asking a lot these days to have viewers stop what they’re doing and tune in to your show at a specific time. This is further complicated when the show isn’t live. Two thoughts here. One: “Hey friends, look at me, I’m on TV!” – but you have to watch at a specific time and day, because it’s fleeting and has little value beyond now. And two: host a conversation during the airing so people can gossip with me about it. Add to this a less throwback goal: let’s build a network of sources and develop a citizen corps.

In our Knight Foundation News Challenge grant proposal, we refer to one of the techniques we’re developing as “Enhanced Live Blogging.” I’ll describe just how and why these techniques should be included in your production design to streamline workflow and create more material to serve and engage your constituents.

Let’s start with that term, constituent. The conversation about our lives – what’s new – was going on before TV, newspapers, and the internet. As media evolved into a convenient collection of information for our consumption, it subtly promoted a sometimes unspoken agenda by directing what our focus should be. The notion of Eyewitness News – unless “we” witnessed it and share it with you, it isn’t news – is based on the idea that there are limited resources to gather information. A gatekeeper must prioritize the stories and decide which are worthwhile to bring to you, the audience. The wealth of information now available to all people in all areas turns this notion around. A passive audience that absorbs only what is placed in front of it is fading away. Many take an active role in their information consumption and share it in conversations. This sharing takes many forms. Some comment on blogs, others call in to talk shows, while others still write letters to the editor. Some just shout at their TVs. A real digital conversation can now occur; we’ll talk more about this later.

But what do we call this new relationship? Not an audience, not a user, not a pure content consumer. The relationship has changed. The media is no longer a lecturer standing at the front of the auditorium reading notes to listeners copying them into their notebooks. It’s more like a study group where participants share their discoveries to further the dialog. So if we are engaged with others and providing the mechanisms to help further their conversation, perhaps it is constituents that we serve.
The enhanced live blogging concept was designed for live programs, but can also be used for the broadcast premiere of pre-recorded events. Live blogging is a written play-by-play account of an event from someone at the event as it happens; enhanced live blogging adds video, audio, and transcription.

Four or more unmanned cameras are positioned around a “talking head” discussion. They are switched live and sent to be broadcast, streamed, or recorded. The product of the switched output is a traditional public affairs program, which can be used in long form for audio and video podcasts. Each camera is focused on an individual participant in the discussion. These “quote” cams, along with the audio feed of the conversation, are isolated and sent along to blogging stations. The live blogger selects relevant statements by marking in and out points. The product of the blogging station is a branded video clip of the statement and a transcription of it, posted for distribution. A shortened link that points to the clip is generated and inserted into the Twitter template, where the blogger writes a brief headline. A tweet containing the quote, the link to the video, and branding information is sent. At the same time, RSS feeds are updated and text messages are sent to subscribers. A transcription of the quote with the link is also inserted into the topic’s “chat” stream, where others continue the conversation. The relevant points made in this chat stream are curated by an additional blogger or social avatar, who participates in the televised conversation by introducing chat points to the televised guests.

The live blogger position takes the place of the camera operators. The ability to quickly determine quotable or relevant statements is needed instead of knowing how to operate a camera.
The live blogger position can be virtualised by sending the isolated audio and video streams to remote individuals who have demonstrated the skills necessary to operate as an enhanced live blogger.
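The tail end of that pipeline – headline, shortened link, and branding assembled into a single tweet – can be sketched in a few lines. Everything here (the function and field names, and the way the 140-character cap is handled) is an illustrative assumption for the example, not the grant’s actual code.

```python
# Sketch of the blogging-station output step: given a marked clip,
# assemble the branded tweet text so the link and brand tag always fit.

TWEET_LIMIT = 140  # Twitter's character cap

def compose_tweet(headline, short_link, brand_tag):
    """Build a tweet: blogger's headline + shortened clip link + branding.

    The headline is truncated (with an ellipsis) so that the link and
    the brand tag are never cut off.
    """
    suffix = f" {short_link} {brand_tag}"
    room = TWEET_LIMIT - len(suffix)
    if len(headline) > room:
        headline = headline[: room - 1].rstrip() + "…"
    return headline + suffix

print(compose_tweet(
    "Senator: broadband access is a civil-rights issue",
    "http://bit.ly/xxxxx",
    "#NJNLive",
))
```

The same clip metadata can feed the RSS update and the SMS blast, so the blogger marks in/out points once and every outlet is filled from that single action.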

12 media predictions for 2012

Predictions are always the most interesting AFTER time has passed. These are certainly worth considering if you are in the media space.

1. Social TV startup activity will double
2. Nielsen will buy a social TV data company
3. Get ready for a nasty battle over the second screen
4. Twitter will want to watch TV with you
5. Facebook will increasingly power social TV recommendations
6. TV check-ins will become passive activities
8. Social TV companies will launch (lots of) connected TV apps
9. Netflix will get social, but it won’t dazzle
10. Google TV will make a comeback
11. Start saying good-bye to TV remote controls (finally!)
12. Forget the last channel, TVs will start up smart with social guides

source :


#PU27 was one of the first “television you can talk to” experiments: a PBS tie-in with the Nova series Making Stuff. Princeton University organized a technology exposition for high school students, with exhibitors promoting the idea of studying engineering. It involved doing a live broadcast on the internet using UStream and having viewers direct the program. The Twitter hashtag PU27 (#PU27) was used to communicate direction to the production crew and for viewers to discuss the exhibits.
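As a sketch of what “communicating direction through the hashtag” amounts to in code: a filter that separates direction tweets from ordinary discussion. The cam-number convention and the verbs in the example are assumptions for illustration, not the exact syntax used during the broadcast.

```python
# Illustrative parser for hashtag-directed camera instructions.
import re

# Assumed convention: "#PU27 camN <instruction>"; anything else is chatter.
DIRECTION = re.compile(r"#PU27\s+(cam\d+)\s+(.*)", re.IGNORECASE)

def parse_direction(tweet):
    """Return (camera, instruction) if the tweet is a direction,
    else None (it is ordinary discussion of the exhibits)."""
    m = DIRECTION.search(tweet)
    if not m:
        return None
    return m.group(1).lower(), m.group(2).strip()

print(parse_direction("#PU27 cam2 zoom in on the robot arm"))
# A plain comment like "loving this demo #PU27" yields None.
```

In practice a person still watched the stream for malformed or conflicting direction; a filter like this only skims off the unambiguous cases.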


Video Shooting: The iPhone/iPod touch is very difficult to mount and carry while shooting. Less so in flipped-cam mode (self shooting), but still awkward, especially for left-handed operators. The units were gaffer-taped to a glass window for some shots. For oblique angles, the unit was rested on a cup or can angled against the corner ledge of the window and then taped from the window frame to the device. In this way the viewfinder/display screen is used to ballpark the shot. A laptop pointing to the stream of the iPod touch was only partially useful as a viewfinder because of the excessive lag between the camera and the display.

The angle is pleasantly wide and focus remains constant, though, as you would expect, depth of field is sacrificed a bit.

Audio: Sound recording is good to excellent when the operator is speaking or the camera is close and in front of the subject. Once the sound originates further away, or is a mix of more than one sound source, external microphones are needed. Rich has done some research in this area. Once the 4-conductor 3.5 mm plug issue is resolved for input and output on the iPhone/iPod touch, a mini Shure mixer with conventional mics will be explored. The 4-ring assignments for Android phones are thought to be different from the iPhone configuration; this should be confirmed. Additionally, less expensive, wireless, high-quality audio needs more research. Bluetooth-enabled units may be worth exploring.


The IP application iWebcam was not purchased for this experiment. It, or a similar tool, is required to make the most of limited-bandwidth situations. 3.1.11: Got it – not impressed – HUGE lag P-P; will retry later. Perhaps a VLC solution makes more sense here. Interesting side note: two or more iPods (and I assume iPhones) can be cloned along with their installed apps. This may not be very practical for a personal comm device, but it is ideal for production. That is, using two cloned devices, when device one draws down its battery, device two can be turned on and pick right up using all the apps, accounts, and configurations of the drained device 🙂

IP connectivity to a laptop would allow for more responsive updating of images, so the laptop could be used as a remote viewfinder. Chicken of the VNC (a VNC client for Mac) is one way to accomplish this, but it has not been updated for the newer iOS. Even without robotic articulation (pan/zoom/tilt), remote access to the device via IP could be very useful. A laptop set beside the camera can serve two functions. First, assuming independent broadcast from the camera (which defeats the bandwidth conservation spoken of earlier), functions like initiating the broadcast, audio mute, and channel selection are more easily done remotely without disturbing the shot composition. Second, a producer at the site, using the laptop, can communicate via text with users, supplying supplementary information not seen or heard by the imaging unit.

Without the DSI/Newssharks there was no IP video. IP video is critical under limited bandwidth, because only the selected image and audio sources are transmitted to the users.

Control Room

It’s still not clear whether the UStream “Pro” producer upgrade allows switching between multiple cameras that are not hardware-connected. After Ed discovered the multiple channels on a single UStream account, it seems the Pro version might allow switching much like the crude router we used on the NJN website.

VidBlaster is a software-based control room that includes graphics for titling, sound, animation, playlists, screen-capture transmission, and IP-based cameras. Running this application on a Toaster/TriCaster, or in a cascading configuration, has some interesting implications. Used in combination with Skype, ooVoo, or other conferencing applications and more traditional switching devices, the mashup makes for an ad hoc transmission facility with routing as well as switching. In other words, more than just a control room in a box – a broadcasting facility in a box. Lessons learned using this equipment will be valuable as more mobile assets are integrated into the traditional plant.

VidBlaster needs the free Adobe Flash encoder to upload to UStream. I was unable to get it working this way, but saw that UStream Producer recognized it. One way or the other, this combo looks like it will work. 3.1.11: The Adobe Flash encoder is now working fine and can send directly to the NJN streaming account. Has also been used in tandem with

Front and back channel communications.

Definition – Front channel communication includes any public means to direct or comment on the live event. One example is a teacher asking a person at a remote site to focus on an object and make a query: “What’s that red thing on the right – can the presenter please explain to us what it’s used for?” Another example is a student commenting on the type of apparatus being used to shoot the image: “Hey, are you guys using an iPhone to shoot this? It looks like my setup.” Yet another example of front channel communication: “I was at that site earlier and it’s not as cool as it looks – anyone going for burgers after this is over?”

Back channel communication includes private group messaging to update participants on ongoing situations. It may include camera direction, location information, and timing information. This can include pre-production information, production information, and after-production summaries.

Application – Front comm was only somewhat successful. The plan was to funnel tweets from Twitter, along with UStream chat, into a more universal chatting tool, Mibbit. This stream of information would be filtered (manually or automatically), with relevant information being routed to the back channel. Cam1 would get the question about the red thing in the example above, but not the comments about the burgers. Funneling all public communication into a single channel may be a bit difficult to follow for the uninitiated, but in time participants should get used to it.
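The obvious cases of that filtering – the burgers stay public, Cam1 gets its question – are simple enough to automate. A minimal sketch, with the tag syntax and channel names assumed for illustration:

```python
# Route messages from the merged public stream: CamN-tagged direction
# goes to that unit's back channel, everything else stays on the
# public front channel.
import re

CAM_TAG = re.compile(r"\bCam([1-4])\b", re.IGNORECASE)

def route(message):
    """Return ('cam1'..'cam4', message) for tagged direction,
    or ('front', message) for general conversation."""
    m = CAM_TAG.search(message)
    if m:
        return f"cam{m.group(1)}", message
    return "front", message

for msg in ["Cam1 what's that red thing on the right?",
            "anyone going for burgers after this?"]:
    print(route(msg))
```

A human moderator still has to catch direction phrased without the tag; the automatic pass only reduces the volume the moderator must read.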

Backchannel communication was not handled well at all. A mishmash of telephone (private lines and hotlines), email, mailing lists, and Doodle were all attempted, with poor results. A common, secure, and consistent way to keep all those involved informed is crucial. As we learned, this is particularly important when weather is a factor. A scheduling function is a useful feature, along with mobile links and the ability to distribute the information to Outlook or other calendars with opt-in email notifications. The ability to paste directions, pictures, sounds, even video along with chat is something we’ll want to have available. Once the right tool is found, we must be sure that everyone involved knows how to get to it and can use it. It will also serve as a convenient way to document our issues and solutions. The ability for everyone to look to one definitive source for information is even more critical when we have participants in different groups.

During the event itself we need to be able to communicate with each other easily and consistently. This has to be mandated by the producing group. We should strive to use a tool that transcends individuals’ tool-sets. Not everyone uses Outlook or an iPhone, but a message about a change in plans has to be universal.

The original plan called for an individual (or individuals) to parse the direction and questions tagged with Cam1 (2, 3, 4) from the pool of information and resend them to the back channel for the producer of the unit to act upon. A well-versed camera operator can take direction while maintaining composition and focus, and possibly monitoring audio levels. To have them also ask questions and follow the action might be too much to ask. Therefore a unit producer will need to carry a device for communication. For smaller or other events, a single camera-operator/producer may be able to handle everything, but a system that spells out acceptable uses will need to be prepared.

Work has begun on building a chat bridge that filters messages and supports Twitter and other texting platforms along with Mibbit. IRC (Internet Relay Chat) is a mature, stable, and feature-rich environment in which to develop our application.
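For the IRC side of the bridge, relaying a tweet or text message into a channel reduces to framing it as a PRIVMSG line per RFC 1459. A sketch, with made-up channel and source names; real bridge code would also handle registration (NICK/USER) and PING replies:

```python
# Frame a message from an outside platform (Twitter, SMS) as a raw
# IRC protocol line. IRC lines are CRLF-terminated and capped at
# 512 bytes including the terminator (RFC 1459).

def to_privmsg(channel, source, text):
    """Build a PRIVMSG line, truncating the body so the full line
    (including CRLF) never exceeds 512 bytes."""
    prefix = f"PRIVMSG {channel} :[{source}] "
    room = 510 - len(prefix.encode("utf-8"))  # 512 minus CRLF
    body = text.encode("utf-8")[:room].decode("utf-8", "ignore")
    return (prefix + body + "\r\n").encode("utf-8")

print(to_privmsg("#pu27", "twitter/@someviewer",
                 "cam2 pan left to the laser table"))
```

Going the other direction – IRC out to Twitter or SMS – is the mirror image: strip the framing, keep the source tag, and hand the text to each platform’s own posting mechanism.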


The original plan called for remote units to be directed by a centralized control room. The control room would convey direction from viewers watching a single program. The intention of this exercise was not for the audience to direct a multiple-camera shoot of one event, but audience-guided coverage by several single-camera crews covering different events at a single venue.

While the main “show” was broadcast, the other units would stream their audio and video feeds without user direction. This approach was chosen for several reasons. First, the amount of manpower required to parse the direction from general chatter and then relay it to the remote unit was too great. We believed an on-camera producer was needed at each site, and a unit producer to moderate and parse direction from the crowd was also required. These problems could be overcome, but they raise some questions. Some users may want to be in complete control of what’s going on; for them, a single event site that allows individual selection of different views of the events would be attractive. Users preferring more passive consumption might want to see a more traditional treatment of the subject with occasional input to the crew. Still others will just observe, with no desire to interact. One other component not included in this treatment is that of participants at the event. Streaming our feed to the attendees’ smartphones and including their participation adds another dimension that may engage actual and virtual participants alike.

Ed suggested another experiment: multiple cameras documenting a single event from different perspectives – a sporting event, for example. Individuals equipped with their own camera phones would shoot the action from their seats. The resulting images would be aggregated on a single website with all the cameras visible. The viewer would then be able to select a single angle.

This leads us to the site(s) itself.

The original concept was to have one main landing page. It would serve as an interactive viewing platform. The user sees video of the selected remote crew in one window that can be popped out and repositioned by the user. A second chat window, also detachable, is used for commenting and to relay direction to the producer. There were not enough people to process directions for all the crews. Instead, each crew would “freelance,” or assume they were independently live, until they were called to appear on the main screen. On a second page, video from the three crews would be displayed in smaller windows and the user would select the one they wanted. The selected crew would be enlarged and the others would fade back. A corresponding chat window would be joined to each crew for users’ comments and conversation, but there would be no user direction. Questions with this approach: How to drive the windows with real-time updates? Chat components – pop-out function – after-site – pre-site – directions are good – intuitive design is better.
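On the real-time-updates question: one lightweight answer is Server-Sent Events, where each open page holds a single HTTP connection and the server pushes a small message whenever the featured crew changes. The event and data names below are made up for the sketch; the browser side would be a few lines of EventSource JavaScript listening for the same event name.

```python
# Format one message in the text/event-stream wire format used by
# Server-Sent Events: an optional "event:" line, a "data:" line,
# and a blank line to terminate the message.

def sse_event(event, data):
    """Return one SSE message announcing an update to all open pages."""
    return f"event: {event}\ndata: {data}\n\n"

# e.g. tell every open landing page that crew 2 is now on the main screen:
print(sse_event("main-crew", "crew2"), end="")
```

Polling on a short timer would also work and is simpler to host, at the cost of update latency and wasted requests between changes.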