In our Knight Foundation News Challenge grant proposal, we refer to one of the techniques we’re developing as “Enhanced Live Blogging.” I’ll describe how and why these techniques should be included in your production design to streamline workflow and create more material to serve and engage your constituents.

Let’s start with that term, constituent. The conversation about our lives, about what’s new, was going on before TV, newspapers and the internet. As media evolved into a convenient collection of information for our consumption, it subtly promoted a sometimes unspoken agenda by directing where our focus should be. The notion of Eyewitness News – unless “we” witnessed it and share it with you, it isn’t news – is based on the idea that there are limited resources for gathering information. A gatekeeper must prioritize the stories and decide which are worthwhile to bring to you, the audience. The wealth of information now available to all people in all areas turns this notion around. A passive audience that absorbs only what is placed in front of it is fading away. Many take an active role in their information consumption and share it in conversations. This sharing takes many forms: some comment on blogs, others call in to talk shows, still others write letters to the editor. Some just shout at their TVs. A real digital conversation can now occur; we’ll talk more about this later.

But what do we call this new relationship? Not an audience, not a user, not a pure content consumer. The relationship has changed. The media is no longer a lecturer standing at the front of the auditorium reading notes to listeners copying them into their notebooks. It’s more like a study group where participants share their discoveries to further the dialog. So if we are engaged with others and providing the mechanisms to help further their conversation, perhaps it is constituents that we serve.
The enhanced live blogging concept was designed for live programs, but it can also be used for the broadcast premiere of pre-recorded events. Live blogging is a written play-by-play account of an event from someone at the event as it happens; enhanced live blogging adds video, audio and transcription.

Four or more unmanned cameras are positioned around a “talking head” discussion. They are switched live and sent to broadcast, to a stream or to a recorder. The product of the switched output is a traditional public affairs program, which can be used in long form for audio and video podcasts. Each camera is focused on an individual participant in the discussion. These “quote” cams, along with the audio feed of the conversation, are isolated and sent along to blogging stations. The live blogger selects relevant statements by marking in and out points. The product of the blogging station is a branded video clip of the statement and a transcription of the statement, both posted for distribution. A shortened link that points to the clip is generated and inserted into the Twitter template, where the blogger writes a brief headline. A tweet containing the quote, the link to the video and branding information is sent. At the same time, RSS feeds are updated and text messages are sent to subscribers. A transcription of the quote with the link is also inserted into the topic’s “chat” stream, where others continue the conversation. The relevant points made in this chat stream are curated by an additional blogger or social avatar, who participates in the televised conversation by introducing chat points to the televised guests.

The live blogger position takes the place of camera operators. The ability to quickly determine quotable or relevant statements is needed instead of knowing how to operate a camera.
The live blogger position can be virtualized by sending the isolated audio and video streams to remote individuals who have demonstrated the skills necessary to operate as enhanced live bloggers.
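The blogging station’s publish step described above can be sketched in a few lines. This is a minimal illustration, not our implementation: the brand tag, the shortened URL and the headline are all hypothetical stand-ins, and the 140-character limit is Twitter’s limit as of this writing.

```python
# Sketch of the blogging-station publish step: headline + clip link + branding,
# trimmed to fit in a single tweet. All names here are hypothetical examples.

TWEET_LIMIT = 140  # Twitter's character limit

def build_tweet(headline, clip_url, brand="#EnhancedLive"):
    """Compose a tweet from the blogger's headline, the shortened clip link
    and a branding tag, truncating the headline so the whole message fits."""
    suffix = " " + clip_url + " " + brand
    room = TWEET_LIMIT - len(suffix)
    if len(headline) > room:
        headline = headline[:room - 1].rstrip() + "…"
    return headline + suffix

tweet = build_tweet(
    "Senator: 'The budget must balance transit and schools'",
    "http://ex.am/abc12",  # hypothetical shortened link to the video clip
)
print(tweet)
```

The same composed string would then be handed to the Twitter posting step, the RSS update and the SMS notification in parallel.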
#PU27 was one of the first “television you can talk to” experiments: a PBS tie-in with the Nova series Making Stuff. Princeton University organized a technology exposition for high school students, with exhibitors promoting the idea of studying engineering. It involved doing a live broadcast on the internet using UStream and having viewers direct the program. The Twitter hashtag PU27 (#PU27) was used to communicate direction to the production crew and for viewers to discuss the exhibits.
Video shooting: The iPhone/iPod touch is very difficult to mount and carry while shooting. Less so in flipped-cam mode (self shooting), but still awkward, especially for left-handed operators. The units were gaffer-taped to a glass window for some shots. For oblique angles, the unit was rested on a cup or can angled against the corner ledge of the window and then taped from the window frame to the device. In this way the viewfinder/display screen is used to ballpark the shot. A laptop pointing to the stream of the iPod touch was only partially useful as a viewfinder because of the excessive lag between the camera and the display.
The wideness of the angle is pretty good and focus remains constant, though, as you would expect, depth of field is sacrificed a bit.
Audio: Sound recording is good to excellent when the operator is speaking or the camera is close to and in front of the subject. Once the sound originates farther away or is a mix of more than one sound source, external microphones are needed. Rich has done some research in this area. Once the four-conductor 3.5 mm plug issue is resolved for input and output to the iPhone/iPod touch, a mini Shure mixer with conventional mics is being explored. The four ring assignments for Android phones are thought to be different from the iPhone configuration; this should be confirmed. Additionally, less expensive, wireless, high-quality audio needs more research. Bluetooth-enabled units may be worth exploring.
The IP application iWebcam was not purchased for this experiment. It or a similar tool is required to maximize limited-bandwidth instances. 3.1.11: Got it – not impressed – HUGE lag P-P. Will retry later; perhaps a VLC solution makes more sense here. Interesting side note: two or more iPods (and I assume iPhones) can be cloned along with their installed apps. This may not be very practical for a personal communications device but is ideal for production. That is, using two cloned devices, when device one draws down its battery, device two can be turned on and pick right up using all the apps, accounts and configurations of the drained device 🙂
IP connectivity to a laptop would allow for more responsive updating of images, so the laptop could be used as a remote viewfinder. Chicken of the VNC (a VNC client for the Mac) is one way to accomplish this, but it has not been updated for the newer iOS. Even without robotic articulation (pan/zoom/tilt), remote access to the device via IP could be very useful. A laptop set beside the camera can serve two functions. Assuming an independent broadcast from the camera (which defeats the bandwidth conservation spoken of earlier), functions like initiating the broadcast, muting audio and selecting channels are more easily done remotely without disturbing the shot composition. And a producer at the site, using the laptop, can communicate via text with users, providing supplementary information not seen or heard by the imaging unit.
Without the DSI/Newssharks there was no IP video. IP video is critical for limited bandwidth, as only the selected image and audio sources are transmitted to the users.
It’s still not clear whether the UStream “Pro” producer upgrade allows switching between multiple cameras that are not hardware-connected. After Ed discovered the multiple channels on a single UStream account, it seems the Pro version might allow switching much like the crude router we used on the NJN website.
VidBlaster is a software-based control room that includes graphics for titling, sound, animation, playlists, screen-capture transmission and IP-based cameras. Running this application on a Toaster/Tricaster or in a cascading configuration has some interesting implications. Used in combination with Skype, ooVoo or other conferencing applications and more traditional switching devices, the mashup makes for an ad hoc transmission facility with routing along with switching. In other words, more than just a control room in a box – a broadcasting facility in a box. Lessons learned using this equipment will be valuable as more mobile assets are integrated into the traditional plant.
VidBlaster needs a free Adobe Flash encoder to upload to UStream. I was unable to get it working in this way, but I saw that UStream Producer recognized it. One way or the other, this combo looks like it will work. 3.1.11: The Adobe Flash encoder is now working fine and can send directly to the NJN streaming account. Has also been used in tandem with
Front and back channel communications.
Definition – Front channel communication includes any public means to direct or comment on the live event. One example is a teacher asking a person at a remote site to focus on an object and pose a query: “What’s that red thing on the right – can the presenter please explain what it’s used for?” Another example is a student commenting on the type of apparatus being used to shoot the image: “Hey, are you guys using an iPhone to shoot this? It looks like my setup.” Yet another example of front channel communication: “I was at that site earlier and it’s not as cool as it looks – anyone going for burgers after this is over?”
Back channel communication includes private group messaging to update participants on ongoing situations. It may include camera direction, location information and timing information. This can include pre-production information, production information and after-production summaries.
Application – Front channel communication was only somewhat successful. The plan was to combine tweets from Twitter along with UStream chat in a more universal chatting tool, mibbit. This stream of information would be filtered (manually or automatically), with relevant information being routed to the back channel. Cam1 would get the question about the red thing in the example above, but not the comments about the burgers. Funneling all public communication into a single channel may be a bit difficult to follow for the uninitiated, but in time participants should get used to it.
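The automatic version of that filter is simple to sketch. This assumes the tagging convention from the example above (messages addressed to a crew start with or contain “Cam1” through “Cam4”); everything else is hypothetical illustration, not the tool we used.

```python
import re

# Sketch of the front-channel filter: messages tagged "Cam1".."Cam4" are
# routed to that crew's back channel; untagged chatter stays in general chat.
# The tag convention follows the #PU27 example; destinations are hypothetical.

CAM_TAG = re.compile(r"\bcam([1-4])\b", re.IGNORECASE)

def route(message):
    """Return ('cam', n, text) for a directed message, ('chat', None, text)
    for general conversation that should stay in the public stream."""
    m = CAM_TAG.search(message)
    if m:
        return ("cam", int(m.group(1)), message)
    return ("chat", None, message)

print(route("Cam1 what's that red thing on the right?"))
print(route("anyone going for burgers after this is over?"))
```

In practice a human would still review the “cam” queue before relaying direction, since a regex can’t tell a real question from a joke that happens to mention a camera.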
Backchannel communication was not handled well at all. A mishmash of telephone (private lines and hotlines), email, moot.ly, mailing lists and Doodle were all attempted, with poor results. A common, secure and consistent way to keep all those involved informed is crucial. As we learned, this is particularly important when weather is a factor. A scheduling function is a useful feature, along with mobile links, distribution of the information to Outlook or other calendars, and opt-in email notifications. The ability to paste directions, pictures, sounds, even video along with chat is something we’ll want to have available. Once the right tool is found, we must be sure that everyone involved knows how to get to it and can use it. It will also serve as a convenient way to document our issues and solutions. The ability for everyone to look to one definitive source for information is even more critical when we have participants in different groups.
During the event itself we need to be able to communicate with each other easily and with consistency. This has to be mandated by the producing group. We should strive to use a tool that transcends individuals’ tool-sets. Not everyone uses Outlook or an iPhone, but a message about a change in plans has to be universal.
The original plan called for an individual (or individuals) to parse the direction and questions tagged with Cam1 (2, 3, 4) from the pool of information and resend them to the back channel for the producer of the unit to act upon. A well-versed camera operator can take direction while maintaining composition and focus, and possibly monitoring audio levels. To have them also ask questions and follow the action might be too much to ask; therefore a unit producer will need to carry a device for communication. For smaller or other events, a single camera-operator/producer may be able to handle everything, but a system of acceptable uses will need to be prepared.
Work has begun on building a chat bridge that filters messages and supports Twitter and other texting platforms along with mibbit. IRC (Internet Relay Chat) is a mature, stable and feature-rich environment in which to develop our application.
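To make the bridge idea concrete, here is a rough sketch of its IRC-facing side. It only builds the raw protocol lines (per the classic IRC PRIVMSG format); a real bridge would send them over a socket to the server, and the channel and nick names here are hypothetical.

```python
# Sketch of the chat bridge's IRC output: messages arriving from Twitter or
# mibbit are normalized into single-line IRC PRIVMSG commands for a
# production back channel. Channel/source names are hypothetical examples.

def to_privmsg(source, author, text, channel="#pu27-backchannel"):
    """Format a message from an outside platform as a raw IRC PRIVMSG line,
    tagged with its source so crews know where direction came from."""
    # IRC messages must be a single line; collapse any embedded newlines.
    clean = " ".join(text.split())
    return "PRIVMSG %s :[%s] <%s> %s\r\n" % (channel, source, author, clean)

line = to_privmsg("twitter", "viewer42", "Cam2 pan left please")
print(line.strip())
```

Because IRC is text all the way down, the same formatting function works whether the upstream message came from a tweet, a mibbit post or an SMS gateway, which is part of why IRC is attractive as the hub.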
The original plan called for remote units to be directed by a centralized control room. The control room would convey direction from viewers watching a single program. The intention of this exercise was not for the audience to direct a multiple-camera shoot of one event, but audience-guided coverage by several single-camera crews covering different events at a single venue.
While the main “show” was broadcast, the other units would stream their audio and video feeds without user direction. This approach was chosen for several reasons. First, the amount of manpower required to parse the direction from general chatter and then relay it to the remote unit was too great. We believed that an on-camera producer was needed at each site and that a unit producer to moderate and parse direction from the crowd was also required. These problems could be overcome, but they raise some questions. Some users may want to be in complete control of what’s going on. For them, a single event site that allows individual selection of different views of the events would be attractive. Users preferring more passive consumption might want to see a more traditional treatment of the subject with occasional input to the crew. Still others will just observe, with no desire to interact. One other component not included in this treatment is that of participants at the event. Streaming our feed to the attendees’ smartphones and including their participation adds another dimension that may engage both actual and virtual participants.
Ed suggested another experiment: multiple cameras documenting a single event from different perspectives, a sporting event for example. Individuals equipped with their own camera phones would shoot the action from their seats. The resulting images would be aggregated on a single website with all the cameras visible, and the viewer would then be able to select a single angle.
This leads us to the site(s) itself.
The original concept was to have one main landing page. It would serve as an interactive viewing platform. The user sees video of the selected remote crew in one window that can be popped out and repositioned by the user. A second chat window, also detachable, is used for commenting and to relay direction to the producer. There were not enough people to process directions for all the crews. Instead, each crew would “freelance,” or assume they were independently live, until they were called to appear on the main screen. On a second page, video from the three crews would be displayed in smaller windows and the user would select the one they wanted. The selected crew would be enlarged and the others would fade back. A corresponding chat window would be joined to each crew for users’ comments and conversation, but there would be no user direction. Questions with this approach: How do we drive the windows with real-time updates? Chat components – pop-out function – after-site – pre-site – directions are good – intuitive design is better.
Things have changed
The Internet is over 40 years old. The World Wide Web is over 20. Many of the conventions we are now accustomed to, like websites promoting a single program or product, banner ads, email contact information, etc., are the standards that make up the internet experience for many of the people who use it. A great deal of time goes into designing the art and technology used to craft these sites, but the model is based on the same principles we adhere to for television: if we build it, they will come, if only we can cut through all the clutter and make viewers aware it exists.
We say the web is a “lean forward” environment, meaning a visitor who peruses a website is more engaged than a passive television viewer. This is a false assumption. Technology has made TV viewing a much more engaging activity, if only to find the content itself. While some viewers still sit down at a scheduled time to see a show, many take a much more active role. They’ll tell their DVR/TiVo to find the shows they like or heard about. They’ll go to a video rental shop (if they can still find one in their neighborhood) or subscribe to Netflix and rent an entire season, then play it in the car for the kids or copy it to their iPhone to watch on the train ride in to work. If this much has changed for ancient old television, how can we expect that the more nimble Internet hasn’t grown as well?
Perhaps you’ve heard of Web 2.0. While the term is considered passé today, the concepts behind it are ones to consider for a new approach. From a design perspective the sites look dull, even uninspired: stripped down and bare, with uncluttered layouts and very few distractions. The juice that makes them run is content. Constantly updated content. Not just weekly, daily or even hourly – constantly. Many of them aren’t even much of a “website” in the sense we know websites. The Internet is going “real time,” and that is a great development for those in the television business.
Let’s take a step back to remember our media history and think about this for a second. Driven by the technology of the day, television was born a “real time” medium – live. With no way to record television or video other than filming a TV monitor, techniques were developed to communicate. Borrowing heavily from the language of film, live editing, or switching between multiple cameras pre-positioned around a live event, became what television was. Eventually videotape was invented, which allowed more traditional film production techniques to be used in producing content. It has taken until today for digital post-production to become so technically sophisticated that it’s affordable and accessible to almost anyone who wants to “say” something.
But the core competency of television, and you could make a case for radio also, is the live dissemination of information. Event coverage or news – whatever you want to call it, there is a constant stream of information coming out and an audience with an insatiable appetite for more. Alongside the infostream, and contributing to it, is analysis of what’s happening and the editing (both figuratively and contextually) of this information. These, too, are areas where TV and radio organizations have considerable strength.
The convergence of media and telecommunications – TVphones – helps grow the audience. The technologies have merged, and live television has an advantage. Cellular, 3G, 4G and WiFi are all radio technologies. While not exactly the same thing as microwave and satellite transmission, there are enough similarities that becoming expert in these concentrations is not that great a stretch. I don’t use the term “mobile” because there is a television bias in that term. For now, let’s table any discussion of “Mobile TV,” because the financial and technical hurdles associated with that space make it a tomorrow possibility at best. What is real today? What do iPhone and Android phone users and iPad and tablet users “get”? As mentioned earlier, traditional television programs and movies are available, or can be made ready (ripped) for consumption on these devices. Obviously, content producers who want their stuff seen will make it easily available on all these devices.
Here’s where it really gets different: the interaction of users with content on these devices. Whether initiated by voice or finger swipe, searched content is delivered TO the user DIRECTLY. Some of this is “app” driven – another discussion for another time. The point is that information or content finds its way to the user. Does this come from a website? Kind of, but not exactly. Does the user care that there’s a website associated with the information they seek? Maybe, if they visit that site for a recap later from home. And, depending on the type of information, is it fresh or only updated in the morning? If it’s dated information, will the user wait for the trusted source to update, or find another, more timely, trusted source?
This is happening now. One of the first “hacks” for the iPad was how to have it set up at the Apple store. The iPad required initializing with an iTunes account while tethered to a computer, so in order for the iPad to work you had to own a Mac or PC. Enough people didn’t own a computer that the “hack” of using a Mac at the Apple store to initialize the iPad was widely circulated. By 2011 there will be more smartphones than PCs, and that doesn’t even count hybrids like slates/tablets and smart TVs/set-top boxes.
Although these devices are also used to visit websites, their design is for “content consumption,” a more passive experience: content is played or viewed through the device. Sports Illustrated, the New York Times and ABC News are redefining how their content is consumed, while Hulu, UStream, YouTube and others play in this newish medium. Most of these devices are inexpensive compared to the price of a full PC and don’t require much technical skill to use. They are bought by folks with some disposable income who are not necessarily very web savvy, and they are impatient. These users will expect the content they want to find them.
The most immediate challenge is to create content quickly that is easily updated, then to figure out ways to get it to users in whatever way they want, whenever and wherever they want it, and to have them keep wanting to get it from the same source. An Internet Services Group (ISG) will facilitate an entire organization’s participation in meeting these goals.
Surrender the homepage and main websites to participants. There are several providers that offer all the services needed for robust delivery of traditional websites, along with the ability to update these pages without expert intervention. The ISG will select pre-built designs and, in some rare instances, change pre-built designs or construct new ones to accommodate new projects. At the beginning of each new media project, an Internet strategy will be defined. Perhaps a webpage isn’t appropriate for a project: a webpage for a “Reader Services for the Blind” project can be constructed for visually impaired persons, but a podcast describing the offering might be a better use of resources.
After a webpage has been determined to be an appropriate use of Internet services, a layout will be agreed upon along with a maintenance schedule. Detailed instructions for the regular maintenance of the site, and any training if needed, will be provided by the ISG. Rather than having content pushed from project producers to web workers and up to the net, the content creators take responsibility for managing their own content. This is one way to ensure that information is distributed in the most timely way and that content producers have a greater stake in their online content. Formal workshops and less formal one-on-one instruction are key to making this successful. These same procedures can be extended to outside partners to host even more, non-home-grown content.