QR Codes, take two! And augmented reality

Imagine these bugs or non-bugs embedded on TV sets (stages, not monitors) and at events – and embedded into the locations that content refers to: an environmental report about a stream, or a politician advocating for commuters at a crowded bridge. This is particularly powerful when commuters are aware of the tech on their ride in or out of work. That is how augmented reality extends and engages. Just tell folks what to do and they’ll dig it – this is something to jump on now and lead the pack, NOT another me-too.

If you don’t have Google Goggles, get it and try it – you search by speaking or by pointing the camera instead of typing the query.


Supply and demand content

Because media is now created by demand rather than supply — which is to say the next web page is printed when someone wants it to be printed, not printed and stored in a warehouse in advance for someone who may want it.

Not just re-purposing content – redesigning it!

Rafat Ali is the founder of PaidContent. Here he talks about the idea of content delivery via mobile devices. Read the full article here.

Rafat Ali: As the deluge of media and information has become overwhelming and we are in a state of persistent media stimuli, the tactile nature of touchscreens allows us, for the first time in the digital age, to “get a handle” on this flow, this stream, this river, this flood. It becomes a lot more personal, a lot more immersive, just by the sheer fact that we are touching it.

Also, despite the somewhat shiny-object nature of the apps revolution, what it has shown us is that people are yearning for simpler, faster, more utilitarian and more contained experiences. Touch is the underpinning that enables that thinking.

Further, what that means is that we move away from the click frenzied nature of the web, and the scroll. I think these are among the worst things to be invented from a human-centered design perspective. We will move from a vertical model of digital media consumption to a hybrid-horizontal model, which by its nature is immersive. You can already see it in iPad apps such as Flipboard, which I think are just the 0.1 versions of what is to come.

It is also a matter of time before these devices become a lot cheaper, and mainstream across all societies worldwide. Hence my belief.

AA: How can publishers take advantage of properties that exist on touchscreen devices that were not commercially viable only a few years ago?

RA: I think it requires a lot more than publishers can grasp at this point. It means re-architecting your company around this evolving mode of media consumption. It means bringing design thinking around every piece of content being created within a media company. It may also mean that the start of your media strategy going ahead could be based around portability and mobility, not just another device your content has to be ported to, the dominant media thinking currently.

It also leads you to thinking about packaging, bundling AND even unbundling in a lot of different ways that couldn’t have been imagined just two to three years ago. Consider the example of a children’s book publisher. Imagine the future of the company’s assets as they get reimagined for these portable touch platforms. The possibilities are limited only by your imagination and implementation.

Then in terms of revenues/payments, the upsell becomes a seamless possibility like never before. Virtual goods are the most clichéd but clearly viable example of this. Lots more to come there, both from start-ups and big established players.

On flipping – out

By now we are all familiar with the concept of “surfing” the net. How the term surfing came about is a somewhat interesting story you can read about here. Scrolling through pages and clicking on links to find more information are products of the technology we had to access that information. Using a computer mouse or keyboard to interact with this information is accomplished through a Graphical User Interface, or GUI (pronounced “gooey”). This technology does not easily lend itself to mobility, location orientation or many of the other features available with more traditional information dissemination vehicles. For instance, you can tuck away a magazine or newspaper (what’s that?) on your way through your day while traveling in a train, plane or automobile. Then, when the situation allows, take it back out and thumb through it to find some other interesting information to fill your head with until you arrive at the next situation you need to be aware of.

The advent of mobile devices like smartphones, tablets and possibly even information kiosks that use touch screens requires a different or modified GUI. The accompanying post is one person’s take on how the touch interface and “flipping” through information is a better way to sift through all the information we now have access to.


FB Facts


Thank you Gareth Case

Facebook’s growth and success are unprecedented. The 50 facts below highlight the true scale of Facebook and its users’ activity. But will we ever see a communications platform of this size again? What’s going to be the next big thing? Google+ hasn’t had quite the impact I was expecting. If a brand like Google can’t conquer the social networking world, then what can? Social media as it is today is, in my opinion, saturated; surely only a new breed of media and communications could have an impact big enough to make a dent in the current landscape.

1. 1 in every 13 people on Earth is on Facebook
2. The 35+ demographic represents more than 30% of the entire user base
3. 71.2% of all USA internet users are on Facebook
4. In 20 minutes 1,000,000 links are shared on Facebook
5. In 20 minutes 1,484,000 event invites are posted
6. In 20 minutes 1,323,000 photos are tagged
7. In 20 minutes 1,851,000 status updates are entered
8. In 20 minutes 1,972,000 friend requests are accepted
9. In 20 minutes 2,716,000 photos are uploaded
10. In 20 minutes 2,716,000 messages are sent
11. In 20 minutes 10.2 million comments are posted
12. In 20 minutes 1,587,000 wall posts are written
13. 750 million photos were uploaded to Facebook over New Year’s weekend
14. 48% of young Americans said they found out about news through Facebook
15. 48% of 18 to 34 year olds check Facebook right when they wake up
16. 50% of active users log on to Facebook on any given day
17. The average user has 130 friends
18. People spend over 700 billion minutes per month on Facebook
19. There are over 900 million objects that people interact with (pages, groups, events and community pages)
20. The average user is connected to 80 community pages, groups and events
21. The average user creates 90 pieces of content each month
22. More than 30 billion pieces of content (web links, news stories, blog posts, notes, photo albums, etc.) are shared each month
23. More than 70 translations are available on the site
24. About 70% of Facebook users are outside the United States
25. Over 300,000 users helped translate the site through the translations application
26. Entrepreneurs and developers from more than 190 countries build with Facebook Platform
27. People on Facebook install 20 million applications every day
28. Every month, more than 250 million people engage with Facebook on external websites
29. Since social plugins launched in April 2010, an average of 10,000 new websites integrate with Facebook every day
30. More than 2.5 million websites have integrated with Facebook, including over 80 of comScore’s U.S. Top 100 websites and over half of comScore’s Global Top 100 websites
31. There are more than 250 million active users currently accessing Facebook through their mobile devices
32. People who use Facebook on their mobile devices are twice as active on Facebook as non-mobile users
33. There are more than 200 mobile operators in 60 countries working to deploy and promote Facebook mobile products
34. Al Pacino’s face was on the original Facebook homepage
35. One early Facebook function was a file sharing service
36. The first “Work Networks,” as well as the original educational networks, included Apple and Microsoft
37. The meaning of the term “poke” has never been defined
38. There is an app to see what’s on the Facebook cafe menu
39. Mark Zuckerberg (CEO of Facebook) calls himself a “Harvard graduate” when in fact he didn’t graduate (apparently his reply is that “there isn’t a setting for dropout”)
40. Australians spend more time per month on Facebook than any other country, at over 7 hours on average
41. A Facebook employee hoodie sold for $4,000 on eBay
42. Facebook was initially bankrolled by Peter Thiel, the co-founder of PayPal, for $500,000
43. It is the second biggest website by traffic behind Google (at the moment)
44. Facebook is now valued at approximately $80 billion
45. Facebook makes money through advertising and virtual products
46. Facebook was almost shut down by a lawsuit from ConnectU, who claimed that Zuckerberg stole the idea and technology for Facebook (the issue was settled out of court)
47. The USA has the largest Facebook user base with 155 million people, which represents 23.6% of Facebook’s total users
48. There are over 16,000,000 Facebook fan pages
49. Texas Hold’em Poker is the most popular Facebook page with over 41 million fans
50. More than 650 million active users

How information is gotten

Don’t know why I had to post this lift from the blog post on rarity here. It just seems good when someone takes the time to think and then write down why and how things work in this world.

Historically, the two main types of obstacles to information discovery have been barriers of awareness, which encompass all the information we can’t access because we simply don’t know about its existence in the first place, and barriers of accessibility, which refer to the information we do know is out there but remains outside of our practical, infrastructural or legal reach. What the digital convergence has done is solve the latter, by bringing much previously inaccessible information into the public domain, made the former worse in the process, by increasing the net amount of information available to us and thus creating a wealth of information we can’t humanly be aware of due to our cognitive and temporal limitations, and added a third barrier — a barrier of motivation.


#PU27 was one of the first “television you can talk to” experiments: a PBS tie-in with the Nova series Making Stuff. Princeton University organized a technology exposition for high school students, with exhibitors promoting the idea of studying engineering. It involved doing a live broadcast on the internet using UStream and having viewers direct the program. The Twitter hashtag PU27 (#PU27) was used to communicate direction to the production crew and for viewers to discuss the exhibits.


Video shooting: The iPhone/iPod touch is very difficult to mount and carry while shooting. Less so in flipped-cam mode (self shooting), but still awkward, especially for left-handed operators. The units were gaffer-taped to a glass window for some shots. For oblique angles the unit was rested on a cup or can angled against the corner ledge of the window and then taped from the window frame to the device. In this way the viewfinder/display screen is used to ballpark the shot. A laptop pointing to the stream of the iPod touch was only partially useful as a viewfinder because of the excessive lag between the camera and the display.

The wideness of the angle is pretty good and focus remains constant, though, as you would expect, depth of field is sacrificed a bit.

Audio: Sound recording is good to excellent when the operator is speaking or the camera is close to and in front of the subject. Once the sound originates further away or is a mix of more than one sound source, external microphones are needed. Rich has done some research in this area. Once the 4-conductor 3.5 mm plug issue is resolved for input and output to the iPhone/iPod touch, a mini Shure mixer with conventional mics will be explored. The 4-ring assignments for Android phones are thought to be different from the iPhone configuration; this should be confirmed. Additionally, less expensive, wireless, high-quality audio needs more research. Bluetooth-enabled units may be worth exploring.


The IP application iWebcam was not purchased for this experiment. It or a similar tool is required to make the most of limited-bandwidth situations. 3.1.11: Got it – not impressed – HUGE lag point-to-point; will retry later. Perhaps a VLC solution makes more sense here. Interesting side note: two or more iPods (and I assume iPhones) can be cloned along with installed apps. This may not be very practical for a personal comm device but is ideal for production. That is, using two cloned devices, when device one draws down its battery, device two can be turned on and pick right up using all the apps, accounts and configurations of the drained device 🙂

IP connectivity to a laptop would allow for more responsive updating of images so that the laptop can be used as a remote viewfinder. Chicken of the VNC (VNC for Mac) is one way to accomplish this but has not been updated for the newer iOS. Even without robotic articulation (pan/zoom/tilt), remote access to the device via IP could be very useful. A laptop set beside the camera can serve two functions. Assuming independent broadcast from the camera (which defeats the bandwidth conservation spoken of earlier), functions like initiating the broadcast, audio mute, and channel selection are more easily done remotely without disturbing the shot composition. A producer at the site, using the laptop, can communicate via text with control and with users, supplying information not seen or heard by the imaging unit.

Without the DSI/Newssharks there was no IP video. IP video is critical for limited bandwidth, as only the selected image and audio sources are transmitted to the users.

Control Room

It’s still not clear if the UStream “Pro” producer upgrade allows switching between multiple non-hardware-connected cameras. After Ed discovered the multiple channels on a single UStream account, it seems the Pro version might allow switching much like the crude router we used on the NJN website.

VidBlaster is a software-based control room that includes graphics for titling, sound, animation, playlists, screen-capture transmission and IP-based cameras. Running this application on a Toaster/TriCaster or in a cascading configuration has some interesting implications. Used in combination with Skype, ooVoo, or other conferencing applications and more traditional switching devices, the mashup makes for an ad hoc transmission facility with routing as well as switching. In other words, it is more than just a control room in a box – it is a broadcasting facility in a box. Lessons learned using this equipment will be valuable as more mobile assets are integrated into the traditional plant.

VidBlaster needs a free Adobe Flash encoder to upload to UStream. I was unable to get it working this way, but saw that UStream Producer recognized it. One way or the other this combo looks like it will work. 3.1.11: The Adobe Flash encoder is now working fine and can send directly to the NJN streaming account. It has also been used in tandem with

Front and back channel communications.

Definition – Front-channel communication includes any public means to direct or comment on the live event. One example is a teacher asking a person at a remote site to focus on an object and make a query: “What’s that red thing on the right – can the presenter please explain to us what it’s used for?” Another example is a student commenting on the type of apparatus being used to shoot the image: “Hey, are you guys using an iPhone to shoot this? It looks like my setup.” Yet another example of front-channel communication: “I was at that site earlier and it’s not as cool as it looks. Anyone going for burgers after this is over?”

Back-channel communication includes private group messaging to update participants on ongoing situations. It may include camera direction, location information, and timing information. This can include pre-production information, production information, and after-production summaries.

Application – Front comm was only somewhat successful. The plan was to include tweets from Twitter along with UStream chat in a more universal chatting tool, Mibbit. This stream of information would be filtered (manually or automatically), with relevant information being routed to the back channel. Cam1 would get the question about the red thing in the example above, but not the comments about the burgers. Funneling all public communication into a single channel may be a bit difficult to follow for the uninitiated, but in time participants should get used to it.

Back-channel communication was not handled well at all. A mishmash of telephone (private and hotlines), email, moot.ly, mailing lists and Doodle were all attempted, with poor results. A common, secure and consistent way to keep all those involved informed is crucial. As we learned, this is particularly important when weather is a factor. A scheduling function is a useful feature, along with mobile links, distribution of the information to Outlook or other calendars, and opt-in email notifications. The ability to paste directions, pictures, sounds, even video along with chat are things we’ll want to have available. Once the right tool is found we must be sure that everyone involved knows how to get to it and can use it. It will also serve as a convenient way to document our issues and solutions. The ability for everyone to look to one definitive source for information is even more critical when we have participants in different groups.

During the event itself we need to be able to communicate with each other easily and consistently. This has to be mandated by the producing group. We should strive to use a tool that transcends individuals’ tool-sets. Not everyone uses Outlook or an iPhone, but a message about a change in plans has to be universal.

The original plan called for an individual (or individuals) to parse the direction and questions tagged with Cam1 (2, 3, 4) from the pool of information and resend them to the back channel for the producer of the unit to act upon. A well-versed camera operator can take direction while maintaining composition and focus, and possibly monitoring audio levels. To have them also ask questions and follow action might be too much to ask. Therefore a unit producer will need to carry a device for communication. For smaller or other events a single camera-operator/producer may be able to handle everything, but a system that includes acceptable uses will need to be prepared.

Work has begun on building a chat bridge that filters messages and supports Twitter and other texting platforms along with Mibbit. IRC (Internet Relay Chat) is a mature, stable and feature-rich environment in which to develop our application.
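To illustrate the filtering idea described above, here is a minimal Python sketch. It is only a rough approximation of what the bridge might do, not the actual implementation; the function name, the Cam1/Cam2 tag convention, and the channel names are all assumptions for the example. It scans an incoming public message for a camera tag and returns the back channel it should be routed to, leaving untagged chatter in the front channel.

```python
import re

# Hypothetical mapping of camera tags to private back channels.
BACK_CHANNELS = {
    "cam1": "#pu27-cam1",
    "cam2": "#pu27-cam2",
    "cam3": "#pu27-cam3",
}

# Matches tags like "Cam1" or "CAM2" as whole words, case-insensitively.
TAG_PATTERN = re.compile(r"\bcam([1-9])\b", re.IGNORECASE)

def route_message(text):
    """Return the back channel for a camera-tagged message, or None.

    Tagged direction ("Cam1: zoom in on the red thing") goes to that
    camera's back channel; general chatter ("anyone going for burgers?")
    returns None and stays in the public front channel.
    """
    match = TAG_PATTERN.search(text)
    if not match:
        return None  # untagged chatter is not routed
    return BACK_CHANNELS.get("cam" + match.group(1))
```

In the real bridge the returned channel name would be used to re-send the message over IRC; here it is just a lookup, which is the part a human moderator was doing manually during the experiment.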


The original plan called for remote units to be directed by a centralized control room. The control room would convey direction from viewers watching a single program. The intention of this exercise was not for the audience to direct a multiple-camera shoot of one event, but rather audience-guided coverage by several single-camera crews covering different events at a single venue.

While the main “show” was broadcast, the other units would be streaming their audio and video feeds without user direction. This approach was chosen for several reasons. First, the amount of manpower required to parse the direction from general chatter and then relay it to the remote unit was too great. We believed that an on-camera producer was needed at each site, and that a unit producer to moderate and parse direction from the crowd was also required. These problems could be overcome, but they raise some questions. Some users may want to be in complete control of what’s going on. For them, a single event site that allows individual selection of different views of the events would be attractive. Users preferring more passive consumption might want to see a more traditional treatment of the subject with occasional input to the crew. Still others will just observe, with no desire to interact. One other component not included in this treatment is that of participants at the event. Streaming our feed to the attendees’ smartphones and including their participation adds another dimension that may engage actual and virtual participants.

Ed suggested another experiment: multiple cameras documenting a single event from different perspectives, a sporting event for example. Individuals equipped with their own camera phones would shoot the action from their seats. The resulting images would be aggregated to a single website with all the cameras visible. The viewer would then be able to select a single angle.
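As a hedged sketch of that aggregation idea: assuming each phone publishes a stream URL when it goes live, the site would only need a registry of active cameras and a way to mark the viewer’s selected angle. The names and URLs below are invented for illustration; this is not a working streaming site, just the bookkeeping it implies.

```python
# Hypothetical registry mapping camera IDs (e.g. seat numbers) to stream URLs.
streams = {}

def register(camera_id, stream_url):
    """Called when an attendee's phone starts streaming."""
    streams[camera_id] = stream_url

def select_angle(camera_id):
    """Return the stream URL the viewer picked, or None if that camera is offline."""
    return streams.get(camera_id)

def build_page(selected_id):
    """Build a simple text listing: the selected camera is the main view,
    the rest are thumbnails the viewer could switch to."""
    lines = []
    for cam, url in sorted(streams.items()):
        marker = "MAIN" if cam == selected_id else "thumb"
        lines.append(f"{marker}: {cam} -> {url}")
    return "\n".join(lines)
```

A real version would render video embeds rather than text lines, but the enlarge-one, fade-the-others behavior described below is just this selection logic applied to the page layout.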

This leads us to the site(s) itself.

The original concept was to have one main landing page. It would serve as an interactive viewing platform. The user sees video of the selected remote crew in one window that can be popped out and repositioned by the user. A second chat window, also detachable, is used for commenting and to relay direction to the producer. There were not enough people to process directions for all the crews. Instead, each crew would “freelance,” or assume they were independently live, until they were called to appear on the main screen. On a second page, video from the three crews would be displayed in smaller windows and the user would select the one they wanted. The selected crew would be enlarged and the others would fade back. A corresponding chat window would be joined to each crew for users’ comments and conversation, but there would be no user direction. Questions with this approach: How do we drive the windows with real-time updates? Chat components – pop-out function – after-site – pre-site – directions are good – intuitive design is better.

Small camera techniques

When we say hand-held camera in television we mean shoulder-mounted. Using a smaller camera requires camera support, a wide-angle lens, less zooming, more static shots, etc. Let’s start to experiment with the Canon T2i – it does 1080 HD and has HDMI out – memory cards (4x16GB?). Instead we use a Lumix FH22.

A staple of video is the zoom in/zoom out and pan left/pan right – these techniques are not as effective with a fixed lens and non-fluid-head camera support. It’s a throwback to early hand-held film techniques. Why does Haskell Wexler come to mind?