Licensing opportunities are going begging because of inadequate data management, writes Clare Hodder - and far more than a few deals are at risk
The future is exciting for people in the content game. Emerging technologies such as AI and blockchain could fundamentally transform the way we create, acquire and license content. Even now, we could create a dynamic content marketplace in which elements of our published content are traded online, enabling people to legitimately access the content they want, when they want it, at an affordable price. The Copyright Hub was set up to facilitate exactly this, but publishers haven't yet embraced the possibilities it offers. We have content, the world wants content, and governments will amend our IP laws if we don't make it easy for people to license our content - so why wouldn't we invest in the infrastructure to make it possible?
Publishers have invested in providing digital access to whole works, and even to articles or chapters, but not for all the content on their backlists - and crucially not for more granular content. I believe there are two main reasons for this. First, it is hard to gauge the size of the potential market, or the extent to which it might cannibalise existing sales. Second, it involves opening up a very large can of worms, namely the rights and royalties data required to make this kind of licensing work.
The first issue could be resolved with experimentation, but publishers are prevented from experimenting with new models by the second. Unless you can identify what rights you hold in the parts of your content you want to license, whom you need to pay royalties to, and how much, it is difficult to trade that content.
The processes publishers use to manage rights data are broken - really broken. As any rights professional will attest, when new licensing opportunities emerge, the lack of detailed, accessible rights data can seriously delay or even prevent those opportunities from being exploited. There are huge swathes of publishers' lists that aren't available electronically because the rights position is uncertain.
For any content you wish to license, you need to establish whether or not you have the rights. Did the author assign or license the relevant rights to you? Have the rights been reverted? Has the list been sold? Does the content contain other items, not covered by the author's agreement - illustrations, quotations, figures? If you can track down all the agreements, satisfy yourself you have the rights, and can pay any royalties involved, you can proceed to license.
Establishing the answers to these questions is often difficult. Information may still be held in analogue form (dusty filing cabinets in basements) and isn't always comprehensive. Rights metadata is not routinely collected, and there are no data standards to ensure that publishers are all collecting the same information in the same way. The upshot of all this is that many licensing opportunities are given up before they are even started, and experimentation with new models is too painful to contemplate.
Re-use permissions are a case in point. Those unlucky enough to have to make a request can look forward to a wait of four to six weeks (the embarrassing industry standard) before they get a response. Tools like PLSClear and RightsLink are doing much to help automate the licensing process, but unless and until the publisher has been able to identify that the content being requested is theirs and that they control the rights in it, the logjams at the publisher's end will remain.
From memes and mashups, to blogs, vlogs, podcasts, newsletters and websites, the need for quality content is growing. People want content and expect to be able to get their hands on it quickly. If we don't find a way of addressing our rights issues, those outside our industry will resolve the problem for us, with serious repercussions for our current business. We've seen governments make changes, or threaten to make changes, to copyright legislation because it is deemed not to work well in a digital environment (and, in Canada's case, the decimation of the educational publishing industry as a result).
We've seen big tech pushing the boundaries of copyright law with the Google Books Project, and more recently successfully opposing European copyright reforms that would see them take greater responsibility for unauthorised content uploaded to their sites. We've experienced large scale digital piracy, but when we can't produce the documentation to prove we control the rights, we struggle to defend ourselves from it.
We are pushing back against broadening copyright exceptions, and advocating more copyright education, but who will take us seriously unless we can demonstrate that the market isn't broken and that we are making our content available to those who want to access it, in a convenient and affordable form? If we want to embrace our digital future, we must invest in tackling our analogue legacy.
Clare Hodder is a consultant with Rights2 Consultants and co-founder and director of RightsZone, a cloud-based rights database and workflow tool that supports rights professionals in maximising licensing opportunities (www.rightszone.co.uk or contact firstname.lastname@example.org).
This article first appeared in the Publishers Weekly/BookBrunch Frankfurt Show Daily.