Gpodder and the BBC

I have been a satisfied user of gpodder for downloading podcasts for a few years, but three days ago a new problem arose, and I cannot find any documentation on it.
For BBC podcasts (and no others), gpodder downloads a 700-byte header file rather than the desired audio. I can get the audio by streaming and by going to the program webpage, so I think the issue lies with gpodder.

The header file looks like this:

<html><head><noscript><meta http-equiv=Refresh Content="0; URL=http://192.168.1.1/ui/dynamic/internet-blocked.html"></noscript><script language='javascript' type='text/javascript'>function init(_frm) { if (_frm.sent.value == 0) { _frm.sent.value=1; _frm.submit(); } }</script></head><body onload=init(auth)><form name=auth action='http://192.168.1.1/ui/dynamic/internet-blocked.html' METHOD=GET><input type=hidden name='mac_addr' value='00:22:4d:86:c8:89'>
<input type=hidden name='url' value='http://open.live.bbc.co.uk/mediaselector/6/redir/version/2.0/mediaset/audio-nondrm-download-low/proto/http/vpid/p09n0vtw.mp3'>
<input type=hidden name='reason' value='1'>
<input type=hidden id=sent value='0'></form></body></html>

Inside is a link that can download the audio:
http://open.live.bbc.co.uk/mediaselector/6/redir/version/2.0/mediaset/audio-nondrm-download-low/proto/http/vpid/p09n0vtw.mp3

I have written to the BBC, but I do not think this is their problem.
Is there anything I can do about it, other than opening the header files and clicking each of those links?
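
In case it helps: a quick way to check what a plain downloader receives for one of these enclosures, outside of gpodder. This is only a sketch, assuming wget and file are installed; the URL is the one from the header file above:

wget -S -O test.mp3 'http://open.live.bbc.co.uk/mediaselector/6/redir/version/2.0/mediaset/audio-nondrm-download-low/proto/http/vpid/p09n0vtw.mp3'
file test.mp3   # reports HTML rather than audio if the request is being intercepted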


Welcome back, great to see you here. :wave: :slightly_smiling_face:

https://gpoddernet.readthedocs.io/en/latest/user/clients.html

As can be seen above, there are many clients one can choose from.
I assume you are using the first Linux client in the list.

If they officially support the BBC as a download source, it would be nice if you could create a bug report outlining the issue you are seeing. The client would need to extract the correct download link from the header file and then download the actual podcast.

As a workaround, if I were in your shoes, I would create a short script that automatically extracts the correct download links and then feed those to gPodder. I would keep using the script until my bug report has been resolved; after that, I would upgrade my gPodder version and use the client successfully, as I did many times before.

Never went away. I’ve been lurking.

That requires joining GitHub, right? Maybe a few steps beyond my skill level.

It has been doing that for years, and only just started to fail at it. And just BBC.

I think the easier workaround, since I get only about 5 BBC downloads a day, is to open those files, copy the download link, and “go to” it.

One can only wait 'til it shows up in the upgrade list.

#!/bin/bash

pathToLinkList="$1"

# Default to the directory this script lives in, if no path was given.
[[ -z $pathToLinkList ]] && pathToLinkList="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"

for file in "$pathToLinkList"/*; do
  # Skip anything that is not an HTML "header" file.
  file -bi "$file" | grep -q 'text/html' || continue
  while read -r line; do
    # Only the line containing the hidden 'url' field is interesting.
    grep -q "name='url'" <<< "$line" || continue
    # Cut away everything up to and including value=' and everything from '> onwards.
    head="name='url' value='"
    tail="'>"
    extractedLink="${line##*$head}"
    extractedLink="${extractedLink%%$tail*}"
    printf '%s\n' "$extractedLink"
  done < "$file"
done

The above script reads the header files and prints the actual download link from each one.

The header files are expected to be in the same directory as this script is.
If the header files are in a different folder, you can provide the path to the folder as the first argument to the script:

bash bbc-extract-url-from-header.sh /path/to/headerfiles

An all too common problem these days - I’d imagine… I can’t get much of anything from the BBC due to geoblocking (I thought it was the “world wide” web - is the Earth really flat? - I’m sure DRM and rampant capitalism would prefer it that way - if it is flat, then it must be really really wide)…

Anyway - many streaming services change stuff around to defeat “underhanded” tactics like “youtube-dl” (they’re not underhanded, EVERY single browser downloads streams to your computer into some hidden cache before you view it!) - and it breaks stuff… For purely “educational purposes” I sometimes use youtube-dl to grab local offline copies of footage of naked people engaging in acts of procreation… I actually use youtube-dl more often to grab music (only if it’s decent hi-def and I can’t buy DRM copies elsewhere) than I would use it for “pr0n” :smiley: … anyway it’s been broken for a few weeks now if trying to grab from a “purely educational” website called “redtube” :smiley: …

Anyway - the BBC’s convict daughter in the antipodes, the ABC (we call her Aunty, so the BBC must be “Great-Aunt” or is that cousin removed a few times?) are soon going to make all users have an actual login - and this will break my favourite “stream downloading” bunch of scripts for Australian free-to-air online streaming services called just “WebDL” from Bitbucket …

I often use that WebDL bunch of python stuff to grab movies from SBS (special broadcasting service, which caters for Australia’s multi-cultural heritage) I already have, or own, because it’s damn convenient - and often the movies are better quality than I could rip from DVD, or pir8 - and - their English subtitles are the best in the world (IMHO) for non-English movies (unfortunately they’re also hardcoded - but I don’t care)… e.g. I have a WebDL SBS download of the classic British cult movie “Withnail and I” and it’s better quality than the DVD I “own”… Anyway - I hope SBS don’t move to the login model too…

Note : youtube-dl does have a feature to let you enter a password (I’ve only ever used it with vimeo streams - which I nearly always get - because Vimeo is shit from Australia - constant buffers - which you don’t have to suffer if you have an offline local copy)…


Indeed, by today’s quality expectations, classic DVD dumbs down the quality way too much. The last time it dumbed down a video I converted onto a DVD for fun, I decided that the next time I want to save videos, I would just copy the video files onto the DVD instead. It’s not only easier, it also preserves the quality, since the files aren’t converted into a “real” DVD but are written as plain “data” and left untouched.
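
For what it’s worth, a minimal sketch of what that looks like on the command line, assuming growisofs is installed and the burner shows up as /dev/sr0 (both are assumptions, not something from this thread):

growisofs -Z /dev/sr0 -R -J -V videos /path/to/videos/   # plain Rock Ridge + Joliet data disc; the files stay untouched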

Since youtube-dl was interrupted by corporate greed, the reunification of youtube-dl contributions hasn’t gone entirely smoothly. There are still some branches, and therefore contributions, wrongfully blocked by the same copyright bullshit that was already withdrawn some time ago, even though, obviously, this shouldn’t be the case anymore. But well, who cares, right? If it only affects the consumer and the corporate greeders don’t lose a penny from it, then suddenly nothing is ever done about it. On the other hand, you can be damn sure that if a company loses a single penny over anything, it will be changed and adjusted right away!

That said, youtube-dl is strong and, rightfully, has a big and happy community. If you experience issues now, they should be fixed soon. Or perhaps they have already been fixed but not yet merged into stable, which is probably the channel you are using. The one time I needed a new feature urgently, I had to clone the youtube-dl repository myself and use the newest dev version to get the feature I had been waiting for so long. Since then, I usually update the dev version pretty frequently. I haven’t had problems with using this unstable channel yet.

Therefore, I would recommend you just manually use the latest dev version of youtube-dl.

I don’t exactly remember how I did it, but I think I just built the thing after removing the youtube-dl I had installed before, then moved it to /usr/bin/, and that’s it.
After doing that, as a user you see no difference between installing it officially and installing the dev version manually.
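
Roughly, the steps would look like this - a sketch only, assuming the upstream ytdl-org repository and a distro package previously installed via apt (adjust to your setup):

sudo apt remove youtube-dl                 # drop the previously installed package first
git clone https://github.com/ytdl-org/youtube-dl.git
cd youtube-dl
make youtube-dl                            # builds the self-contained executable
sudo cp youtube-dl /usr/bin/youtube-dl
sudo chmod a+rx /usr/bin/youtube-dl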


You’re probably onto something there…

I used to install it from apt… then tried making a local copy in my shared scripts folder in ~/bin/ - but that got tricky to manage across multiple computers… so I started installing it using pip (pip3, the Python 3 version)…

I usually update it with “sudo pip3 install --upgrade youtube-dl” - and I have, but symptom persists…

I suspect if I installed it from the main git repo - I’d have more chance at success… But it’s not a huge issue for me and I’ll wait till it’s fixed in the pip repos…
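
For what it’s worth, pip can also install straight from the upstream git repository, which might pick up a fix before it lands in the PyPI release - a sketch, assuming the ytdl-org repo:

sudo pip3 install --upgrade git+https://github.com/ytdl-org/youtube-dl.git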

One of the really good things about the pip version is that it stayed online and available (and installable) while everything else that hosted it wasn’t, during the recent “extortion” attempt by big media with a takedown notice to GitHub…


Yes. A thing I already did before, but even more so after the youtube-dl shock, was to privately mirror repositories I consider important but possibly ephemeral on my Git server. This way the source is preserved if another youtube-dl shock happens to one of those repositories.

As the youtube-dl shock and other cases with Google have shown: never rely on third-party services. Always back up on your own terms, if you really want to be safe!
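
A rough sketch of what such a mirror looks like - the private server URL and path are placeholders, not something from this thread:

git clone --mirror https://github.com/ytdl-org/youtube-dl.git
cd youtube-dl.git
git remote add private ssh://git@git.example.com/mirrors/youtube-dl.git
git push --mirror private
# re-run these two periodically to keep the mirror current:
git fetch --prune origin && git push --mirror private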


My gpodder problem has been solved.
It was a Parental Control toggle deep in the router’s Network Map. I turned it off for the desktop and all runs as before.
Thanks go to Thomas Perl on GitHub.
