From: Yen Chi Hsuan Date: Sun, 24 Jan 2016 17:15:11 +0000 (+0800) Subject: Merge pull request #8130 from dyn888/master X-Git-Url: http://git.bitcoin.ninja/index.cgi?a=commitdiff_plain;h=e9bd0f772b28176e86cfe8c641b6281a96be2ee4;hp=e1a0bfdffe25dda494a9da8b02fba0c9ad39f4fe;p=youtube-dl Merge pull request #8130 from dyn888/master [youtube] added vcodec/acodec/abr for multiple itags --- diff --git a/AUTHORS b/AUTHORS index 20f51009e..bb1f2d8d9 100644 --- a/AUTHORS +++ b/AUTHORS @@ -150,3 +150,8 @@ reiv Muratcan Simsek Evan Lu flatgreen +Brian Foley +Vignesh Venkat +Tom Gijselinck +Founder Fang +Andrew Alexeyew diff --git a/README.md b/README.md index 4fc83b8e3..724fb17d1 100644 --- a/README.md +++ b/README.md @@ -339,8 +339,8 @@ which means you can modify it, redistribute it or use it however you like. preference, for example: "srt" or "ass/srt/best" --sub-lang LANGS Languages of the subtitles to download - (optional) separated by commas, use IETF - language tags like 'en,pt' + (optional) separated by commas, use --list- + subs for available language tags ## Authentication Options: -u, --username USERNAME Login with this account ID @@ -464,15 +464,77 @@ youtube-dl_test_video_.mp4 # A simple file name # FORMAT SELECTION -By default youtube-dl tries to download the best quality, but sometimes you may want to download in a different format. -The simplest case is requesting a specific format, for example `-f 22`. You can get the list of available formats using `--list-formats`, you can also use a file extension (currently it supports aac, m4a, mp3, mp4, ogg, wav, webm) or the special names `best`, `bestvideo`, `bestaudio` and `worst`. +By default youtube-dl tries to download the best available quality, i.e. if you want the best quality you **don't need** to pass any special options, youtube-dl will guess it for you by **default**. -If you want to download multiple videos and they don't have the same formats available, you can specify the order of preference using slashes, as in `-f 22/17/18`. You can also filter the video results by putting a condition in brackets, as in `-f "best[height=720]"` (or `-f "[filesize>10M]"`). This works for filesize, height, width, tbr, abr, vbr, asr, and fps and the comparisons <, <=, >, >=, =, != and for ext, acodec, vcodec, container, and protocol and the comparisons =, != . Formats for which the value is not known are excluded unless you put a question mark (?) after the operator. You can combine format filters, so `-f "[height <=? 720][tbr>500]"` selects up to 720p videos (or videos where the height is not known) with a bitrate of at least 500 KBit/s. Use commas to download multiple formats, such as `-f 136/137/mp4/bestvideo,140/m4a/bestaudio`. You can merge the video and audio of two formats into a single file using `-f +` (requires ffmpeg or avconv), for example `-f bestvideo+bestaudio`. Format selectors can also be grouped using parentheses, for example if you want to download the best mp4 and webm formats with a height lower than 480 you can use `-f '(mp4,webm)[height<480]'`. +But sometimes you may want to download in a different format, for example when you are on a slow or intermittent connection. The key mechanism for achieving this is so called *format selection* based on which you can explicitly specify desired format, select formats based on some criterion or criteria, setup precedence and much more. -Since the end of April 2015 and version 2015.04.26 youtube-dl uses `-f bestvideo+bestaudio/best` as default format selection (see #5447, #5456). 
If ffmpeg or avconv are installed this results in downloading `bestvideo` and `bestaudio` separately and muxing them together into a single file giving the best overall quality available. Otherwise it falls back to `best` and results in downloading the best available quality served as a single file. `best` is also needed for videos that don't come from YouTube because they don't provide the audio and video in two different files. If you want to only download some dash formats (for example if you are not interested in getting videos with a resolution higher than 1080p), you can add `-f bestvideo[height<=?1080]+bestaudio/best` to your configuration file. Note that if you use youtube-dl to stream to `stdout` (and most likely to pipe it to your media player then), i.e. you explicitly specify output template as `-o -`, youtube-dl still uses `-f best` format selection in order to start content delivery immediately to your player and not to wait until `bestvideo` and `bestaudio` are downloaded and muxed.
+The general syntax for format selection is `--format FORMAT` or, shorter, `-f FORMAT`, where `FORMAT` is a *selector expression*, i.e. an expression that describes the format or formats you would like to download.
+
+The simplest case is requesting a specific format; for example, with `-f 22` you can download the format whose format code equals 22. You can get the list of available format codes for a particular video using `--list-formats` or `-F`. Note that these format codes are extractor-specific.
+
+You can also use a file extension (currently `3gp`, `aac`, `flv`, `m4a`, `mp3`, `mp4`, `ogg`, `wav`, `webm` are supported) to download the best quality format with that extension served as a single file, e.g. `-f webm` will download the best quality format with the `webm` extension served as a single file.
+
+You can also use special names to select particular edge-case formats:
+ - `best`: Select the best quality format represented by a single file with video and audio
+ - `worst`: Select the worst quality format represented by a single file with video and audio
+ - `bestvideo`: Select the best quality video-only format (e.g. DASH video); may not be available
+ - `worstvideo`: Select the worst quality video-only format; may not be available
+ - `bestaudio`: Select the best quality audio-only format; may not be available
+ - `worstaudio`: Select the worst quality audio-only format; may not be available
+
+For example, to download the worst quality video-only format you can use `-f worstvideo`.
+
+If you want to download multiple videos and they don't have the same formats available, you can specify the order of preference using slashes. Note that the slash is left-associative, i.e. formats on the left-hand side are preferred; for example, `-f 22/17/18` will download format 22 if it's available, otherwise format 17, otherwise format 18, and otherwise it will complain that no suitable formats are available for download.
+
+If you want to download several formats of the same video, use a comma as a separator, e.g. `-f 22,17,18` will download all three of these formats if they are available. A more sophisticated example combines this with the precedence feature: `-f 136/137/mp4/bestvideo,140/m4a/bestaudio`.
+
+You can also filter the video formats by putting a condition in brackets, as in `-f "best[height=720]"` (or `-f "[filesize>10M]"`). 
+
+The following numeric meta fields can be used with the comparisons `<`, `<=`, `>`, `>=`, `=` (equals) and `!=` (not equals):
+ - `filesize`: The number of bytes, if known in advance
+ - `width`: Width of the video, if known
+ - `height`: Height of the video, if known
+ - `tbr`: Average bitrate of audio and video in KBit/s
+ - `abr`: Average audio bitrate in KBit/s
+ - `vbr`: Average video bitrate in KBit/s
+ - `asr`: Audio sampling rate in Hertz
+ - `fps`: Frame rate
+
+Filtering also works with the comparisons `=` (equals), `!=` (not equals), `^=` (begins with), `$=` (ends with) and `*=` (contains) for the following string meta fields:
+ - `ext`: File extension
+ - `acodec`: Name of the audio codec in use
+ - `vcodec`: Name of the video codec in use
+ - `container`: Name of the container format
+ - `protocol`: The protocol that will be used for the actual download, lower-case: `http`, `https`, `rtsp`, `rtmp`, `rtmpe`, `m3u8` or `m3u8_native`
+
+Note that none of the aforementioned meta fields are guaranteed to be present, since this solely depends on the metadata obtained by a particular extractor, i.e. the metadata offered by the video hoster.
+
+Formats for which the value is not known are excluded unless you put a question mark (`?`) after the operator. You can combine format filters, so `-f "[height <=? 720][tbr>500]"` selects up to 720p videos (or videos where the height is not known) with a bitrate of at least 500 KBit/s.
+
+You can merge the video and audio of two formats into a single file using `-f <video-format>+<audio-format>` (requires ffmpeg or avconv installed), for example `-f bestvideo+bestaudio` will download the best video-only format and the best audio-only format and mux them together with ffmpeg/avconv.
+
+Format selectors can also be grouped using parentheses; for example, if you want to download the best mp4 and webm formats with a height lower than 480 you can use `-f '(mp4,webm)[height<480]'`.
+
+Since the end of April 2015 and version 2015.04.26 youtube-dl uses `-f bestvideo+bestaudio/best` as its default format selection (see #5447, #5456). If ffmpeg or avconv are installed, this results in downloading `bestvideo` and `bestaudio` separately and muxing them together into a single file, giving the best overall quality available. Otherwise it falls back to `best` and results in downloading the best available quality served as a single file. `best` is also needed for videos that don't come from YouTube because they don't provide the audio and video in two different files. If you only want to download some DASH formats (for example, if you are not interested in getting videos with a resolution higher than 1080p), you can add `-f bestvideo[height<=?1080]+bestaudio/best` to your configuration file. Note that if you use youtube-dl to stream to `stdout` (and most likely to pipe it to your media player), i.e. you explicitly specify the output template as `-o -`, youtube-dl still uses `-f best` format selection in order to start content delivery to your player immediately rather than waiting until `bestvideo` and `bestaudio` are downloaded and muxed. If you want to preserve the old format selection behavior (prior to youtube-dl 2015.04.26), i.e. you want to download the best available quality media served as a single file, you should explicitly specify your choice with `-f best`. You may want to add it to the [configuration file](#configuration) so you don't have to type it every time you run youtube-dl. 
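For instance, the string operators above can be exercised like this (an illustrative sketch: it assumes the extractor reports codec names beginning with `avc1` for H.264 video and `mp4a` for AAC audio, which is common for MP4/DASH formats but not guaranteed):

```bash
# Pick the best single-file format whose video codec name starts with avc1 (H.264)
$ youtube-dl -f 'best[vcodec^=avc1]'

# Pick the best audio-only format whose codec name contains mp4a (AAC),
# falling back to any best audio-only format if none matches
$ youtube-dl -f 'bestaudio[acodec*=mp4a]/bestaudio'
```

Either filter can of course be combined with the precedence and merging syntax shown above and with the examples that follow.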
+Examples (note on Windows you may need to use double quotes instead of single): +```bash +# Download best mp4 format available or any other best if no mp4 available +$ youtube-dl -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best' + +# Download best format available but not better that 480p +$ youtube-dl -f 'bestvideo[height<=480]+bestaudio/best[height<=480]' + +# Download best video only format but no bigger that 50 MB +$ youtube-dl -f 'best[filesize<50M]' + +# Download best format available via direct link over HTTP/HTTPS protocol +$ youtube-dl -f '(bestvideo+bestaudio/best)[protocol^=http]' +``` + + # VIDEO SELECTION Videos can be filtered by their upload date using the options `--date`, `--datebefore` or `--dateafter`. They accept dates in two formats: @@ -627,7 +689,7 @@ Either prepend `http://www.youtube.com/watch?v=` or separate the ID from the opt Use the `--cookies` option, for example `--cookies /path/to/cookies/file.txt`. Note that the cookies file must be in Mozilla/Netscape format and the first line of the cookies file must be either `# HTTP Cookie File` or `# Netscape HTTP Cookie File`. Make sure you have correct [newline format](https://en.wikipedia.org/wiki/Newline) in the cookies file and convert newlines if necessary to correspond with your OS, namely `CRLF` (`\r\n`) for Windows, `LF` (`\n`) for Linux and `CR` (`\r`) for Mac OS. `HTTP Error 400: Bad Request` when using `--cookies` is a good sign of invalid newline format. -Passing cookies to youtube-dl is a good way to workaround login when a particular extractor does not implement it explicitly. +Passing cookies to youtube-dl is a good way to workaround login when a particular extractor does not implement it explicitly. Another use case is working around [CAPTCHA](https://en.wikipedia.org/wiki/CAPTCHA) some websites require you to solve in particular cases in order to get access (e.g. YouTube, CloudFlare). ### Can you add support for this anime video site, or site which shows current movies for free? 
diff --git a/devscripts/gh-pages/update-copyright.py b/devscripts/gh-pages/update-copyright.py index 3663c8afe..e6c3abc8d 100755 --- a/devscripts/gh-pages/update-copyright.py +++ b/devscripts/gh-pages/update-copyright.py @@ -5,7 +5,7 @@ from __future__ import with_statement, unicode_literals import datetime import glob -import io # For Python 2 compatibilty +import io # For Python 2 compatibility import os import re diff --git a/docs/supportedsites.md b/docs/supportedsites.md index 84c166805..e86467cfa 100644 --- a/docs/supportedsites.md +++ b/docs/supportedsites.md @@ -1,6 +1,7 @@ # Supported sites - **1tv**: Первый канал - **1up.com** + - **20min** - **220.ro** - **22tracks:genre** - **22tracks:track** @@ -23,6 +24,7 @@ - **AdobeTVShow** - **AdobeTVVideo** - **AdultSwim** + - **aenetworks**: A+E Networks: A&E, Lifetime, History.com, FYI Network - **Aftonbladet** - **AirMozilla** - **AlJazeera** @@ -41,6 +43,7 @@ - **ARD:mediathek** - **arte.tv** - **arte.tv:+7** + - **arte.tv:cinema** - **arte.tv:concert** - **arte.tv:creative** - **arte.tv:ddc** @@ -64,6 +67,7 @@ - **Beeg** - **BehindKink** - **Bet** + - **Bigflix** - **Bild**: Bild.de - **BiliBili** - **BleacherReport** @@ -83,6 +87,7 @@ - **CamdemyFolder** - **canalc2.tv** - **Canalplus**: canalplus.fr, piwiplus.fr and d8.tv + - **Canvas** - **CBS** - **CBSNews**: CBS News - **CBSSports** @@ -120,6 +125,8 @@ - **CSpan**: C-SPAN - **CtsNews**: 華視新聞 - **culturebox.francetvinfo.fr** + - **CultureUnplugged** + - **CWTV** - **dailymotion** - **dailymotion:playlist** - **dailymotion:user** @@ -136,6 +143,7 @@ - **defense.gouv.fr** - **democracynow** - **DHM**: Filmarchiv - Deutsches Historisches Museum + - **Digiteka** - **Discovery** - **Dotsub** - **DouyuTV**: 斗鱼 @@ -227,7 +235,6 @@ - **Helsinki**: helsinki.fi - **HentaiStigma** - **HistoricFilms** - - **History** - **hitbox** - **hitbox:live** - **HornBunny** @@ -250,11 +257,12 @@ - **Instagram** - **instagram:user**: Instagram user profile - **InternetVideoArchive** - - **IPrima** + - **IPrima** (Currently broken) - **iqiyi**: 爱奇艺 - **Ir90Tv** - **ivi**: ivi.ru - **ivi:compilation**: ivi.ru compilations + - **ivideon**: Ivideon TV - **Izlesene** - **JadoreCettePub** - **JeuxVideo** @@ -282,7 +290,9 @@ - **la7.tv** - **Laola1Tv** - **Lecture2Go** + - **Lemonde** - **Letv**: 乐视网 + - **LetvCloud**: 乐视云 - **LetvPlaylist** - **LetvTv** - **Libsyn** @@ -295,6 +305,7 @@ - **livestream** - **livestream:original** - **LnkGo** + - **LoveHomePorn** - **lrt.lt** - **lynda**: lynda.com videos - **lynda:course**: lynda.com online courses @@ -386,13 +397,14 @@ - **nowness** - **nowness:playlist** - **nowness:series** - - **NowTV** + - **NowTV** (Currently broken) - **NowTVList** - **nowvideo**: NowVideo - **npo**: npo.nl and ntr.nl - **npo.nl:live** - **npo.nl:radio** - **npo.nl:radio:fragment** + - **Npr** - **NRK** - **NRKPlaylist** - **NRKTV**: NRK TV and NRK Radio @@ -464,11 +476,13 @@ - **RegioTV** - **Restudy** - **ReverbNation** + - **Revision3** - **RingTV** - **RottenTomatoes** - **Roxwel** - **RTBF** - - **Rte** + - **rte**: Raidió Teilifís Éireann TV + - **rte:radio**: Raidió Teilifís Éireann radio - **rtl.nl**: rtl.nl and rtlxl.nl - **RTL2** - **RTP** @@ -478,6 +492,7 @@ - **rtve.es:live**: RTVE.es live streams - **RTVNH** - **RUHD** + - **RulePorn** - **rutube**: Rutube videos - **rutube:channel**: Rutube channels - **rutube:embed**: Rutube embedded videos @@ -573,7 +588,6 @@ - **TeleMB** - **TeleTask** - **TenPlay** - - **TestTube** - **TF1** - **TheIntercept** - **TheOnion** @@ -595,10 
+609,13 @@ - **ToypicsUser**: Toypics user profile - **TrailerAddict** (Currently broken) - **Trilulilu** + - **trollvids** - **TruTube** - **Tube8** - **TubiTv** - - **Tudou** + - **tudou** + - **tudou:album** + - **tudou:playlist** - **Tumblr** - **tunein:clip** - **tunein:program** @@ -631,7 +648,6 @@ - **udemy** - **udemy:course** - **UDNEmbed**: 聯合影音 - - **Ultimedia** - **Unistra** - **Urort**: NRK P3 Urørt - **ustream** @@ -651,12 +667,12 @@ - **video.mit.edu** - **VideoDetective** - **videofy.me** - - **VideoMega** + - **VideoMega** (Currently broken) - **videomore** - **videomore:season** - **videomore:video** - **VideoPremium** - - **VideoTt**: video.tt - Your True Tube + - **VideoTt**: video.tt - Your True Tube (Currently broken) - **videoweed**: VideoWeed - **Vidme** - **Vidzi** @@ -698,6 +714,7 @@ - **WebOfStories** - **WebOfStoriesPlaylist** - **Weibo** + - **WeiqiTV**: WQTV - **wholecloud**: WholeCloud - **Wimp** - **Wistia** @@ -749,3 +766,4 @@ - **ZDFChannel** - **zingmp3:album**: mp3.zing.vn albums - **zingmp3:song**: mp3.zing.vn songs + - **ZippCast** diff --git a/test/test_YoutubeDL.py b/test/test_YoutubeDL.py index 0388c0bf3..0caa43843 100644 --- a/test/test_YoutubeDL.py +++ b/test/test_YoutubeDL.py @@ -12,7 +12,7 @@ import copy from test.helper import FakeYDL, assertRegexpMatches from youtube_dl import YoutubeDL -from youtube_dl.compat import compat_str +from youtube_dl.compat import compat_str, compat_urllib_error from youtube_dl.extractor import YoutubeIE from youtube_dl.postprocessor.common import PostProcessor from youtube_dl.utils import ExtractorError, match_filter_func @@ -631,6 +631,11 @@ class TestYoutubeDL(unittest.TestCase): result = get_ids({'playlist_items': '10'}) self.assertEqual(result, []) + def test_urlopen_no_file_protocol(self): + # see https://github.com/rg3/youtube-dl/issues/8227 + ydl = YDL() + self.assertRaises(compat_urllib_error.URLError, ydl.urlopen, 'file:///etc/passwd') + if __name__ == '__main__': unittest.main() diff --git a/test/test_update.py b/test/test_update.py new file mode 100644 index 000000000..d9c71511d --- /dev/null +++ b/test/test_update.py @@ -0,0 +1,30 @@ +#!/usr/bin/env python + +from __future__ import unicode_literals + +# Allow direct execution +import os +import sys +import unittest +sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + + +import json +from youtube_dl.update import rsa_verify + + +class TestUpdate(unittest.TestCase): + def test_rsa_verify(self): + UPDATES_RSA_KEY = (0x9d60ee4d8f805312fdb15a62f87b95bd66177b91df176765d13514a0f1754bcd2057295c5b6f1d35daa6742c3ffc9a82d3e118861c207995a8031e151d863c9927e304576bc80692bc8e094896fcf11b66f3e29e04e3a71e9a11558558acea1840aec37fc396fb6b65dc81a1c4144e03bd1c011de62e3f1357b327d08426fe93, 65537) + with open(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'versions.json'), 'rb') as f: + versions_info = f.read().decode() + versions_info = json.loads(versions_info) + signature = versions_info['signature'] + del versions_info['signature'] + self.assertTrue(rsa_verify( + json.dumps(versions_info, sort_keys=True).encode('utf-8'), + signature, UPDATES_RSA_KEY)) + + +if __name__ == '__main__': + unittest.main() diff --git a/test/test_write_annotations.py b/test/test_write_annotations.py index 84b8f39e0..8de08f2d6 100644 --- a/test/test_write_annotations.py +++ b/test/test_write_annotations.py @@ -66,7 +66,7 @@ class TestAnnotations(unittest.TestCase): textTag = a.find('TEXT') text = textTag.text self.assertTrue(text in expected) # assertIn 
only added in python 2.7 - # remove the first occurance, there could be more than one annotation with the same text + # remove the first occurrence, there could be more than one annotation with the same text expected.remove(text) # We should have seen (and removed) all the expected annotation texts. self.assertEqual(len(expected), 0, 'Not all expected annotations were found.') diff --git a/test/versions.json b/test/versions.json new file mode 100644 index 000000000..6cccc2259 --- /dev/null +++ b/test/versions.json @@ -0,0 +1,34 @@ +{ + "latest": "2013.01.06", + "signature": "72158cdba391628569ffdbea259afbcf279bbe3d8aeb7492690735dc1cfa6afa754f55c61196f3871d429599ab22f2667f1fec98865527b32632e7f4b3675a7ef0f0fbe084d359256ae4bba68f0d33854e531a70754712f244be71d4b92e664302aa99653ee4df19800d955b6c4149cd2b3f24288d6e4b40b16126e01f4c8ce6", + "versions": { + "2013.01.02": { + "bin": [ + "http://youtube-dl.org/downloads/2013.01.02/youtube-dl", + "f5b502f8aaa77675c4884938b1e4871ebca2611813a0c0e74f60c0fbd6dcca6b" + ], + "exe": [ + "http://youtube-dl.org/downloads/2013.01.02/youtube-dl.exe", + "75fa89d2ce297d102ff27675aa9d92545bbc91013f52ec52868c069f4f9f0422" + ], + "tar": [ + "http://youtube-dl.org/downloads/2013.01.02/youtube-dl-2013.01.02.tar.gz", + "6a66d022ac8e1c13da284036288a133ec8dba003b7bd3a5179d0c0daca8c8196" + ] + }, + "2013.01.06": { + "bin": [ + "http://youtube-dl.org/downloads/2013.01.06/youtube-dl", + "64b6ed8865735c6302e836d4d832577321b4519aa02640dc508580c1ee824049" + ], + "exe": [ + "http://youtube-dl.org/downloads/2013.01.06/youtube-dl.exe", + "58609baf91e4389d36e3ba586e21dab882daaaee537e4448b1265392ae86ff84" + ], + "tar": [ + "http://youtube-dl.org/downloads/2013.01.06/youtube-dl-2013.01.06.tar.gz", + "fe77ab20a95d980ed17a659aa67e371fdd4d656d19c4c7950e7b720b0c2f1a86" + ] + } + } +} \ No newline at end of file diff --git a/youtube_dl/YoutubeDL.py b/youtube_dl/YoutubeDL.py index 3b2be3159..09d2b18f2 100755 --- a/youtube_dl/YoutubeDL.py +++ b/youtube_dl/YoutubeDL.py @@ -46,6 +46,7 @@ from .utils import ( DateRange, DEFAULT_OUTTMPL, determine_ext, + determine_protocol, DownloadError, encode_compat_str, encodeFilename, @@ -898,6 +899,9 @@ class YoutubeDL(object): STR_OPERATORS = { '=': operator.eq, '!=': operator.ne, + '^=': lambda attr, value: attr.startswith(value), + '$=': lambda attr, value: attr.endswith(value), + '*=': lambda attr, value: value in attr, } str_operator_rex = re.compile(r'''(?x) \s*(?Pext|acodec|vcodec|container|protocol) @@ -1244,6 +1248,12 @@ class YoutubeDL(object): except (ValueError, OverflowError, OSError): pass + # Auto generate title fields corresponding to the *_number fields when missing + # in order to always have clean titles. This is very common for TV series. 
+ for field in ('chapter', 'season', 'episode'): + if info_dict.get('%s_number' % field) is not None and not info_dict.get(field): + info_dict[field] = '%s %d' % (field.capitalize(), info_dict['%s_number' % field]) + subtitles = info_dict.get('subtitles') if subtitles: for _, subtitle in subtitles.items(): @@ -1300,6 +1310,10 @@ class YoutubeDL(object): # Automatically determine file extension if missing if 'ext' not in format: format['ext'] = determine_ext(format['url']).lower() + # Automatically determine protocol if missing (useful for format + # selection purposes) + if 'protocol' not in format: + format['protocol'] = determine_protocol(format) # Add HTTP headers, so that external programs can use them from the # json output full_format_info = info_dict.copy() @@ -1312,7 +1326,7 @@ class YoutubeDL(object): # only set the 'formats' fields if the original info_dict list them # otherwise we end up with a circular reference, the first (and unique) # element in the 'formats' field in info_dict is info_dict itself, - # wich can't be exported to json + # which can't be exported to json info_dict['formats'] = formats if self.params.get('listformats'): self.list_formats(info_dict) @@ -1986,8 +2000,19 @@ class YoutubeDL(object): https_handler = make_HTTPS_handler(self.params, debuglevel=debuglevel) ydlh = YoutubeDLHandler(self.params, debuglevel=debuglevel) data_handler = compat_urllib_request_DataHandler() + + # When passing our own FileHandler instance, build_opener won't add the + # default FileHandler and allows us to disable the file protocol, which + # can be used for malicious purposes (see + # https://github.com/rg3/youtube-dl/issues/8227) + file_handler = compat_urllib_request.FileHandler() + + def file_open(*args, **kwargs): + raise compat_urllib_error.URLError('file:// scheme is explicitly disabled in youtube-dl for security reasons') + file_handler.file_open = file_open + opener = compat_urllib_request.build_opener( - proxy_handler, https_handler, cookie_processor, ydlh, data_handler) + proxy_handler, https_handler, cookie_processor, ydlh, data_handler, file_handler) # Delete the default user-agent header, which would otherwise apply in # cases where our custom HTTP handler doesn't come into play diff --git a/youtube_dl/compat.py b/youtube_dl/compat.py index a3e85264a..8ab688001 100644 --- a/youtube_dl/compat.py +++ b/youtube_dl/compat.py @@ -433,7 +433,7 @@ if sys.version_info < (3, 0) and sys.platform == 'win32': else: compat_getpass = getpass.getpass -# Old 2.6 and 2.7 releases require kwargs to be bytes +# Python < 2.6.5 require kwargs to be bytes try: def _testfunc(x): pass diff --git a/youtube_dl/downloader/common.py b/youtube_dl/downloader/common.py index beae8c4d0..fc7521598 100644 --- a/youtube_dl/downloader/common.py +++ b/youtube_dl/downloader/common.py @@ -295,7 +295,7 @@ class FileDownloader(object): def report_retry(self, count, retries): """Report retry in case of HTTP error 5xx""" - self.to_screen('[download] Got server HTTP error. Retrying (attempt %d of %d)...' % (count, retries)) + self.to_screen('[download] Got server HTTP error. Retrying (attempt %d of %.0f)...' 
% (count, retries)) def report_file_already_downloaded(self, file_name): """Report file has already been fully downloaded.""" diff --git a/youtube_dl/downloader/fragment.py b/youtube_dl/downloader/fragment.py index 5a64b29ee..0c9113d0f 100644 --- a/youtube_dl/downloader/fragment.py +++ b/youtube_dl/downloader/fragment.py @@ -59,37 +59,43 @@ class FragmentFD(FileDownloader): 'filename': ctx['filename'], 'tmpfilename': ctx['tmpfilename'], } + start = time.time() - ctx['started'] = start + ctx.update({ + 'started': start, + # Total complete fragments downloaded so far in bytes + 'complete_frags_downloaded_bytes': 0, + # Amount of fragment's bytes downloaded by the time of the previous + # frag progress hook invocation + 'prev_frag_downloaded_bytes': 0, + }) def frag_progress_hook(s): if s['status'] not in ('downloading', 'finished'): return - frag_total_bytes = s.get('total_bytes', 0) - if s['status'] == 'finished': - state['downloaded_bytes'] += frag_total_bytes - state['frag_index'] += 1 + frag_total_bytes = s.get('total_bytes') or 0 estimated_size = ( - (state['downloaded_bytes'] + frag_total_bytes) / + (ctx['complete_frags_downloaded_bytes'] + frag_total_bytes) / (state['frag_index'] + 1) * total_frags) time_now = time.time() state['total_bytes_estimate'] = estimated_size state['elapsed'] = time_now - start if s['status'] == 'finished': - progress = self.calc_percent(state['frag_index'], total_frags) + state['frag_index'] += 1 + state['downloaded_bytes'] += frag_total_bytes - ctx['prev_frag_downloaded_bytes'] + ctx['complete_frags_downloaded_bytes'] = state['downloaded_bytes'] + ctx['prev_frag_downloaded_bytes'] = 0 else: frag_downloaded_bytes = s['downloaded_bytes'] - frag_progress = self.calc_percent(frag_downloaded_bytes, - frag_total_bytes) - progress = self.calc_percent(state['frag_index'], total_frags) - progress += frag_progress / float(total_frags) - + state['downloaded_bytes'] += frag_downloaded_bytes - ctx['prev_frag_downloaded_bytes'] state['eta'] = self.calc_eta( - start, time_now, estimated_size, state['downloaded_bytes'] + frag_downloaded_bytes) + start, time_now, estimated_size, + state['downloaded_bytes']) state['speed'] = s.get('speed') + ctx['prev_frag_downloaded_bytes'] = frag_downloaded_bytes self._hook_progress(state) ctx['dl'].add_progress_hook(frag_progress_hook) diff --git a/youtube_dl/downloader/hls.py b/youtube_dl/downloader/hls.py index b5a3e1167..10b83c6b2 100644 --- a/youtube_dl/downloader/hls.py +++ b/youtube_dl/downloader/hls.py @@ -46,7 +46,16 @@ class HlsFD(FileDownloader): self._debug_cmd(args) - retval = subprocess.call(args) + proc = subprocess.Popen(args, stdin=subprocess.PIPE) + try: + retval = proc.wait() + except KeyboardInterrupt: + # subprocces.run would send the SIGKILL signal to ffmpeg and the + # mp4 file couldn't be played, but if we ask ffmpeg to quit it + # produces a file that is playable (this is mostly useful for live + # streams) + proc.communicate(b'q') + raise if retval == 0: fsize = os.path.getsize(encodeFilename(tmpfilename)) self.to_screen('\r[%s] %s bytes' % (args[0], fsize)) diff --git a/youtube_dl/extractor/__init__.py b/youtube_dl/extractor/__init__.py index 4c7e5223d..245e4d044 100644 --- a/youtube_dl/extractor/__init__.py +++ b/youtube_dl/extractor/__init__.py @@ -15,6 +15,7 @@ from .adobetv import ( AdobeTVVideoIE, ) from .adultswim import AdultSwimIE +from .aenetworks import AENetworksIE from .aftonbladet import AftonbladetIE from .airmozilla import AirMozillaIE from .aljazeera import AlJazeeraIE @@ -41,6 +42,7 @@ from .arte 
import ( ArteTVCreativeIE, ArteTVConcertIE, ArteTVFutureIE, + ArteTVCinemaIE, ArteTVDDCIE, ArteTVEmbedIE, ) @@ -61,6 +63,7 @@ from .beeg import BeegIE from .behindkink import BehindKinkIE from .beatportpro import BeatportProIE from .bet import BetIE +from .bigflix import BigflixIE from .bild import BildIE from .bilibili import BiliBiliIE from .bleacherreport import ( @@ -85,6 +88,7 @@ from .camdemy import ( ) from .canalplus import CanalplusIE from .canalc2 import Canalc2IE +from .canvas import CanvasIE from .cbs import CBSIE from .cbsnews import CBSNewsIE from .cbssports import CBSSportsIE @@ -127,6 +131,8 @@ from .crunchyroll import ( ) from .cspan import CSpanIE from .ctsnews import CtsNewsIE +from .cultureunplugged import CultureUnpluggedIE +from .cwtv import CWTVIE from .dailymotion import ( DailymotionIE, DailymotionPlaylistIE, @@ -261,7 +267,6 @@ from .hellporno import HellPornoIE from .helsinki import HelsinkiIE from .hentaistigma import HentaiStigmaIE from .historicfilms import HistoricFilmsIE -from .history import HistoryIE from .hitbox import HitboxIE, HitboxLiveIE from .hornbunny import HornBunnyIE from .hotnewhiphop import HotNewHipHopIE @@ -299,6 +304,7 @@ from .ivi import ( IviIE, IviCompilationIE ) +from .ivideon import IvideonIE from .izlesene import IzleseneIE from .jadorecettepub import JadoreCettePubIE from .jeuxvideo import JeuxVideoIE @@ -328,10 +334,12 @@ from .kuwo import ( from .la7 import LA7IE from .laola1tv import Laola1TvIE from .lecture2go import Lecture2GoIE +from .lemonde import LemondeIE from .letv import ( LetvIE, LetvTvIE, - LetvPlaylistIE + LetvPlaylistIE, + LetvCloudIE, ) from .libsyn import LibsynIE from .lifenews import ( @@ -350,6 +358,7 @@ from .livestream import ( LivestreamShortenerIE, ) from .lnkgo import LnkGoIE +from .lovehomeporn import LoveHomePornIE from .lrt import LRTIE from .lynda import ( LyndaIE, @@ -473,6 +482,7 @@ from .npo import ( VPROIE, WNLIE ) +from .npr import NprIE from .nrk import ( NRKIE, NRKPlaylistIE, @@ -563,7 +573,7 @@ from .ro220 import Ro220IE from .rottentomatoes import RottenTomatoesIE from .roxwel import RoxwelIE from .rtbf import RTBFIE -from .rte import RteIE +from .rte import RteIE, RteRadioIE from .rtlnl import RtlNlIE from .rtl2 import RTL2IE from .rtp import RTPIE @@ -571,6 +581,7 @@ from .rts import RTSIE from .rtve import RTVEALaCartaIE, RTVELiveIE, RTVEInfantilIE from .rtvnh import RTVNHIE from .ruhd import RUHDIE +from .ruleporn import RulePornIE from .rutube import ( RutubeIE, RutubeChannelIE, @@ -717,10 +728,15 @@ from .toutv import TouTvIE from .toypics import ToypicsUserIE, ToypicsIE from .traileraddict import TrailerAddictIE from .trilulilu import TriluliluIE +from .trollvids import TrollvidsIE from .trutube import TruTubeIE from .tube8 import Tube8IE from .tubitv import TubiTvIE -from .tudou import TudouIE +from .tudou import ( + TudouIE, + TudouPlaylistIE, + TudouAlbumIE, +) from .tumblr import TumblrIE from .tunein import ( TuneInClipIE, @@ -746,6 +762,7 @@ from .tvp import TvpIE, TvpSeriesIE from .tvplay import TVPlayIE from .tweakers import TweakersIE from .twentyfourvideo import TwentyFourVideoIE +from .twentymin import TwentyMinutenIE from .twentytwotracks import ( TwentyTwoTracksIE, TwentyTwoTracksGenreIE @@ -766,7 +783,7 @@ from .udemy import ( UdemyCourseIE ) from .udn import UDNEmbedIE -from .ultimedia import UltimediaIE +from .digiteka import DigitekaIE from .unistra import UnistraIE from .urort import UrortIE from .ustream import UstreamIE, UstreamChannelIE @@ -845,6 +862,7 @@ from 
.webofstories import ( WebOfStoriesPlaylistIE, ) from .weibo import WeiboIE +from .weiqitv import WeiqiTVIE from .wimp import WimpIE from .wistia import WistiaIE from .worldstarhiphop import WorldStarHipHopIE @@ -905,6 +923,7 @@ from .zingmp3 import ( ZingMp3SongIE, ZingMp3AlbumIE, ) +from .zippcast import ZippCastIE _ALL_CLASSES = [ klass diff --git a/youtube_dl/extractor/adultswim.py b/youtube_dl/extractor/adultswim.py index bf21a6887..8157da2cb 100644 --- a/youtube_dl/extractor/adultswim.py +++ b/youtube_dl/extractor/adultswim.py @@ -187,7 +187,8 @@ class AdultSwimIE(InfoExtractor): media_url = file_el.text if determine_ext(media_url) == 'm3u8': formats.extend(self._extract_m3u8_formats( - media_url, segment_title, 'mp4', preference=0, m3u8_id='hls')) + media_url, segment_title, 'mp4', preference=0, + m3u8_id='hls', fatal=False)) else: formats.append({ 'format_id': '%s_%s' % (bitrate, ftype), diff --git a/youtube_dl/extractor/aenetworks.py b/youtube_dl/extractor/aenetworks.py new file mode 100644 index 000000000..43d7b0523 --- /dev/null +++ b/youtube_dl/extractor/aenetworks.py @@ -0,0 +1,66 @@ +from __future__ import unicode_literals + +from .common import InfoExtractor +from ..utils import smuggle_url + + +class AENetworksIE(InfoExtractor): + IE_NAME = 'aenetworks' + IE_DESC = 'A+E Networks: A&E, Lifetime, History.com, FYI Network' + _VALID_URL = r'https?://(?:www\.)?(?:(?:history|aetv|mylifetime)\.com|fyi\.tv)/(?:[^/]+/)+(?P[^/]+?)(?:$|[?#])' + + _TESTS = [{ + 'url': 'http://www.history.com/topics/valentines-day/history-of-valentines-day/videos/bet-you-didnt-know-valentines-day?m=528e394da93ae&s=undefined&f=1&free=false', + 'info_dict': { + 'id': 'g12m5Gyt3fdR', + 'ext': 'mp4', + 'title': "Bet You Didn't Know: Valentine's Day", + 'description': 'md5:7b57ea4829b391995b405fa60bd7b5f7', + }, + 'params': { + # m3u8 download + 'skip_download': True, + }, + 'add_ie': ['ThePlatform'], + 'expected_warnings': ['JSON-LD'], + }, { + 'url': 'http://www.history.com/shows/mountain-men/season-1/episode-1', + 'info_dict': { + 'id': 'eg47EERs_JsZ', + 'ext': 'mp4', + 'title': "Winter Is Coming", + 'description': 'md5:641f424b7a19d8e24f26dea22cf59d74', + }, + 'params': { + # m3u8 download + 'skip_download': True, + }, + 'add_ie': ['ThePlatform'], + }, { + 'url': 'http://www.aetv.com/shows/duck-dynasty/video/inlawful-entry', + 'only_matching': True + }, { + 'url': 'http://www.fyi.tv/shows/tiny-house-nation/videos/207-sq-ft-minnesota-prairie-cottage', + 'only_matching': True + }, { + 'url': 'http://www.mylifetime.com/shows/project-runway-junior/video/season-1/episode-6/superstar-clients', + 'only_matching': True + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + + webpage = self._download_webpage(url, video_id) + + video_url_re = [ + r'data-href="[^"]*/%s"[^>]+data-release-url="([^"]+)"' % video_id, + r"media_url\s*=\s*'([^']+)'" + ] + video_url = self._search_regex(video_url_re, webpage, 'video url') + + info = self._search_json_ld(webpage, video_id, fatal=False) + info.update({ + '_type': 'url_transparent', + 'url': smuggle_url(video_url, {'sig': {'key': 'crazyjava', 'secret': 's3cr3t'}}), + }) + return info diff --git a/youtube_dl/extractor/amp.py b/youtube_dl/extractor/amp.py index 1035d1c48..69e6baff7 100644 --- a/youtube_dl/extractor/amp.py +++ b/youtube_dl/extractor/amp.py @@ -76,5 +76,6 @@ class AMPIE(InfoExtractor): 'thumbnails': thumbnails, 'timestamp': parse_iso8601(item.get('pubDate'), ' '), 'duration': int_or_none(media_content[0].get('@attributes', 
{}).get('duration')), + 'subtitles': subtitles, 'formats': formats, } diff --git a/youtube_dl/extractor/anitube.py b/youtube_dl/extractor/anitube.py index 23f942ae2..2fd912da4 100644 --- a/youtube_dl/extractor/anitube.py +++ b/youtube_dl/extractor/anitube.py @@ -1,11 +1,9 @@ from __future__ import unicode_literals -import re +from .nuevo import NuevoBaseIE -from .common import InfoExtractor - -class AnitubeIE(InfoExtractor): +class AnitubeIE(NuevoBaseIE): IE_NAME = 'anitube.se' _VALID_URL = r'https?://(?:www\.)?anitube\.se/video/(?P\d+)' @@ -22,38 +20,11 @@ class AnitubeIE(InfoExtractor): } def _real_extract(self, url): - mobj = re.match(self._VALID_URL, url) - video_id = mobj.group('id') + video_id = self._match_id(url) webpage = self._download_webpage(url, video_id) key = self._search_regex( r'src=["\']https?://[^/]+/embed/([A-Za-z0-9_-]+)', webpage, 'key') - config_xml = self._download_xml( - 'http://www.anitube.se/nuevo/econfig.php?key=%s' % key, key) - - video_title = config_xml.find('title').text - thumbnail = config_xml.find('image').text - duration = float(config_xml.find('duration').text) - - formats = [] - video_url = config_xml.find('file') - if video_url is not None: - formats.append({ - 'format_id': 'sd', - 'url': video_url.text, - }) - video_url = config_xml.find('filehd') - if video_url is not None: - formats.append({ - 'format_id': 'hd', - 'url': video_url.text, - }) - - return { - 'id': video_id, - 'title': video_title, - 'thumbnail': thumbnail, - 'duration': duration, - 'formats': formats - } + return self._extract_nuevo( + 'http://www.anitube.se/nuevo/econfig.php?key=%s' % key, video_id) diff --git a/youtube_dl/extractor/arte.py b/youtube_dl/extractor/arte.py index 10301a8ea..b9e07f0ef 100644 --- a/youtube_dl/extractor/arte.py +++ b/youtube_dl/extractor/arte.py @@ -199,25 +199,19 @@ class ArteTVCreativeIE(ArteTVPlus7IE): class ArteTVFutureIE(ArteTVPlus7IE): IE_NAME = 'arte.tv:future' - _VALID_URL = r'https?://future\.arte\.tv/(?Pfr|de)/(thema|sujet)/.*?#article-anchor-(?P\d+)' + _VALID_URL = r'https?://future\.arte\.tv/(?Pfr|de)/(?P.+)' - _TEST = { - 'url': 'http://future.arte.tv/fr/sujet/info-sciences#article-anchor-7081', + _TESTS = [{ + 'url': 'http://future.arte.tv/fr/info-sciences/les-ecrevisses-aussi-sont-anxieuses', 'info_dict': { - 'id': '5201', + 'id': '050940-028-A', 'ext': 'mp4', - 'title': 'Les champignons au secours de la planète', - 'upload_date': '20131101', + 'title': 'Les écrevisses aussi peuvent être anxieuses', }, - } - - def _real_extract(self, url): - anchor_id, lang = self._extract_url_info(url) - webpage = self._download_webpage(url, anchor_id) - row = self._search_regex( - r'(?s)id="%s"[^>]*>.+?(]*arte_vp_url[^>]*>)' % anchor_id, - webpage, 'row') - return self._extract_from_webpage(row, anchor_id, lang) + }, { + 'url': 'http://future.arte.tv/fr/la-science-est-elle-responsable', + 'only_matching': True, + }] class ArteTVDDCIE(ArteTVPlus7IE): @@ -255,6 +249,23 @@ class ArteTVConcertIE(ArteTVPlus7IE): } +class ArteTVCinemaIE(ArteTVPlus7IE): + IE_NAME = 'arte.tv:cinema' + _VALID_URL = r'https?://cinema\.arte\.tv/(?Pde|fr)/(?P.+)' + + _TEST = { + 'url': 'http://cinema.arte.tv/de/node/38291', + 'md5': '6b275511a5107c60bacbeeda368c3aa1', + 'info_dict': { + 'id': '055876-000_PWA12025-D', + 'ext': 'mp4', + 'title': 'Tod auf dem Nil', + 'upload_date': '20160122', + 'description': 'md5:7f749bbb77d800ef2be11d54529b96bc', + }, + } + + class ArteTVEmbedIE(ArteTVPlus7IE): IE_NAME = 'arte.tv:embed' _VALID_URL = r'''(?x) diff --git 
a/youtube_dl/extractor/atresplayer.py b/youtube_dl/extractor/atresplayer.py index 3fb042cea..b8f9ae005 100644 --- a/youtube_dl/extractor/atresplayer.py +++ b/youtube_dl/extractor/atresplayer.py @@ -132,11 +132,6 @@ class AtresPlayerIE(InfoExtractor): }) formats.append(format_info) - m3u8_url = player.get('urlVideoHls') - if m3u8_url: - formats.extend(self._extract_m3u8_formats( - m3u8_url, episode_id, 'mp4', 'm3u8_native', m3u8_id='hls', fatal=False)) - timestamp = int_or_none(self._download_webpage( self._TIME_API_URL, video_id, 'Downloading timestamp', fatal=False), 1000, time.time()) diff --git a/youtube_dl/extractor/bbc.py b/youtube_dl/extractor/bbc.py index 7b169881a..1c493b72d 100644 --- a/youtube_dl/extractor/bbc.py +++ b/youtube_dl/extractor/bbc.py @@ -124,14 +124,14 @@ class BBCCoUkIE(InfoExtractor): }, 'skip': 'Episode is no longer available on BBC iPlayer Radio', }, { - 'url': 'http://www.bbc.co.uk/music/clips/p02frcc3', + 'url': 'http://www.bbc.co.uk/music/clips/p022h44b', 'note': 'Audio', 'info_dict': { - 'id': 'p02frcch', + 'id': 'p022h44j', 'ext': 'flv', - 'title': 'Pete Tong, Past, Present and Future Special, Madeon - After Hours mix', - 'description': 'French house superstar Madeon takes us out of the club and onto the after party.', - 'duration': 3507, + 'title': 'BBC Proms Music Guides, Rachmaninov: Symphonic Dances', + 'description': "In this Proms Music Guide, Andrew McGregor looks at Rachmaninov's Symphonic Dances.", + 'duration': 227, }, 'params': { # rtmp download @@ -182,13 +182,12 @@ class BBCCoUkIE(InfoExtractor): }, { # iptv-all mediaset fails with geolocation however there is no geo restriction # for this programme at all - 'url': 'http://www.bbc.co.uk/programmes/b06bp7lf', + 'url': 'http://www.bbc.co.uk/programmes/b06rkn85', 'info_dict': { - 'id': 'b06bp7kf', + 'id': 'b06rkms3', 'ext': 'flv', - 'title': "Annie Mac's Friday Night, B.Traits sits in for Annie", - 'description': 'B.Traits sits in for Annie Mac with a Mini-Mix from Disclosure.', - 'duration': 10800, + 'title': "Best of the Mini-Mixes 2015: Part 3, Annie Mac's Friday Night - BBC Radio 1", + 'description': "Annie has part three in the Best of the Mini-Mixes 2015, plus the year's Most Played!", }, 'params': { # rtmp download @@ -719,19 +718,10 @@ class BBCIE(BBCCoUkIE): webpage = self._download_webpage(url, playlist_id) - timestamp = None - playlist_title = None - playlist_description = None - - ld = self._parse_json( - self._search_regex( - r'(?s)', - webpage, 'ld json', default='{}'), - playlist_id, fatal=False) - if ld: - timestamp = parse_iso8601(ld.get('datePublished')) - playlist_title = ld.get('headline') - playlist_description = ld.get('articleBody') + json_ld_info = self._search_json_ld(webpage, playlist_id, default=None) + timestamp = json_ld_info.get('timestamp') + playlist_title = json_ld_info.get('title') + playlist_description = json_ld_info.get('description') if not timestamp: timestamp = parse_iso8601(self._search_regex( diff --git a/youtube_dl/extractor/beeg.py b/youtube_dl/extractor/beeg.py index c8d921daf..34c2a756f 100644 --- a/youtube_dl/extractor/beeg.py +++ b/youtube_dl/extractor/beeg.py @@ -34,7 +34,7 @@ class BeegIE(InfoExtractor): video_id = self._match_id(url) video = self._download_json( - 'http://beeg.com/api/v5/video/%s' % video_id, video_id) + 'https://api.beeg.com/api/v5/video/%s' % video_id, video_id) def split(o, e): def cut(s, x): @@ -60,7 +60,7 @@ class BeegIE(InfoExtractor): def decrypt_url(encrypted_url): encrypted_url = self._proto_relative_url( - 
encrypted_url.replace('{DATA_MARKERS}', ''), 'http:') + encrypted_url.replace('{DATA_MARKERS}', ''), 'https:') key = self._search_regex( r'/key=(.*?)%2Cend=', encrypted_url, 'key', default=None) if not key: diff --git a/youtube_dl/extractor/bigflix.py b/youtube_dl/extractor/bigflix.py new file mode 100644 index 000000000..33762ad93 --- /dev/null +++ b/youtube_dl/extractor/bigflix.py @@ -0,0 +1,85 @@ +# coding: utf-8 +from __future__ import unicode_literals + +import base64 +import re + +from .common import InfoExtractor +from ..compat import compat_urllib_parse_unquote + + +class BigflixIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?bigflix\.com/.+/(?P[0-9]+)' + _TESTS = [{ + 'url': 'http://www.bigflix.com/Hindi-movies/Action-movies/Singham-Returns/16537', + 'md5': 'ec76aa9b1129e2e5b301a474e54fab74', + 'info_dict': { + 'id': '16537', + 'ext': 'mp4', + 'title': 'Singham Returns', + 'description': 'md5:3d2ba5815f14911d5cc6a501ae0cf65d', + } + }, { + # 2 formats + 'url': 'http://www.bigflix.com/Tamil-movies/Drama-movies/Madarasapatinam/16070', + 'info_dict': { + 'id': '16070', + 'ext': 'mp4', + 'title': 'Madarasapatinam', + 'description': 'md5:63b9b8ed79189c6f0418c26d9a3452ca', + 'formats': 'mincount:2', + }, + 'params': { + 'skip_download': True, + } + }, { + # multiple formats + 'url': 'http://www.bigflix.com/Malayalam-movies/Drama-movies/Indian-Rupee/15967', + 'only_matching': True, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + + webpage = self._download_webpage(url, video_id) + + title = self._html_search_regex( + r']+class=["\']pagetitle["\'][^>]*>(.+?)', + webpage, 'title') + + def decode_url(quoted_b64_url): + return base64.b64decode(compat_urllib_parse_unquote( + quoted_b64_url).encode('ascii')).decode('utf-8') + + formats = [] + for height, encoded_url in re.findall( + r'ContentURL_(\d{3,4})[pP][^=]+=([^&]+)', webpage): + video_url = decode_url(encoded_url) + f = { + 'url': video_url, + 'format_id': '%sp' % height, + 'height': int(height), + } + if video_url.startswith('rtmp'): + f['ext'] = 'flv' + formats.append(f) + + file_url = self._search_regex( + r'file=([^&]+)', webpage, 'video url', default=None) + if file_url: + video_url = decode_url(file_url) + if all(f['url'] != video_url for f in formats): + formats.append({ + 'url': decode_url(file_url), + }) + + self._sort_formats(formats) + + description = self._html_search_meta('description', webpage) + + return { + 'id': video_id, + 'title': title, + 'description': description, + 'formats': formats + } diff --git a/youtube_dl/extractor/canalc2.py b/youtube_dl/extractor/canalc2.py index f6a1ff381..f1f128c45 100644 --- a/youtube_dl/extractor/canalc2.py +++ b/youtube_dl/extractor/canalc2.py @@ -9,9 +9,9 @@ from ..utils import parse_duration class Canalc2IE(InfoExtractor): IE_NAME = 'canalc2.tv' - _VALID_URL = r'https?://(?:www\.)?canalc2\.tv/video/(?P\d+)' + _VALID_URL = r'https?://(?:(?:www\.)?canalc2\.tv/video/|archives-canalc2\.u-strasbg\.fr/video\.asp\?.*\bidVideo=)(?P\d+)' - _TEST = { + _TESTS = [{ 'url': 'http://www.canalc2.tv/video/12163', 'md5': '060158428b650f896c542dfbb3d6487f', 'info_dict': { @@ -23,24 +23,36 @@ class Canalc2IE(InfoExtractor): 'params': { 'skip_download': True, # Requires rtmpdump } - } + }, { + 'url': 'http://archives-canalc2.u-strasbg.fr/video.asp?idVideo=11427&voir=oui', + 'only_matching': True, + }] def _real_extract(self, url): video_id = self._match_id(url) - webpage = self._download_webpage(url, video_id) - video_url = self._search_regex( - 
r'jwplayer\((["\'])Player\1\)\.setup\({[^}]*file\s*:\s*(["\'])(?P.+?)\2', - webpage, 'video_url', group='file') - formats = [{'url': video_url}] - if video_url.startswith('rtmp://'): - rtmp = re.search(r'^(?Prtmp://[^/]+/(?P.+/))(?Pmp4:.+)$', video_url) - formats[0].update({ - 'url': rtmp.group('url'), - 'ext': 'flv', - 'app': rtmp.group('app'), - 'play_path': rtmp.group('play_path'), - 'page_url': url, - }) + + webpage = self._download_webpage( + 'http://www.canalc2.tv/video/%s' % video_id, video_id) + + formats = [] + for _, video_url in re.findall(r'file\s*=\s*(["\'])(.+?)\1', webpage): + if video_url.startswith('rtmp://'): + rtmp = re.search( + r'^(?Prtmp://[^/]+/(?P.+/))(?Pmp4:.+)$', video_url) + formats.append({ + 'url': rtmp.group('url'), + 'format_id': 'rtmp', + 'ext': 'flv', + 'app': rtmp.group('app'), + 'play_path': rtmp.group('play_path'), + 'page_url': url, + }) + else: + formats.append({ + 'url': video_url, + 'format_id': 'http', + }) + self._sort_formats(formats) title = self._html_search_regex( r'(?s)class="[^"]*col_description[^"]*">.*?

<h3>(.*?)</h3>

', webpage, 'title') diff --git a/youtube_dl/extractor/canvas.py b/youtube_dl/extractor/canvas.py new file mode 100644 index 000000000..ee19ff836 --- /dev/null +++ b/youtube_dl/extractor/canvas.py @@ -0,0 +1,65 @@ +from __future__ import unicode_literals + +from .common import InfoExtractor +from ..utils import float_or_none + + +class CanvasIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?canvas\.be/video/(?:[^/]+/)*(?P[^/?#&]+)' + _TEST = { + 'url': 'http://www.canvas.be/video/de-afspraak/najaar-2015/de-afspraak-veilt-voor-de-warmste-week', + 'md5': 'ea838375a547ac787d4064d8c7860a6c', + 'info_dict': { + 'id': 'mz-ast-5e5f90b6-2d72-4c40-82c2-e134f884e93e', + 'display_id': 'de-afspraak-veilt-voor-de-warmste-week', + 'ext': 'mp4', + 'title': 'De afspraak veilt voor de Warmste Week', + 'description': 'md5:24cb860c320dc2be7358e0e5aa317ba6', + 'thumbnail': 're:^https?://.*\.jpg$', + 'duration': 49.02, + } + } + + def _real_extract(self, url): + display_id = self._match_id(url) + + webpage = self._download_webpage(url, display_id) + + title = self._search_regex( + r']+class="video__body__header__title"[^>]*>(.+?)', + webpage, 'title', default=None) or self._og_search_title(webpage) + + video_id = self._html_search_regex( + r'data-video=(["\'])(?P.+?)\1', webpage, 'video id', group='id') + + data = self._download_json( + 'https://mediazone.vrt.be/api/v1/canvas/assets/%s' % video_id, display_id) + + formats = [] + for target in data['targetUrls']: + format_url, format_type = target.get('url'), target.get('type') + if not format_url or not format_type: + continue + if format_type == 'HLS': + formats.extend(self._extract_m3u8_formats( + format_url, display_id, entry_protocol='m3u8_native', + ext='mp4', preference=0, fatal=False, m3u8_id=format_type)) + elif format_type == 'HDS': + formats.extend(self._extract_f4m_formats( + format_url, display_id, f4m_id=format_type, fatal=False)) + else: + formats.append({ + 'format_id': format_type, + 'url': format_url, + }) + self._sort_formats(formats) + + return { + 'id': video_id, + 'display_id': display_id, + 'title': title, + 'description': self._og_search_description(webpage), + 'formats': formats, + 'duration': float_or_none(data.get('duration'), 1000), + 'thumbnail': data.get('posterImageUrl'), + } diff --git a/youtube_dl/extractor/cbsnews.py b/youtube_dl/extractor/cbsnews.py index d211ec23b..480435e26 100644 --- a/youtube_dl/extractor/cbsnews.py +++ b/youtube_dl/extractor/cbsnews.py @@ -35,6 +35,11 @@ class CBSNewsIE(InfoExtractor): 'title': 'Fort Hood shooting: Army downplays mental illness as cause of attack', 'thumbnail': 're:^https?://.*\.jpg$', 'duration': 205, + 'subtitles': { + 'en': [{ + 'ext': 'ttml', + }], + }, }, 'params': { # rtmp download @@ -85,10 +90,18 @@ class CBSNewsIE(InfoExtractor): fmt['ext'] = 'mp4' formats.append(fmt) + subtitles = {} + if 'mpxRefId' in video_info: + subtitles['en'] = [{ + 'ext': 'ttml', + 'url': 'http://www.cbsnews.com/videos/captions/%s.adb_xml' % video_info['mpxRefId'], + }] + return { 'id': video_id, 'title': title, 'thumbnail': thumbnail, 'duration': duration, 'formats': formats, + 'subtitles': subtitles, } diff --git a/youtube_dl/extractor/common.py b/youtube_dl/extractor/common.py index 0719c7bcd..2f574054d 100644 --- a/youtube_dl/extractor/common.py +++ b/youtube_dl/extractor/common.py @@ -34,6 +34,7 @@ from ..utils import ( fix_xml_ampersands, float_or_none, int_or_none, + parse_iso8601, RegexNotFoundError, sanitize_filename, sanitized_Request, @@ -313,9 +314,9 @@ class InfoExtractor(object): except 
ExtractorError: raise except compat_http_client.IncompleteRead as e: - raise ExtractorError('A network error has occured.', cause=e, expected=True) + raise ExtractorError('A network error has occurred.', cause=e, expected=True) except (KeyError, StopIteration) as e: - raise ExtractorError('An extractor error has occured.', cause=e) + raise ExtractorError('An extractor error has occurred.', cause=e) def set_downloader(self, downloader): """Sets the downloader for this IE.""" @@ -762,6 +763,42 @@ class InfoExtractor(object): return self._html_search_meta('twitter:player', html, 'twitter card player') + def _search_json_ld(self, html, video_id, **kwargs): + json_ld = self._search_regex( + r'(?s)]+type=(["\'])application/ld\+json\1[^>]*>(?P.+?)', + html, 'JSON-LD', group='json_ld', **kwargs) + if not json_ld: + return {} + return self._json_ld(json_ld, video_id, fatal=kwargs.get('fatal', True)) + + def _json_ld(self, json_ld, video_id, fatal=True): + if isinstance(json_ld, compat_str): + json_ld = self._parse_json(json_ld, video_id, fatal=fatal) + if not json_ld: + return {} + info = {} + if json_ld.get('@context') == 'http://schema.org': + item_type = json_ld.get('@type') + if item_type == 'TVEpisode': + info.update({ + 'episode': unescapeHTML(json_ld.get('name')), + 'episode_number': int_or_none(json_ld.get('episodeNumber')), + 'description': unescapeHTML(json_ld.get('description')), + }) + part_of_season = json_ld.get('partOfSeason') + if isinstance(part_of_season, dict) and part_of_season.get('@type') == 'TVSeason': + info['season_number'] = int_or_none(part_of_season.get('seasonNumber')) + part_of_series = json_ld.get('partOfSeries') + if isinstance(part_of_series, dict) and part_of_series.get('@type') == 'TVSeries': + info['series'] = unescapeHTML(part_of_series.get('name')) + elif item_type == 'Article': + info.update({ + 'timestamp': parse_iso8601(json_ld.get('datePublished')), + 'title': unescapeHTML(json_ld.get('headline')), + 'description': unescapeHTML(json_ld.get('articleBody')), + }) + return dict((k, v) for k, v in info.items() if v is not None) + @staticmethod def _hidden_inputs(html): html = re.sub(r'', '', html) @@ -1021,9 +1058,9 @@ class InfoExtractor(object): # TODO: looks like video codec is not always necessarily goes first va_codecs = codecs.split(',') if va_codecs[0]: - f['vcodec'] = va_codecs[0].partition('.')[0] + f['vcodec'] = va_codecs[0] if len(va_codecs) > 1 and va_codecs[1]: - f['acodec'] = va_codecs[1].partition('.')[0] + f['acodec'] = va_codecs[1] resolution = last_info.get('RESOLUTION') if resolution: width_str, height_str = resolution.split('x') diff --git a/youtube_dl/extractor/crunchyroll.py b/youtube_dl/extractor/crunchyroll.py index 00d943f77..785594df8 100644 --- a/youtube_dl/extractor/crunchyroll.py +++ b/youtube_dl/extractor/crunchyroll.py @@ -329,8 +329,10 @@ Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text streamdata_req, video_id, note='Downloading media info for %s' % video_format) stream_info = streamdata.find('./{default}preload/stream_info') - video_url = stream_info.find('./host').text - video_play_path = stream_info.find('./file').text + video_url = xpath_text(stream_info, './host') + video_play_path = xpath_text(stream_info, './file') + if not video_url or not video_play_path: + continue metadata = stream_info.find('./metadata') format_info = { 'format': video_format, diff --git a/youtube_dl/extractor/cultureunplugged.py b/youtube_dl/extractor/cultureunplugged.py new file mode 100644 index 000000000..9c764fe68 
--- /dev/null +++ b/youtube_dl/extractor/cultureunplugged.py @@ -0,0 +1,63 @@ +from __future__ import unicode_literals + +import re + +from .common import InfoExtractor +from ..utils import int_or_none + + +class CultureUnpluggedIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?cultureunplugged\.com/documentary/watch-online/play/(?P\d+)(?:/(?P[^/]+))?' + _TESTS = [{ + 'url': 'http://www.cultureunplugged.com/documentary/watch-online/play/53662/The-Next--Best-West', + 'md5': 'ac6c093b089f7d05e79934dcb3d228fc', + 'info_dict': { + 'id': '53662', + 'display_id': 'The-Next--Best-West', + 'ext': 'mp4', + 'title': 'The Next, Best West', + 'description': 'md5:0423cd00833dea1519cf014e9d0903b1', + 'thumbnail': 're:^https?://.*\.jpg$', + 'creator': 'Coldstream Creative', + 'duration': 2203, + 'view_count': int, + } + }, { + 'url': 'http://www.cultureunplugged.com/documentary/watch-online/play/53662', + 'only_matching': True, + }] + + def _real_extract(self, url): + mobj = re.match(self._VALID_URL, url) + video_id = mobj.group('id') + display_id = mobj.group('display_id') or video_id + + movie_data = self._download_json( + 'http://www.cultureunplugged.com/movie-data/cu-%s.json' % video_id, display_id) + + video_url = movie_data['url'] + title = movie_data['title'] + + description = movie_data.get('synopsis') + creator = movie_data.get('producer') + duration = int_or_none(movie_data.get('duration')) + view_count = int_or_none(movie_data.get('views')) + + thumbnails = [{ + 'url': movie_data['%s_thumb' % size], + 'id': size, + 'preference': preference, + } for preference, size in enumerate(( + 'small', 'large')) if movie_data.get('%s_thumb' % size)] + + return { + 'id': video_id, + 'display_id': display_id, + 'url': video_url, + 'title': title, + 'description': description, + 'creator': creator, + 'duration': duration, + 'view_count': view_count, + 'thumbnails': thumbnails, + } diff --git a/youtube_dl/extractor/cwtv.py b/youtube_dl/extractor/cwtv.py new file mode 100644 index 000000000..36af67013 --- /dev/null +++ b/youtube_dl/extractor/cwtv.py @@ -0,0 +1,88 @@ +# coding: utf-8 +from __future__ import unicode_literals + +from .common import InfoExtractor +from ..utils import ( + int_or_none, + parse_iso8601, +) + + +class CWTVIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?cw(?:tv|seed)\.com/shows/(?:[^/]+/){2}\?play=(?P[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12})' + _TESTS = [{ + 'url': 'http://cwtv.com/shows/arrow/legends-of-yesterday/?play=6b15e985-9345-4f60-baf8-56e96be57c63', + 'info_dict': { + 'id': '6b15e985-9345-4f60-baf8-56e96be57c63', + 'ext': 'mp4', + 'title': 'Legends of Yesterday', + 'description': 'Oliver and Barry Allen take Kendra Saunders and Carter Hall to a remote location to keep them hidden from Vandal Savage while they figure out how to defeat him.', + 'duration': 2665, + 'series': 'Arrow', + 'season_number': 4, + 'season': '4', + 'episode_number': 8, + 'upload_date': '20151203', + 'timestamp': 1449122100, + }, + 'params': { + # m3u8 download + 'skip_download': True, + } + }, { + 'url': 'http://www.cwseed.com/shows/whose-line-is-it-anyway/jeff-davis-4/?play=24282b12-ead2-42f2-95ad-26770c2c6088', + 'info_dict': { + 'id': '24282b12-ead2-42f2-95ad-26770c2c6088', + 'ext': 'mp4', + 'title': 'Jeff Davis 4', + 'description': 'Jeff Davis is back to make you laugh.', + 'duration': 1263, + 'series': 'Whose Line Is It Anyway?', + 'season_number': 11, + 'season': '11', + 'episode_number': 20, + 'upload_date': '20151006', + 'timestamp': 1444107300, + }, + 'params': { 
+ # m3u8 download + 'skip_download': True, + } + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + video_data = self._download_json( + 'http://metaframe.digitalsmiths.tv/v2/CWtv/assets/%s/partner/132?format=json' % video_id, video_id) + + formats = self._extract_m3u8_formats( + video_data['videos']['variantplaylist']['uri'], video_id, 'mp4') + + thumbnails = [{ + 'url': image['uri'], + 'width': image.get('width'), + 'height': image.get('height'), + } for image_id, image in video_data['images'].items() if image.get('uri')] if video_data.get('images') else None + + video_metadata = video_data['assetFields'] + + subtitles = { + 'en': [{ + 'url': video_metadata['UnicornCcUrl'], + }], + } if video_metadata.get('UnicornCcUrl') else None + + return { + 'id': video_id, + 'title': video_metadata['title'], + 'description': video_metadata.get('description'), + 'duration': int_or_none(video_metadata.get('duration')), + 'series': video_metadata.get('seriesName'), + 'season_number': int_or_none(video_metadata.get('seasonNumber')), + 'season': video_metadata.get('seasonName'), + 'episode_number': int_or_none(video_metadata.get('episodeNumber')), + 'timestamp': parse_iso8601(video_data.get('startTime')), + 'thumbnails': thumbnails, + 'formats': formats, + 'subtitles': subtitles, + } diff --git a/youtube_dl/extractor/dailymotion.py b/youtube_dl/extractor/dailymotion.py index 439fd42e8..6e462af69 100644 --- a/youtube_dl/extractor/dailymotion.py +++ b/youtube_dl/extractor/dailymotion.py @@ -37,7 +37,7 @@ class DailymotionBaseInfoExtractor(InfoExtractor): class DailymotionIE(DailymotionBaseInfoExtractor): - _VALID_URL = r'(?i)(?:https?://)?(?:(www|touch)\.)?dailymotion\.[a-z]{2,3}/(?:(embed|#)/)?video/(?P[^/?_]+)' + _VALID_URL = r'(?i)(?:https?://)?(?:(www|touch)\.)?dailymotion\.[a-z]{2,3}/(?:(?:embed|swf|#)/)?video/(?P[^/?_]+)' IE_NAME = 'dailymotion' _FORMATS = [ @@ -104,6 +104,10 @@ class DailymotionIE(DailymotionBaseInfoExtractor): { 'url': 'http://www.dailymotion.com/video/x20su5f_the-power-of-nightmares-1-the-rise-of-the-politics-of-fear-bbc-2004_news', 'only_matching': True, + }, + { + 'url': 'http://www.dailymotion.com/swf/video/x3n92nf', + 'only_matching': True, } ] @@ -149,14 +153,15 @@ class DailymotionIE(DailymotionBaseInfoExtractor): ext = determine_ext(media_url) if type_ == 'application/x-mpegURL' or ext == 'm3u8': formats.extend(self._extract_m3u8_formats( - media_url, video_id, 'mp4', m3u8_id='hls', fatal=False)) + media_url, video_id, 'mp4', preference=-1, + m3u8_id='hls', fatal=False)) elif type_ == 'application/f4m' or ext == 'f4m': formats.extend(self._extract_f4m_formats( media_url, video_id, preference=-1, f4m_id='hds', fatal=False)) else: f = { 'url': media_url, - 'format_id': quality, + 'format_id': 'http-%s' % quality, } m = re.search(r'H264-(?P\d+)x(?P\d+)', media_url) if m: @@ -335,7 +340,7 @@ class DailymotionPlaylistIE(DailymotionBaseInfoExtractor): class DailymotionUserIE(DailymotionPlaylistIE): IE_NAME = 'dailymotion:user' - _VALID_URL = r'https?://(?:www\.)?dailymotion\.[a-z]{2,3}/(?!(?:embed|#|video|playlist)/)(?:(?:old/)?user/)?(?P[^/]+)' + _VALID_URL = r'https?://(?:www\.)?dailymotion\.[a-z]{2,3}/(?!(?:embed|swf|#|video|playlist)/)(?:(?:old/)?user/)?(?P[^/]+)' _PAGE_TEMPLATE = 'http://www.dailymotion.com/user/%s/%s' _TESTS = [{ 'url': 'https://www.dailymotion.com/user/nqtv', diff --git a/youtube_dl/extractor/dcn.py b/youtube_dl/extractor/dcn.py index 8f48571de..15a1c40f7 100644 --- a/youtube_dl/extractor/dcn.py +++ b/youtube_dl/extractor/dcn.py @@ -5,7 
+5,10 @@ import re import base64 from .common import InfoExtractor -from ..compat import compat_urllib_parse +from ..compat import ( + compat_urllib_parse, + compat_str, +) from ..utils import ( int_or_none, parse_iso8601, @@ -186,7 +189,8 @@ class DCNSeasonIE(InfoExtractor): entries = [] for video in show['videos']: + video_id = compat_str(video['id']) entries.append(self.url_result( - 'http://www.dcndigital.ae/media/%s' % video['id'], 'DCNVideo')) + 'http://www.dcndigital.ae/media/%s' % video_id, 'DCNVideo', video_id)) return self.playlist_result(entries, season_id, title) diff --git a/youtube_dl/extractor/digiteka.py b/youtube_dl/extractor/digiteka.py new file mode 100644 index 000000000..7bb79ffda --- /dev/null +++ b/youtube_dl/extractor/digiteka.py @@ -0,0 +1,112 @@ +# coding: utf-8 +from __future__ import unicode_literals + +import re + +from .common import InfoExtractor +from ..utils import int_or_none + + +class DigitekaIE(InfoExtractor): + _VALID_URL = r'''(?x) + https?://(?:www\.)?(?:digiteka\.net|ultimedia\.com)/ + (?: + deliver/ + (?P + generic| + musique + ) + (?:/[^/]+)*/ + (?: + src| + article + )| + default/index/video + (?P + generic| + music + ) + /id + )/(?P[\d+a-z]+)''' + _TESTS = [{ + # news + 'url': 'https://www.ultimedia.com/default/index/videogeneric/id/s8uk0r', + 'md5': '276a0e49de58c7e85d32b057837952a2', + 'info_dict': { + 'id': 's8uk0r', + 'ext': 'mp4', + 'title': 'Loi sur la fin de vie: le texte prévoit un renforcement des directives anticipées', + 'thumbnail': 're:^https?://.*\.jpg', + 'duration': 74, + 'upload_date': '20150317', + 'timestamp': 1426604939, + 'uploader_id': '3fszv', + }, + }, { + # music + 'url': 'https://www.ultimedia.com/default/index/videomusic/id/xvpfp8', + 'md5': '2ea3513813cf230605c7e2ffe7eca61c', + 'info_dict': { + 'id': 'xvpfp8', + 'ext': 'mp4', + 'title': 'Two - C\'est La Vie (clip)', + 'thumbnail': 're:^https?://.*\.jpg', + 'duration': 233, + 'upload_date': '20150224', + 'timestamp': 1424760500, + 'uploader_id': '3rfzk', + }, + }, { + 'url': 'https://www.digiteka.net/deliver/generic/iframe/mdtk/01637594/src/lqm3kl/zone/1/showtitle/1/autoplay/yes', + 'only_matching': True, + }] + + @staticmethod + def _extract_url(webpage): + mobj = re.search( + r'<(?:iframe|script)[^>]+src=["\'](?P(?:https?:)?//(?:www\.)?ultimedia\.com/deliver/(?:generic|musique)(?:/[^/]+)*/(?:src|article)/[\d+a-z]+)', + webpage) + if mobj: + return mobj.group('url') + + def _real_extract(self, url): + mobj = re.match(self._VALID_URL, url) + video_id = mobj.group('id') + video_type = mobj.group('embed_type') or mobj.group('site_type') + if video_type == 'music': + video_type = 'musique' + + deliver_info = self._download_json( + 'http://www.ultimedia.com/deliver/video?video=%s&topic=%s' % (video_id, video_type), + video_id) + + yt_id = deliver_info.get('yt_id') + if yt_id: + return self.url_result(yt_id, 'Youtube') + + jwconf = deliver_info['jwconf'] + + formats = [] + for source in jwconf['playlist'][0]['sources']: + formats.append({ + 'url': source['file'], + 'format_id': source.get('label'), + }) + + self._sort_formats(formats) + + title = deliver_info['title'] + thumbnail = jwconf.get('image') + duration = int_or_none(deliver_info.get('duration')) + timestamp = int_or_none(deliver_info.get('release_time')) + uploader_id = deliver_info.get('owner_id') + + return { + 'id': video_id, + 'title': title, + 'thumbnail': thumbnail, + 'duration': duration, + 'timestamp': timestamp, + 'uploader_id': uploader_id, + 'formats': formats, + } diff --git 
a/youtube_dl/extractor/dramafever.py b/youtube_dl/extractor/dramafever.py index b3b21d65f..d35e88881 100644 --- a/youtube_dl/extractor/dramafever.py +++ b/youtube_dl/extractor/dramafever.py @@ -12,6 +12,7 @@ from ..compat import ( from ..utils import ( ExtractorError, clean_html, + int_or_none, sanitized_Request, ) @@ -66,13 +67,15 @@ class DramaFeverBaseIE(AMPIE): class DramaFeverIE(DramaFeverBaseIE): IE_NAME = 'dramafever' _VALID_URL = r'https?://(?:www\.)?dramafever\.com/drama/(?P[0-9]+/[0-9]+)(?:/|$)' - _TEST = { + _TESTS = [{ 'url': 'http://www.dramafever.com/drama/4512/1/Cooking_with_Shin/', 'info_dict': { 'id': '4512.1', - 'ext': 'flv', + 'ext': 'mp4', 'title': 'Cooking with Shin 4512.1', 'description': 'md5:a8eec7942e1664a6896fcd5e1287bfd0', + 'episode': 'Episode 1', + 'episode_number': 1, 'thumbnail': 're:^https?://.*\.jpg', 'timestamp': 1404336058, 'upload_date': '20140702', @@ -82,7 +85,25 @@ class DramaFeverIE(DramaFeverBaseIE): # m3u8 download 'skip_download': True, }, - } + }, { + 'url': 'http://www.dramafever.com/drama/4826/4/Mnet_Asian_Music_Awards_2015/?ap=1', + 'info_dict': { + 'id': '4826.4', + 'ext': 'mp4', + 'title': 'Mnet Asian Music Awards 2015 4826.4', + 'description': 'md5:3ff2ee8fedaef86e076791c909cf2e91', + 'episode': 'Mnet Asian Music Awards 2015 - Part 3', + 'episode_number': 4, + 'thumbnail': 're:^https?://.*\.jpg', + 'timestamp': 1450213200, + 'upload_date': '20151215', + 'duration': 5602, + }, + 'params': { + # m3u8 download + 'skip_download': True, + }, + }] def _real_extract(self, url): video_id = self._match_id(url).replace('/', '.') @@ -105,13 +126,22 @@ class DramaFeverIE(DramaFeverBaseIE): video_id, 'Downloading episode info JSON', fatal=False) if episode_info: value = episode_info.get('value') - if value: - subfile = value[0].get('subfile') or value[0].get('new_subfile') - if subfile and subfile != 'http://www.dramafever.com/st/': - info.setdefault('subtitles', {}).setdefault('English', []).append({ - 'ext': 'srt', - 'url': subfile, - }) + if isinstance(value, list): + for v in value: + if v.get('type') == 'Episode': + subfile = v.get('subfile') or v.get('new_subfile') + if subfile and subfile != 'http://www.dramafever.com/st/': + info.setdefault('subtitles', {}).setdefault('English', []).append({ + 'ext': 'srt', + 'url': subfile, + }) + episode_number = int_or_none(v.get('number')) + episode_fallback = 'Episode' + if episode_number: + episode_fallback += ' %d' % episode_number + info['episode'] = v.get('title') or episode_fallback + info['episode_number'] = episode_number + break return info diff --git a/youtube_dl/extractor/drtv.py b/youtube_dl/extractor/drtv.py index baa24c6d1..2d74ff855 100644 --- a/youtube_dl/extractor/drtv.py +++ b/youtube_dl/extractor/drtv.py @@ -91,7 +91,7 @@ class DRTVIE(InfoExtractor): subtitles_list = asset.get('SubtitlesList') if isinstance(subtitles_list, list): LANGS = { - 'Danish': 'dk', + 'Danish': 'da', } for subs in subtitles_list: lang = subs['Language'] diff --git a/youtube_dl/extractor/facebook.py b/youtube_dl/extractor/facebook.py index 5e43f2359..ec699ba54 100644 --- a/youtube_dl/extractor/facebook.py +++ b/youtube_dl/extractor/facebook.py @@ -105,7 +105,7 @@ class FacebookIE(InfoExtractor): login_results, 'login error', default=None, group='error') if error: raise ExtractorError('Unable to login: %s' % error, expected=True) - self._downloader.report_warning('unable to log in: bad username/password, or exceded login rate limit (~3/min). 
Check credentials or wait.') + self._downloader.report_warning('unable to log in: bad username/password, or exceeded login rate limit (~3/min). Check credentials or wait.') return fb_dtsg = self._search_regex( @@ -126,7 +126,7 @@ class FacebookIE(InfoExtractor): check_response = self._download_webpage(check_req, None, note='Confirming login') if re.search(r'id="checkpointSubmitButton"', check_response) is not None: - self._downloader.report_warning('Unable to confirm login, you have to login in your brower and authorize the login.') + self._downloader.report_warning('Unable to confirm login, you have to login in your browser and authorize the login.') except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err: self._downloader.report_warning('unable to log in: %s' % error_to_compat_str(err)) return diff --git a/youtube_dl/extractor/generic.py b/youtube_dl/extractor/generic.py index d79e1adc9..26d3698c8 100644 --- a/youtube_dl/extractor/generic.py +++ b/youtube_dl/extractor/generic.py @@ -57,7 +57,7 @@ from .pladform import PladformIE from .videomore import VideomoreIE from .googledrive import GoogleDriveIE from .jwplatform import JWPlatformIE -from .ultimedia import UltimediaIE +from .digiteka import DigitekaIE class GenericIE(InfoExtractor): @@ -487,7 +487,7 @@ class GenericIE(InfoExtractor): 'description': 'md5:8145d19d320ff3e52f28401f4c4283b9', } }, - # Embeded Ustream video + # Embedded Ustream video { 'url': 'http://www.american.edu/spa/pti/nsa-privacy-janus-2014.cfm', 'md5': '27b99cdb639c9b12a79bca876a073417', @@ -1402,7 +1402,7 @@ class GenericIE(InfoExtractor): # Look for embedded Dailymotion player matches = re.findall( - r']+?src=(["\'])(?P(?:https?:)?//(?:www\.)?dailymotion\.com/embed/video/.+?)\1', webpage) + r'<(?:embed|iframe)[^>]+?src=(["\'])(?P(?:https?:)?//(?:www\.)?dailymotion\.com/(?:embed|swf)/video/.+?)\1', webpage) if matches: return _playlist_from_matches( matches, lambda m: unescapeHTML(m[1])) @@ -1644,7 +1644,7 @@ class GenericIE(InfoExtractor): if myvi_url: return self.url_result(myvi_url) - # Look for embeded soundcloud player + # Look for embedded soundcloud player mobj = re.search( r'https?://(?:w\.)?soundcloud\.com/player[^"]+)"', webpage) @@ -1814,10 +1814,10 @@ class GenericIE(InfoExtractor): if mobj is not None: return self.url_result(unescapeHTML(mobj.group('url')), 'ScreenwaveMedia') - # Look for Ulltimedia embeds - ultimedia_url = UltimediaIE._extract_url(webpage) - if ultimedia_url: - return self.url_result(self._proto_relative_url(ultimedia_url), 'Ultimedia') + # Look for Digiteka embeds + digiteka_url = DigitekaIE._extract_url(webpage) + if digiteka_url: + return self.url_result(self._proto_relative_url(digiteka_url), DigitekaIE.ie_key()) # Look for AdobeTVVideo embeds mobj = re.search( diff --git a/youtube_dl/extractor/history.py b/youtube_dl/extractor/history.py deleted file mode 100644 index f86164afe..000000000 --- a/youtube_dl/extractor/history.py +++ /dev/null @@ -1,31 +0,0 @@ -from __future__ import unicode_literals - -from .common import InfoExtractor -from ..utils import smuggle_url - - -class HistoryIE(InfoExtractor): - _VALID_URL = r'https?://(?:www\.)?history\.com/(?:[^/]+/)+(?P[^/]+?)(?:$|[?#])' - - _TESTS = [{ - 'url': 'http://www.history.com/topics/valentines-day/history-of-valentines-day/videos/bet-you-didnt-know-valentines-day?m=528e394da93ae&s=undefined&f=1&free=false', - 'md5': '6fe632d033c92aa10b8d4a9be047a7c5', - 'info_dict': { - 'id': 'bLx5Dv5Aka1G', - 'ext': 'mp4', - 'title': "Bet You Didn't 
Know: Valentine's Day", - 'description': 'md5:7b57ea4829b391995b405fa60bd7b5f7', - }, - 'add_ie': ['ThePlatform'], - }] - - def _real_extract(self, url): - video_id = self._match_id(url) - - webpage = self._download_webpage(url, video_id) - - video_url = self._search_regex( - r'data-href="[^"]*/%s"[^>]+data-release-url="([^"]+)"' % video_id, - webpage, 'video url') - - return self.url_result(smuggle_url(video_url, {'sig': {'key': 'crazyjava', 'secret': 's3cr3t'}})) diff --git a/youtube_dl/extractor/hitbox.py b/youtube_dl/extractor/hitbox.py index 421f55bbe..ff797438d 100644 --- a/youtube_dl/extractor/hitbox.py +++ b/youtube_dl/extractor/hitbox.py @@ -159,6 +159,9 @@ class HitboxLiveIE(HitboxIE): cdns = player_config.get('cdns') servers = [] for cdn in cdns: + # Subscribe URLs are not playable + if cdn.get('rtmpSubscribe') is True: + continue base_url = cdn.get('netConnectionUrl') host = re.search('.+\.([^\.]+\.[^\./]+)/.+', base_url).group(1) if base_url not in servers: diff --git a/youtube_dl/extractor/iprima.py b/youtube_dl/extractor/iprima.py index 36baf3245..073777f34 100644 --- a/youtube_dl/extractor/iprima.py +++ b/youtube_dl/extractor/iprima.py @@ -14,6 +14,7 @@ from ..utils import ( class IPrimaIE(InfoExtractor): + _WORKING = False _VALID_URL = r'https?://play\.iprima\.cz/(?:[^/]+/)*(?P[^?#]+)' _TESTS = [{ diff --git a/youtube_dl/extractor/iqiyi.py b/youtube_dl/extractor/iqiyi.py index 66a70a181..691cb66d6 100644 --- a/youtube_dl/extractor/iqiyi.py +++ b/youtube_dl/extractor/iqiyi.py @@ -214,8 +214,8 @@ class IqiyiIE(InfoExtractor): def get_enc_key(self, swf_url, video_id): # TODO: automatic key extraction - # last update at 2015-12-18 for Zombie::bite - enc_key = '8b6b683780897eb8d9a48a02ccc4817d'[::-1] + # last update at 2016-01-22 for Zombie::bite + enc_key = '6ab6d0280511493ba85594779759d4ed' return enc_key def _real_extract(self, url): diff --git a/youtube_dl/extractor/ivi.py b/youtube_dl/extractor/ivi.py index 029878d24..472d72b4c 100644 --- a/youtube_dl/extractor/ivi.py +++ b/youtube_dl/extractor/ivi.py @@ -7,6 +7,7 @@ import json from .common import InfoExtractor from ..utils import ( ExtractorError, + int_or_none, sanitized_Request, ) @@ -27,44 +28,36 @@ class IviIE(InfoExtractor): 'title': 'Иван Васильевич меняет профессию', 'description': 'md5:b924063ea1677c8fe343d8a72ac2195f', 'duration': 5498, - 'thumbnail': 'http://thumbs.ivi.ru/f20.vcp.digitalaccess.ru/contents/d/1/c3c885163a082c29bceeb7b5a267a6.jpg', + 'thumbnail': 're:^https?://.*\.jpg$', }, 'skip': 'Only works from Russia', }, - # Serial's serie + # Serial's series { 'url': 'http://www.ivi.ru/watch/dvoe_iz_lartsa/9549', 'md5': '221f56b35e3ed815fde2df71032f4b3e', 'info_dict': { 'id': '9549', 'ext': 'mp4', - 'title': 'Двое из ларца - Серия 1', + 'title': 'Двое из ларца - Дело Гольдберга (1 часть)', + 'series': 'Двое из ларца', + 'season': 'Сезон 1', + 'season_number': 1, + 'episode': 'Дело Гольдберга (1 часть)', + 'episode_number': 1, 'duration': 2655, - 'thumbnail': 'http://thumbs.ivi.ru/f15.vcp.digitalaccess.ru/contents/8/4/0068dc0677041f3336b7c2baad8fc0.jpg', + 'thumbnail': 're:^https?://.*\.jpg$', }, 'skip': 'Only works from Russia', } ] # Sorted by quality - _known_formats = ['MP4-low-mobile', 'MP4-mobile', 'FLV-lo', 'MP4-lo', 'FLV-hi', 'MP4-hi', 'MP4-SHQ'] - - # Sorted by size - _known_thumbnails = ['Thumb-120x90', 'Thumb-160', 'Thumb-640x480'] - - def _extract_description(self, html): - m = re.search(r'', html) - return m.group('description') if m is not None else None - - def _extract_comment_count(self, 
html): - m = re.search('(?s)\s*Комментарии:\s*(?P\d+)\s*', html) - return int(m.group('commentcount')) if m is not None else 0 + _KNOWN_FORMATS = ['MP4-low-mobile', 'MP4-mobile', 'FLV-lo', 'MP4-lo', 'FLV-hi', 'MP4-hi', 'MP4-SHQ'] def _real_extract(self, url): video_id = self._match_id(url) - api_url = 'http://api.digitalaccess.ru/api/json/' - data = { 'method': 'da.content.get', 'params': [ @@ -76,11 +69,10 @@ class IviIE(InfoExtractor): ] } - request = sanitized_Request(api_url, json.dumps(data)) - - video_json_page = self._download_webpage( + request = sanitized_Request( + 'http://api.digitalaccess.ru/api/json/', json.dumps(data)) + video_json = self._download_json( request, video_id, 'Downloading video JSON') - video_json = json.loads(video_json_page) if 'error' in video_json: error = video_json['error'] @@ -95,35 +87,51 @@ class IviIE(InfoExtractor): formats = [{ 'url': x['url'], 'format_id': x['content_format'], - 'preference': self._known_formats.index(x['content_format']), - } for x in result['files'] if x['content_format'] in self._known_formats] + 'preference': self._KNOWN_FORMATS.index(x['content_format']), + } for x in result['files'] if x['content_format'] in self._KNOWN_FORMATS] self._sort_formats(formats) - if not formats: - raise ExtractorError('No media links available for %s' % video_id) - - duration = result['duration'] - compilation = result['compilation'] title = result['title'] + duration = int_or_none(result.get('duration')) + compilation = result.get('compilation') + episode = title if compilation else None + title = '%s - %s' % (compilation, title) if compilation is not None else title - previews = result['preview'] - previews.sort(key=lambda fmt: self._known_thumbnails.index(fmt['content_format'])) - thumbnail = previews[-1]['url'] if len(previews) > 0 else None + thumbnails = [{ + 'url': preview['url'], + 'id': preview.get('content_format'), + } for preview in result.get('preview', []) if preview.get('url')] + + webpage = self._download_webpage(url, video_id) + + season = self._search_regex( + r']+class="season active"[^>]*>]+>([^<]+)', + webpage, 'season', default=None) + season_number = int_or_none(self._search_regex( + r']+class="season active"[^>]*>]+data-season(?:-index)?="(\d+)"', + webpage, 'season number', default=None)) + + episode_number = int_or_none(self._search_regex( + r']+itemprop="episode"[^>]*>\s*]+itemprop="episodeNumber"[^>]+content="(\d+)', + webpage, 'episode number', default=None)) - video_page = self._download_webpage(url, video_id, 'Downloading video page') - description = self._extract_description(video_page) - comment_count = self._extract_comment_count(video_page) + description = self._og_search_description(webpage, default=None) or self._html_search_meta( + 'description', webpage, 'description', default=None) return { 'id': video_id, 'title': title, - 'thumbnail': thumbnail, + 'series': compilation, + 'season': season, + 'season_number': season_number, + 'episode': episode, + 'episode_number': episode_number, + 'thumbnails': thumbnails, 'description': description, 'duration': duration, - 'comment_count': comment_count, 'formats': formats, } @@ -149,8 +157,11 @@ class IviCompilationIE(InfoExtractor): }] def _extract_entries(self, html, compilation_id): - return [self.url_result('http://www.ivi.ru/watch/%s/%s' % (compilation_id, serie), 'Ivi') - for serie in re.findall(r'(?:[^<]+)' % compilation_id, html)] + return [ + self.url_result( + 'http://www.ivi.ru/watch/%s/%s' % (compilation_id, serie), IviIE.ie_key()) + for serie in re.findall( 
+ r']+data-id="\1"' % compilation_id, html)] def _real_extract(self, url): mobj = re.match(self._VALID_URL, url) @@ -158,7 +169,8 @@ class IviCompilationIE(InfoExtractor): season_id = mobj.group('seasonid') if season_id is not None: # Season link - season_page = self._download_webpage(url, compilation_id, 'Downloading season %s web page' % season_id) + season_page = self._download_webpage( + url, compilation_id, 'Downloading season %s web page' % season_id) playlist_id = '%s/season%s' % (compilation_id, season_id) playlist_title = self._html_search_meta('title', season_page, 'title') entries = self._extract_entries(season_page, compilation_id) @@ -166,8 +178,9 @@ class IviCompilationIE(InfoExtractor): compilation_page = self._download_webpage(url, compilation_id, 'Downloading compilation web page') playlist_id = compilation_id playlist_title = self._html_search_meta('title', compilation_page, 'title') - seasons = re.findall(r'[^<]+' % compilation_id, compilation_page) - if len(seasons) == 0: # No seasons in this compilation + seasons = re.findall( + r']*>([^<]+)', webpage, 'camera name', default=None) + + quality = qualities(self._QUALITIES) + + formats = [{ + 'url': 'https://streaming.ivideon.com/flv/live?%s' % compat_urllib_parse.urlencode({ + 'server': server_id, + 'camera': camera_id, + 'sessionId': 'demo', + 'q': quality(format_id), + }), + 'format_id': format_id, + 'ext': 'flv', + 'quality': quality(format_id), + } for format_id in self._QUALITIES] + self._sort_formats(formats) + + return { + 'id': server_id, + 'title': self._live_title(camera_name or server_id), + 'description': description, + 'is_live': True, + 'formats': formats, + } diff --git a/youtube_dl/extractor/kanalplay.py b/youtube_dl/extractor/kanalplay.py index 4597d1b96..6c3498c67 100644 --- a/youtube_dl/extractor/kanalplay.py +++ b/youtube_dl/extractor/kanalplay.py @@ -49,7 +49,7 @@ class KanalPlayIE(InfoExtractor): subs = self._download_json( 'http://www.kanal%splay.se/api/subtitles/%s' % (channel_id, video_id), video_id, 'Downloading subtitles JSON', fatal=False) - return {'se': [{'ext': 'srt', 'data': self._fix_subtitles(subs)}]} if subs else {} + return {'sv': [{'ext': 'srt', 'data': self._fix_subtitles(subs)}]} if subs else {} def _real_extract(self, url): mobj = re.match(self._VALID_URL, url) diff --git a/youtube_dl/extractor/lemonde.py b/youtube_dl/extractor/lemonde.py new file mode 100644 index 000000000..be66fff03 --- /dev/null +++ b/youtube_dl/extractor/lemonde.py @@ -0,0 +1,34 @@ +from __future__ import unicode_literals + +from .common import InfoExtractor + + +class LemondeIE(InfoExtractor): + _VALID_URL = r'https?://(?:.+?\.)?lemonde\.fr/(?:[^/]+/)*(?P[^/]+)\.html' + _TESTS = [{ + 'url': 'http://www.lemonde.fr/police-justice/video/2016/01/19/comprendre-l-affaire-bygmalion-en-cinq-minutes_4849702_1653578.html', + 'md5': '01fb3c92de4c12c573343d63e163d302', + 'info_dict': { + 'id': 'lqm3kl', + 'ext': 'mp4', + 'title': "Comprendre l'affaire Bygmalion en 5 minutes", + 'thumbnail': 're:^https?://.*\.jpg', + 'duration': 320, + 'upload_date': '20160119', + 'timestamp': 1453194778, + 'uploader_id': '3pmkp', + }, + }, { + 'url': 'http://redaction.actu.lemonde.fr/societe/video/2016/01/18/calais-debut-des-travaux-de-defrichement-dans-la-jungle_4849233_3224.html', + 'only_matching': True, + }] + + def _real_extract(self, url): + display_id = self._match_id(url) + + webpage = self._download_webpage(url, display_id) + + digiteka_url = self._proto_relative_url(self._search_regex( + 
r'url\s*:\s*(["\'])(?P(?:https?://)?//(?:www\.)?(?:digiteka\.net|ultimedia\.com)/deliver/.+?)\1', + webpage, 'digiteka url', group='url')) + return self.url_result(digiteka_url, 'Digiteka') diff --git a/youtube_dl/extractor/letv.py b/youtube_dl/extractor/letv.py index be648000e..08bdae8a2 100644 --- a/youtube_dl/extractor/letv.py +++ b/youtube_dl/extractor/letv.py @@ -4,6 +4,7 @@ from __future__ import unicode_literals import datetime import re import time +import base64 from .common import InfoExtractor from ..compat import ( @@ -16,7 +17,9 @@ from ..utils import ( parse_iso8601, sanitized_Request, int_or_none, + str_or_none, encode_data_uri, + url_basename, ) @@ -239,3 +242,80 @@ class LetvPlaylistIE(LetvTvIE): }, 'playlist_mincount': 7 }] + + +class LetvCloudIE(InfoExtractor): + IE_DESC = '乐视云' + _VALID_URL = r'https?://yuntv\.letv\.com/bcloud.html\?.+' + + _TESTS = [{ + 'url': 'http://yuntv.letv.com/bcloud.html?uu=p7jnfw5hw9&vu=467623dedf', + 'md5': '26450599afd64c513bc77030ad15db44', + 'info_dict': { + 'id': 'p7jnfw5hw9_467623dedf', + 'ext': 'mp4', + 'title': 'Video p7jnfw5hw9_467623dedf', + }, + }, { + 'url': 'http://yuntv.letv.com/bcloud.html?uu=p7jnfw5hw9&vu=ec93197892&pu=2c7cd40209&auto_play=1&gpcflag=1&width=640&height=360', + 'info_dict': { + 'id': 'p7jnfw5hw9_ec93197892', + 'ext': 'mp4', + 'title': 'Video p7jnfw5hw9_ec93197892', + }, + }, { + 'url': 'http://yuntv.letv.com/bcloud.html?uu=p7jnfw5hw9&vu=187060b6fd', + 'info_dict': { + 'id': 'p7jnfw5hw9_187060b6fd', + 'ext': 'mp4', + 'title': 'Video p7jnfw5hw9_187060b6fd', + }, + }] + + def _real_extract(self, url): + uu_mobj = re.search('uu=([\w]+)', url) + vu_mobj = re.search('vu=([\w]+)', url) + + if not uu_mobj or not vu_mobj: + raise ExtractorError('Invalid URL: %s' % url, expected=True) + + uu = uu_mobj.group(1) + vu = vu_mobj.group(1) + media_id = uu + '_' + vu + + play_json_req = sanitized_Request( + 'http://api.letvcloud.com/gpc.php?cf=html5&sign=signxxxxx&ver=2.2&format=json&' + + 'uu=' + uu + '&vu=' + vu) + play_json = self._download_json(play_json_req, media_id, 'Downloading playJson data') + + if not play_json.get('data'): + if play_json.get('message'): + raise ExtractorError('Letv cloud said: %s' % play_json['message'], expected=True) + elif play_json.get('code'): + raise ExtractorError('Letv cloud returned error %d' % play_json['code'], expected=True) + else: + raise ExtractorError('Letv cloud returned an unknwon error') + + def b64decode(s): + return base64.b64decode(s.encode('utf-8')).decode('utf-8') + + formats = [] + for media in play_json['data']['video_info']['media'].values(): + play_url = media['play_url'] + url = b64decode(play_url['main_url']) + decoded_url = b64decode(url_basename(url)) + formats.append({ + 'url': url, + 'ext': determine_ext(decoded_url), + 'format_id': int_or_none(play_url.get('vtype')), + 'format_note': str_or_none(play_url.get('definition')), + 'width': int_or_none(play_url.get('vwidth')), + 'height': int_or_none(play_url.get('vheight')), + }) + self._sort_formats(formats) + + return { + 'id': media_id, + 'title': 'Video %s' % media_id, + 'formats': formats, + } diff --git a/youtube_dl/extractor/lovehomeporn.py b/youtube_dl/extractor/lovehomeporn.py new file mode 100644 index 000000000..8f65a3c03 --- /dev/null +++ b/youtube_dl/extractor/lovehomeporn.py @@ -0,0 +1,37 @@ +from __future__ import unicode_literals + +import re + +from .nuevo import NuevoBaseIE + + +class LoveHomePornIE(NuevoBaseIE): + _VALID_URL = r'https?://(?:www\.)?lovehomeporn\.com/video/(?P\d+)(?:/(?P[^/?#&]+))?' 
+ _TEST = { + 'url': 'http://lovehomeporn.com/video/48483/stunning-busty-brunette-girlfriend-sucking-and-riding-a-big-dick#menu', + 'info_dict': { + 'id': '48483', + 'display_id': 'stunning-busty-brunette-girlfriend-sucking-and-riding-a-big-dick', + 'ext': 'mp4', + 'title': 'Stunning busty brunette girlfriend sucking and riding a big dick', + 'age_limit': 18, + 'duration': 238.47, + }, + 'params': { + 'skip_download': True, + } + } + + def _real_extract(self, url): + mobj = re.match(self._VALID_URL, url) + video_id = mobj.group('id') + display_id = mobj.group('display_id') + + info = self._extract_nuevo( + 'http://lovehomeporn.com/media/nuevo/config.php?key=%s' % video_id, + video_id) + info.update({ + 'display_id': display_id, + 'age_limit': 18 + }) + return info diff --git a/youtube_dl/extractor/mdr.py b/youtube_dl/extractor/mdr.py index 88334889e..425fc9e2a 100644 --- a/youtube_dl/extractor/mdr.py +++ b/youtube_dl/extractor/mdr.py @@ -17,7 +17,7 @@ class MDRIE(InfoExtractor): _VALID_URL = r'https?://(?:www\.)?(?:mdr|kika)\.de/(?:.*)/[a-z]+(?P\d+)(?:_.+?)?\.html' _TESTS = [{ - # MDR regularily deletes its videos + # MDR regularly deletes its videos 'url': 'http://www.mdr.de/fakt/video189002.html', 'only_matching': True, }, { diff --git a/youtube_dl/extractor/nbc.py b/youtube_dl/extractor/nbc.py index 340c922bd..1dd54c2f1 100644 --- a/youtube_dl/extractor/nbc.py +++ b/youtube_dl/extractor/nbc.py @@ -100,7 +100,7 @@ class NBCSportsVPlayerIE(InfoExtractor): class NBCSportsIE(InfoExtractor): - # Does not include https becuase its certificate is invalid + # Does not include https because its certificate is invalid _VALID_URL = r'http://www\.nbcsports\.com//?(?:[^/]+/)+(?P[0-9a-z-]+)' _TEST = { diff --git a/youtube_dl/extractor/neteasemusic.py b/youtube_dl/extractor/neteasemusic.py index 15eca825a..7830616f8 100644 --- a/youtube_dl/extractor/neteasemusic.py +++ b/youtube_dl/extractor/neteasemusic.py @@ -12,7 +12,10 @@ from ..compat import ( compat_str, compat_itertools_count, ) -from ..utils import sanitized_Request +from ..utils import ( + sanitized_Request, + float_or_none, +) class NetEaseMusicBaseIE(InfoExtractor): @@ -32,23 +35,32 @@ class NetEaseMusicBaseIE(InfoExtractor): result = b64encode(m.digest()).decode('ascii') return result.replace('/', '_').replace('+', '-') - @classmethod - def extract_formats(cls, info): + def extract_formats(self, info): formats = [] - for song_format in cls._FORMATS: + for song_format in self._FORMATS: details = info.get(song_format) if not details: continue - formats.append({ - 'url': 'http://m5.music.126.net/%s/%s.%s' % - (cls._encrypt(details['dfsId']), details['dfsId'], - details['extension']), - 'ext': details.get('extension'), - 'abr': details.get('bitrate', 0) / 1000, - 'format_id': song_format, - 'filesize': details.get('size'), - 'asr': details.get('sr') - }) + song_file_path = '/%s/%s.%s' % ( + self._encrypt(details['dfsId']), details['dfsId'], details['extension']) + + # 203.130.59.9, 124.40.233.182, 115.231.74.139, etc is a reverse proxy-like feature + # from NetEase's CDN provider that can be used if m5.music.126.net does not + # work, especially for users outside of Mainland China + # via: https://github.com/JixunMoe/unblock-163/issues/3#issuecomment-163115880 + for host in ('http://m5.music.126.net', 'http://115.231.74.139/m1.music.126.net', + 'http://124.40.233.182/m1.music.126.net', 'http://203.130.59.9/m1.music.126.net'): + song_url = host + song_file_path + if self._is_valid_url(song_url, info['id'], 'song'): + formats.append({ + 'url': 
song_url, + 'ext': details.get('extension'), + 'abr': float_or_none(details.get('bitrate'), scale=1000), + 'format_id': song_format, + 'filesize': details.get('size'), + 'asr': details.get('sr') + }) + break return formats @classmethod diff --git a/youtube_dl/extractor/nhl.py b/youtube_dl/extractor/nhl.py index e98a5ef89..8d5ce46ad 100644 --- a/youtube_dl/extractor/nhl.py +++ b/youtube_dl/extractor/nhl.py @@ -223,7 +223,7 @@ class NHLVideocenterIE(NHLBaseInfoExtractor): response = self._download_webpage(request_url, playlist_title) response = self._fix_json(response) if not response.strip(): - self._downloader.report_warning('Got an empty reponse, trying ' + self._downloader.report_warning('Got an empty response, trying ' 'adding the "newvideos" parameter') response = self._download_webpage(request_url + '&newvideos=true', playlist_title) diff --git a/youtube_dl/extractor/nowtv.py b/youtube_dl/extractor/nowtv.py index fd107aca2..916a102bf 100644 --- a/youtube_dl/extractor/nowtv.py +++ b/youtube_dl/extractor/nowtv.py @@ -71,6 +71,7 @@ class NowTVBaseIE(InfoExtractor): class NowTVIE(NowTVBaseIE): + _WORKING = False _VALID_URL = r'https?://(?:www\.)?nowtv\.(?:de|at|ch)/(?:rtl|rtl2|rtlnitro|superrtl|ntv|vox)/(?P[^/]+)/(?:(?:list/[^/]+|jahr/\d{4}/\d{1,2})/)?(?P[^/]+)/(?:player|preview)' _TESTS = [{ diff --git a/youtube_dl/extractor/npr.py b/youtube_dl/extractor/npr.py new file mode 100644 index 000000000..125c7010b --- /dev/null +++ b/youtube_dl/extractor/npr.py @@ -0,0 +1,82 @@ +from __future__ import unicode_literals + +from .common import InfoExtractor +from ..compat import compat_urllib_parse +from ..utils import ( + int_or_none, + qualities, +) + + +class NprIE(InfoExtractor): + _VALID_URL = r'http://(?:www\.)?npr\.org/player/v2/mediaPlayer\.html\?.*\bid=(?P\d+)' + _TESTS = [{ + 'url': 'http://www.npr.org/player/v2/mediaPlayer.html?id=449974205', + 'info_dict': { + 'id': '449974205', + 'title': 'New Music From Beach House, Chairlift, CMJ Discoveries And More' + }, + 'playlist_count': 7, + }, { + 'url': 'http://www.npr.org/player/v2/mediaPlayer.html?action=1&t=1&islist=false&id=446928052&m=446929930&live=1', + 'info_dict': { + 'id': '446928052', + 'title': "Songs We Love: Tigran Hamasyan, 'Your Mercy is Boundless'" + }, + 'playlist': [{ + 'md5': '12fa60cb2d3ed932f53609d4aeceabf1', + 'info_dict': { + 'id': '446929930', + 'ext': 'mp3', + 'title': 'Your Mercy is Boundless (Bazum en Qo gtutyunqd)', + 'duration': 402, + }, + }], + }] + + def _real_extract(self, url): + playlist_id = self._match_id(url) + + config = self._download_json( + 'http://api.npr.org/query?%s' % compat_urllib_parse.urlencode({ + 'id': playlist_id, + 'fields': 'titles,audio,show', + 'format': 'json', + 'apiKey': 'MDAzMzQ2MjAyMDEyMzk4MTU1MDg3ZmM3MQ010', + }), playlist_id) + + story = config['list']['story'][0] + + KNOWN_FORMATS = ('threegp', 'mp4', 'mp3') + quality = qualities(KNOWN_FORMATS) + + entries = [] + for audio in story.get('audio', []): + title = audio.get('title', {}).get('$text') + duration = int_or_none(audio.get('duration', {}).get('$text')) + formats = [] + for format_id, formats_entry in audio.get('format', {}).items(): + if not formats_entry: + continue + if isinstance(formats_entry, list): + formats_entry = formats_entry[0] + format_url = formats_entry.get('$text') + if not format_url: + continue + if format_id in KNOWN_FORMATS: + formats.append({ + 'url': format_url, + 'format_id': format_id, + 'ext': formats_entry.get('type'), + 'quality': quality(format_id), + }) + self._sort_formats(formats) + 
entries.append({ + 'id': audio['id'], + 'title': title, + 'duration': duration, + 'formats': formats, + }) + + playlist_title = story.get('title', {}).get('$text') + return self.playlist_result(entries, playlist_id, playlist_title) diff --git a/youtube_dl/extractor/ntvde.py b/youtube_dl/extractor/ntvde.py index d2cfe0961..a83e85cb8 100644 --- a/youtube_dl/extractor/ntvde.py +++ b/youtube_dl/extractor/ntvde.py @@ -2,6 +2,7 @@ from __future__ import unicode_literals from .common import InfoExtractor +from ..compat import compat_urlparse from ..utils import ( int_or_none, js_to_json, @@ -34,7 +35,7 @@ class NTVDeIE(InfoExtractor): webpage = self._download_webpage(url, video_id) info = self._parse_json(self._search_regex( - r'(?s)ntv.pageInfo.article =\s(\{.*?\});', webpage, 'info'), + r'(?s)ntv\.pageInfo\.article\s*=\s*(\{.*?\});', webpage, 'info'), video_id, transform_source=js_to_json) timestamp = int_or_none(info.get('publishedDateAsUnixTimeStamp')) vdata = self._parse_json(self._search_regex( @@ -42,18 +43,24 @@ class NTVDeIE(InfoExtractor): webpage, 'player data'), video_id, transform_source=js_to_json) duration = parse_duration(vdata.get('duration')) - formats = [{ - 'format_id': 'flash', - 'url': 'rtmp://fms.n-tv.de/' + vdata['video'], - }, { - 'format_id': 'mobile', - 'url': 'http://video.n-tv.de' + vdata['videoMp4'], - 'tbr': 400, # estimation - }] - m3u8_url = 'http://video.n-tv.de' + vdata['videoM3u8'] - formats.extend(self._extract_m3u8_formats( - m3u8_url, video_id, ext='mp4', - entry_protocol='m3u8_native', preference=0)) + + formats = [] + if vdata.get('video'): + formats.append({ + 'format_id': 'flash', + 'url': 'rtmp://fms.n-tv.de/%s' % vdata['video'], + }) + if vdata.get('videoMp4'): + formats.append({ + 'format_id': 'mobile', + 'url': compat_urlparse.urljoin('http://video.n-tv.de', vdata['videoMp4']), + 'tbr': 400, # estimation + }) + if vdata.get('videoM3u8'): + m3u8_url = compat_urlparse.urljoin('http://video.n-tv.de', vdata['videoM3u8']) + formats.extend(self._extract_m3u8_formats( + m3u8_url, video_id, ext='mp4', entry_protocol='m3u8_native', + preference=0, m3u8_id='hls', fatal=False)) self._sort_formats(formats) return { diff --git a/youtube_dl/extractor/nuevo.py b/youtube_dl/extractor/nuevo.py new file mode 100644 index 000000000..ef093dec2 --- /dev/null +++ b/youtube_dl/extractor/nuevo.py @@ -0,0 +1,38 @@ +# encoding: utf-8 +from __future__ import unicode_literals + +from .common import InfoExtractor + +from ..utils import ( + float_or_none, + xpath_text +) + + +class NuevoBaseIE(InfoExtractor): + def _extract_nuevo(self, config_url, video_id): + config = self._download_xml( + config_url, video_id, transform_source=lambda s: s.strip()) + + title = xpath_text(config, './title', 'title', fatal=True).strip() + video_id = xpath_text(config, './mediaid', default=video_id) + thumbnail = xpath_text(config, ['./image', './thumb']) + duration = float_or_none(xpath_text(config, './duration')) + + formats = [] + for element_name, format_id in (('file', 'sd'), ('filehd', 'hd')): + video_url = xpath_text(config, element_name) + if video_url: + formats.append({ + 'url': video_url, + 'format_id': format_id, + }) + self._check_formats(formats, video_id) + + return { + 'id': video_id, + 'title': title, + 'thumbnail': thumbnail, + 'duration': duration, + 'formats': formats + } diff --git a/youtube_dl/extractor/ora.py b/youtube_dl/extractor/ora.py index 9c4255a2d..8545fb1b8 100644 --- a/youtube_dl/extractor/ora.py +++ b/youtube_dl/extractor/ora.py @@ -21,7 +21,6 @@ class 
OraTVIE(InfoExtractor): 'ext': 'mp4', 'title': 'Vine & YouTube Stars Zach King & King Bach On Their Viral Videos!', 'description': 'md5:ebbc5b1424dd5dba7be7538148287ac1', - 'duration': 1477, } } @@ -30,14 +29,14 @@ class OraTVIE(InfoExtractor): webpage = self._download_webpage(url, display_id) video_data = self._search_regex( - r'"current"\s*:\s*({[^}]+?})', webpage, 'current video') + r'"(?:video|current)"\s*:\s*({[^}]+?})', webpage, 'current video') m3u8_url = self._search_regex( - r'"hls_stream"\s*:\s*"([^"]+)', video_data, 'm3u8 url', None) + r'hls_stream"?\s*:\s*"([^"]+)', video_data, 'm3u8 url', None) if m3u8_url: formats = self._extract_m3u8_formats( m3u8_url, display_id, 'mp4', 'm3u8_native', m3u8_id='hls', fatal=False) - # simular to GameSpotIE + # similar to GameSpotIE m3u8_path = compat_urlparse.urlparse(m3u8_url).path QUALITIES_RE = r'((,[a-z]+\d+)+,?)' available_qualities = self._search_regex( @@ -62,14 +61,12 @@ class OraTVIE(InfoExtractor): return { 'id': self._search_regex( - r'"video_id"\s*:\s*(\d+)', video_data, 'video id'), + r'"id"\s*:\s*(\d+)', video_data, 'video id', default=display_id), 'display_id': display_id, 'title': unescapeHTML(self._og_search_title(webpage)), 'description': get_element_by_attribute( 'class', 'video_txt_decription', webpage), 'thumbnail': self._proto_relative_url(self._search_regex( r'"thumb"\s*:\s*"([^"]+)', video_data, 'thumbnail', None)), - 'duration': int(self._search_regex( - r'"duration"\s*:\s*(\d+)', video_data, 'duration')), 'formats': formats, } diff --git a/youtube_dl/extractor/orf.py b/youtube_dl/extractor/orf.py index 2e6c9872b..c54775d54 100644 --- a/youtube_dl/extractor/orf.py +++ b/youtube_dl/extractor/orf.py @@ -170,7 +170,21 @@ class ORFOE1IE(InfoExtractor): class ORFFM4IE(InfoExtractor): IE_NAME = 'orf:fm4' IE_DESC = 'radio FM4' - _VALID_URL = r'http://fm4\.orf\.at/7tage/?#(?P[0-9]+)/(?P\w+)' + _VALID_URL = r'http://fm4\.orf\.at/(?:7tage/?#|player/)(?P[0-9]+)/(?P\w+)' + + _TEST = { + 'url': 'http://fm4.orf.at/player/20160110/IS/', + 'md5': '01e736e8f1cef7e13246e880a59ad298', + 'info_dict': { + 'id': '2016-01-10_2100_tl_54_7DaysSun13_11244', + 'ext': 'mp3', + 'title': 'Im Sumpf', + 'description': 'md5:384c543f866c4e422a55f66a62d669cd', + 'duration': 7173, + 'timestamp': 1452456073, + 'upload_date': '20160110', + }, + } def _real_extract(self, url): mobj = re.match(self._VALID_URL, url) diff --git a/youtube_dl/extractor/pluralsight.py b/youtube_dl/extractor/pluralsight.py index 55c11b3bf..12e1c2862 100644 --- a/youtube_dl/extractor/pluralsight.py +++ b/youtube_dl/extractor/pluralsight.py @@ -232,7 +232,7 @@ class PluralsightIE(PluralsightBaseIE): # { a = author, cn = clip_id, lc = end, m = name } return { - 'id': clip['clipName'], + 'id': clip.get('clipName') or clip['name'], 'title': '%s - %s' % (module['title'], clip['title']), 'duration': int_or_none(clip.get('duration')) or parse_duration(clip.get('formattedDuration')), 'creator': author, diff --git a/youtube_dl/extractor/prosiebensat1.py b/youtube_dl/extractor/prosiebensat1.py index baa54a3af..670e6950f 100644 --- a/youtube_dl/extractor/prosiebensat1.py +++ b/youtube_dl/extractor/prosiebensat1.py @@ -20,7 +20,7 @@ from ..utils import ( class ProSiebenSat1IE(InfoExtractor): IE_NAME = 'prosiebensat1' IE_DESC = 'ProSiebenSat.1 Digital' - _VALID_URL = r'https?://(?:www\.)?(?:(?:prosieben|prosiebenmaxx|sixx|sat1|kabeleins|the-voice-of-germany)\.(?:de|at|ch)|ran\.de|fem\.com)/(?P.+)' + _VALID_URL = 
r'https?://(?:www\.)?(?:(?:prosieben|prosiebenmaxx|sixx|sat1|kabeleins|the-voice-of-germany|7tv)\.(?:de|at|ch)|ran\.de|fem\.com)/(?P.+)' _TESTS = [ { @@ -32,7 +32,7 @@ class ProSiebenSat1IE(InfoExtractor): 'url': 'http://www.prosieben.de/tv/circus-halligalli/videos/218-staffel-2-episode-18-jahresrueckblick-ganze-folge', 'info_dict': { 'id': '2104602', - 'ext': 'mp4', + 'ext': 'flv', 'title': 'Episode 18 - Staffel 2', 'description': 'md5:8733c81b702ea472e069bc48bb658fc1', 'upload_date': '20131231', @@ -138,14 +138,13 @@ class ProSiebenSat1IE(InfoExtractor): 'url': 'http://www.the-voice-of-germany.de/video/31-andreas-kuemmert-rocket-man-clip', 'info_dict': { 'id': '2572814', - 'ext': 'mp4', + 'ext': 'flv', 'title': 'Andreas Kümmert: Rocket Man', 'description': 'md5:6ddb02b0781c6adf778afea606652e38', 'upload_date': '20131017', 'duration': 469.88, }, 'params': { - # rtmp download 'skip_download': True, }, }, @@ -153,13 +152,12 @@ class ProSiebenSat1IE(InfoExtractor): 'url': 'http://www.fem.com/wellness/videos/wellness-video-clip-kurztripps-zum-valentinstag.html', 'info_dict': { 'id': '2156342', - 'ext': 'mp4', + 'ext': 'flv', 'title': 'Kurztrips zum Valentinstag', - 'description': 'Romantischer Kurztrip zum Valentinstag? Wir verraten, was sich hier wirklich lohnt.', + 'description': 'Romantischer Kurztrip zum Valentinstag? Nina Heinemann verrät, was sich hier wirklich lohnt.', 'duration': 307.24, }, 'params': { - # rtmp download 'skip_download': True, }, }, @@ -172,12 +170,26 @@ class ProSiebenSat1IE(InfoExtractor): }, 'playlist_count': 2, }, + { + 'url': 'http://www.7tv.de/circus-halligalli/615-best-of-circus-halligalli-ganze-folge', + 'info_dict': { + 'id': '4187506', + 'ext': 'flv', + 'title': 'Best of Circus HalliGalli', + 'description': 'md5:8849752efd90b9772c9db6fdf87fb9e9', + 'upload_date': '20151229', + }, + 'params': { + 'skip_download': True, + }, + }, ] _CLIPID_REGEXES = [ r'"clip_id"\s*:\s+"(\d+)"', r'clipid: "(\d+)"', r'clip[iI]d=(\d+)', + r'clip[iI]d\s*=\s*["\'](\d+)', r"'itemImageUrl'\s*:\s*'/dynamic/thumbnails/full/\d+/(\d+)", ] _TITLE_REGEXES = [ @@ -186,12 +198,16 @@ class ProSiebenSat1IE(InfoExtractor): r'\s*
<h1>(.+?)</h1>',
        r'<h1 class="att-name">\s*(.+?)</h1>',
        r'<header class="module_header">\s*<h2>([^<]+)</h2>\s*</header>',
+        r'<h2 class="video-title" itemprop="name">\s*(.+?)</h2>',
+        r'<div[^>]+id="veeseoTitle"[^>]*>(.+?)</div>',
    ]
    _DESCRIPTION_REGEXES = [
        r'<p itemprop="description">\s*(.+?)</p>',
        r'<div class="videoDecription">\s*<p><strong>Beschreibung</strong>: (.+?)</p>',
        r'<div class="g-plusone" data-size="medium"></div>\s*</div>\s*</header>\s*(.+?)\s*