From: Philipp Hagemeister Date: Sat, 13 Dec 2014 11:59:12 +0000 (+0100) Subject: Merge pull request #3927 from qrtt1/master X-Git-Url: http://git.bitcoin.ninja/index.cgi?p=youtube-dl;a=commitdiff_plain;h=2f15832f569834a37ac3ee6140a3b997407c04cd;hp=b1c3a49fffb7109125a2ad215f412f1198e3dffd Merge pull request #3927 from qrtt1/master apply ratelimit to f4m --- diff --git a/.gitignore b/.gitignore index e44977ca3..86312d4e4 100644 --- a/.gitignore +++ b/.gitignore @@ -30,3 +30,4 @@ updates_key.pem *.swp test/testdata .tox +youtube-dl.zsh diff --git a/AUTHORS b/AUTHORS new file mode 100644 index 000000000..bfa00f91b --- /dev/null +++ b/AUTHORS @@ -0,0 +1,94 @@ +Ricardo Garcia Gonzalez +Danny Colligan +Benjamin Johnson +Vasyl' Vavrychuk +Witold Baryluk +Paweł Paprota +Gergely Imreh +Rogério Brito +Philipp Hagemeister +Sören Schulze +Kevin Ngo +Ori Avtalion +shizeeg +Filippo Valsorda +Christian Albrecht +Dave Vasilevsky +Jaime Marquínez Ferrándiz +Jeff Crouse +Osama Khalid +Michael Walter +M. Yasoob Ullah Khalid +Julien Fraichard +Johny Mo Swag +Axel Noack +Albert Kim +Pierre Rudloff +Huarong Huo +Ismael Mejía +Steffan 'Ruirize' James +Andras Elso +Jelle van der Waa +Marcin Cieślak +Anton Larionov +Takuya Tsuchida +Sergey M. +Michael Orlitzky +Chris Gahan +Saimadhav Heblikar +Mike Col +Oleg Prutz +pulpe +Andreas Schmitz +Michael Kaiser +Niklas Laxström +David Triendl +Anthony Weems +David Wagner +Juan C. Olivares +Mattias Harrysson +phaer +Sainyam Kapoor +Nicolas Évrard +Jason Normore +Hoje Lee +Adam Thalhammer +Georg Jähnig +Ralf Haring +Koki Takahashi +Ariset Llerena +Adam Malcontenti-Wilson +Tobias Bell +Naglis Jonaitis +Charles Chen +Hassaan Ali +Dobrosław Żybort +David Fabijan +Sebastian Haas +Alexander Kirk +Erik Johnson +Keith Beckman +Ole Ernst +Aaron McDaniel (mcd1992) +Magnus Kolstad +Hari Padmanaban +Carlos Ramos +5moufl +lenaten +Dennis Scheiba +Damon Timm +winwon +Xavier Beynon +Gabriel Schubiner +xantares +Jan Matějka +Mauroy Sébastien +William Sewell +Dao Hoang Son +Oskar Jauch +Matthew Rayfield +t0mm0 +Tithen-Firion +Zack Fernandes +cryptonaut +Adrian Kretz diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 100644 index 000000000..0ff7b395a --- /dev/null +++ b/CONTRIBUTING.md @@ -0,0 +1,136 @@ +Please include the full output of the command when run with `--verbose`. The output (including the first lines) contains important debugging information. Issues without the full output are often not reproducible and therefore do not get solved in short order, if ever. + +Please re-read your issue once again to avoid a couple of common mistakes (you can and should use this as a checklist): + +### Is the description of the issue itself sufficient? + +We often get issue reports that we cannot really decipher. While in most cases we eventually get the required information after asking back multiple times, this is an unnecessary drain on our resources. Many contributors, including myself, are also not native speakers, so we may misread some parts. + +So please elaborate on what feature you are requesting, or what bug you want to be fixed. Make sure that it's obvious + +- What the problem is +- How it could be fixed +- What your proposed solution would look like + +If your report is shorter than two lines, it is almost certainly missing some of these, which makes it hard for us to respond to it. We're often too polite to close the issue outright, but the missing info makes misinterpretation likely.
As a committer myself, I often get frustrated by these issues, since the only possible way for me to move forward on them is to ask for clarification over and over. + +For bug reports, this means that your report should contain the *complete* output of youtube-dl when called with the `-v` flag. The error message you get for (most) bugs even says so, but you would not believe how many of our bug reports do not contain this information. + +Site support requests **must contain an example URL**. An example URL is a URL you might want to download, like http://www.youtube.com/watch?v=BaW_jenozKc . There should be an obvious video present. Except under very special circumstances, the main page of a video service (e.g. http://www.youtube.com/ ) is *not* an example URL. + +### Are you using the latest version? + +Before reporting any issue, type `youtube-dl -U`. This should report that you're up-to-date. About 20% of the reports we receive concern issues that are already fixed, but people are using outdated versions. This goes for feature requests as well. + +### Is the issue already documented? + +Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or at https://github.com/rg3/youtube-dl/search?type=Issues . If there is an issue, feel free to write something along the lines of "This affects me as well, with version 2015.01.01. Here is some more information on the issue: ...". While some issues may be old, a new post into them often spurs rapid activity. + +### Why are existing options not enough? + +Before requesting a new feature, please have a quick peek at [the list of supported options](https://github.com/rg3/youtube-dl/blob/master/README.md#synopsis). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do *not* solve your problem. + +### Is there enough context in your bug report? + +People want to solve problems, and often think they do us a favor by breaking down their larger problems (e.g. wanting to skip already downloaded files) to a specific request (e.g. requesting us to look whether the file exists before downloading the info page). However, what often happens is that they break down the problem into two steps: one simple, and one impossible (or extremely complicated). + +We are then presented with a very complicated request when the original problem could be solved far more easily, e.g. by recording the downloaded video IDs in a separate file. To avoid this, you must include the greater context where it is non-obvious. In particular, every feature request that does not consist of adding support for a new site should contain a use case scenario that explains in what situation the missing feature would be useful. + +### Does the issue involve one problem, and one problem only? + +Some of our users seem to think there is a limit on how many issues they can or should open. There is no such limit. While it may seem appealing to be able to dump all your issues into one ticket, that means that someone who solves one of your issues cannot mark the issue as closed. Typically, reporting a bunch of issues leads to the ticket lingering since nobody wants to attack that behemoth, until someone mercifully splits the issue into multiple ones. + +In particular, every site support request issue should only pertain to services at one site (generally under a common domain, but always using the same backend technology).
Do not request support for vimeo user videos, Whitehouse podcasts, and Google Plus pages in the same issue. Also, make sure that you don't post bug reports alongside feature requests. As a rule of thumb, a feature request does not include outputs of youtube-dl that are not immediately related to the feature at hand. Do not post reports of a network error alongside the request for a new video service. + +### Is anyone going to need the feature? + +Only post features that you (or an incapacitated friend you can personally talk to) require. Do not post features because they seem like a good idea. If they are really useful, they will be requested by someone who requires them. + +### Is your question about youtube-dl? + +It may sound strange, but some bug reports we receive are completely unrelated to youtube-dl and relate to a different or even the reporter's own application. Please make sure that you are actually using youtube-dl. If you are using a UI for youtube-dl, report the bug to the maintainer of the actual application providing the UI. On the other hand, if your UI for youtube-dl fails in some way you believe is related to youtube-dl, by all means, go ahead and report the bug. + +# DEVELOPER INSTRUCTIONS + +Most users do not need to build youtube-dl and can [download the builds](http://rg3.github.io/youtube-dl/download.html) or get them from their distribution. + +To run youtube-dl as a developer, you don't need to build anything either. Simply execute + + python -m youtube_dl + +To run the tests, simply invoke your favorite test runner, or execute a test file directly; any of the following work: + + python -m unittest discover + python test/test_download.py + nosetests + +If you want to create a build of youtube-dl yourself, you'll need + +* python +* make +* pandoc +* zip +* nosetests + +### Adding support for a new site + +If you want to add support for a new site, you can follow this quick list (assuming your service is called `yourextractor`): + +1. [Fork this repository](https://github.com/rg3/youtube-dl/fork) +2. Check out the source code with `git clone git@github.com:YOUR_GITHUB_USERNAME/youtube-dl.git` +3. Start a new git branch with `cd youtube-dl; git checkout -b yourextractor` +4. Start with this simple template and save it to `youtube_dl/extractor/yourextractor.py`: + ```python + # coding: utf-8 + from __future__ import unicode_literals + + from .common import InfoExtractor + + + class YourExtractorIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?yourextractor\.com/watch/(?P<id>[0-9]+)' + _TEST = { + 'url': 'http://yourextractor.com/watch/42', + 'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)', + 'info_dict': { + 'id': '42', + 'ext': 'mp4', + 'title': 'Video title goes here', + 'thumbnail': 're:^https?://.*\.jpg$', + # TODO more properties, either as: + # * A value + # * MD5 checksum; start the string with md5: + # * A regular expression; start the string with re: + # * Any Python type (for example int or float) + } + } + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + + # TODO more code goes here, for example ... + title = self._html_search_regex(r'<h1>(.*?)</h1>', webpage, 'title') + + return { + 'id': video_id, + 'title': title, + 'description': self._og_search_description(webpage), + # TODO more properties (see youtube_dl/extractor/common.py) + } + ``` +5. Add an import in [`youtube_dl/extractor/__init__.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/__init__.py). +6. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, then rename ``_TEST`` to ``_TESTS`` and make it into a list of dictionaries (a sketch follows these instructions). The tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc. +7. Have a look at [`youtube_dl/extractor/common.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should return](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L38). Add tests and code for as many as you want. +8. If you can, check the code with [pyflakes](https://pypi.python.org/pypi/pyflakes) (a good idea) and [pep8](https://pypi.python.org/pypi/pep8) (optional, ignore E501). +9. When the tests pass, [add](http://git-scm.com/docs/git-add) the new files and [commit](http://git-scm.com/docs/git-commit) them and [push](http://git-scm.com/docs/git-push) the result, like this: + + $ git add youtube_dl/extractor/__init__.py + $ git add youtube_dl/extractor/yourextractor.py + $ git commit -m '[yourextractor] Add new extractor' + $ git push origin yourextractor + +10. Finally, [create a pull request](https://help.github.com/articles/creating-a-pull-request). We'll then review and merge it. + +In any case, thank you very much for your contributions!
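To illustrate step 6, here is a minimal sketch of what a `_TESTS` list might look like once a second test case is added. The second URL and its metadata are invented for illustration; each dictionary follows the same schema as a single `_TEST`:

```python
# A hypothetical _TESTS list (the /watch/43 entry is illustrative only):
_TESTS = [{
    'url': 'http://yourextractor.com/watch/42',
    'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)',
    'info_dict': {
        'id': '42',
        'ext': 'mp4',
        'title': 'Video title goes here',
    },
}, {
    # Second test case; runs as TestDownload.test_YourExtractor_1.
    'url': 'http://yourextractor.com/watch/43',
    'info_dict': {
        'id': '43',
        'ext': 'mp4',
        'title': 'Another video title',
    },
}]
```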
+ diff --git a/Makefile b/Makefile index 6272b826c..d5b6e4307 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ -all: youtube-dl README.md README.txt youtube-dl.1 youtube-dl.bash-completion youtube-dl.fish +all: youtube-dl README.md CONTRIBUTING.md README.txt youtube-dl.1 youtube-dl.bash-completion youtube-dl.zsh youtube-dl.fish clean: - rm -rf youtube-dl.1.temp.md youtube-dl.1 youtube-dl.bash-completion README.txt MANIFEST build/ dist/ .coverage cover/ youtube-dl.tar.gz youtube-dl.fish + rm -rf youtube-dl.1.temp.md youtube-dl.1 youtube-dl.bash-completion README.txt MANIFEST build/ dist/ .coverage cover/ youtube-dl.tar.gz youtube-dl.zsh youtube-dl.fish *.dump *.part *.info.json CONTRIBUTING.md.tmp cleanall: clean rm -f youtube-dl youtube-dl.exe @@ -9,6 +9,7 @@ cleanall: clean PREFIX ?= /usr/local BINDIR ?= $(PREFIX)/bin MANDIR ?= $(PREFIX)/man +SHAREDIR ?= $(PREFIX)/share PYTHON ?= /usr/bin/env python # set SYSCONFDIR to /etc if PREFIX=/usr or PREFIX=/usr/local @@ -22,13 +23,15 @@ else endif endif -install: youtube-dl youtube-dl.1 youtube-dl.bash-completion +install: youtube-dl youtube-dl.1 youtube-dl.bash-completion youtube-dl.zsh youtube-dl.fish install -d $(DESTDIR)$(BINDIR) install -m 755 youtube-dl $(DESTDIR)$(BINDIR) install -d $(DESTDIR)$(MANDIR)/man1 install -m 644 youtube-dl.1 $(DESTDIR)$(MANDIR)/man1 install -d $(DESTDIR)$(SYSCONFDIR)/bash_completion.d install -m 644 youtube-dl.bash-completion $(DESTDIR)$(SYSCONFDIR)/bash_completion.d/youtube-dl + install -d $(DESTDIR)$(SHAREDIR)/zsh/site-functions + install -m 644 youtube-dl.zsh $(DESTDIR)$(SHAREDIR)/zsh/site-functions/_youtube-dl install -d $(DESTDIR)$(SYSCONFDIR)/fish/completions install -m 644 youtube-dl.fish $(DESTDIR)$(SYSCONFDIR)/fish/completions/youtube-dl.fish @@ -38,7 +41,7 @@ test: tar: youtube-dl.tar.gz -.PHONY: all clean install test tar bash-completion pypi-files fish-completion +.PHONY: all clean install test tar bash-completion pypi-files zsh-completion fish-completion pypi-files: youtube-dl.bash-completion README.txt youtube-dl.1 youtube-dl.fish @@ -53,6 +56,9 @@ youtube-dl: youtube_dl/*.py youtube_dl/*/*.py README.md: youtube_dl/*.py youtube_dl/*/*.py COLUMNS=80 python -m youtube_dl --help | python devscripts/make_readme.py +CONTRIBUTING.md: README.md + python devscripts/make_contributing.py README.md CONTRIBUTING.md + README.txt: README.md pandoc -f markdown -t plain README.md -o README.txt @@ -66,12 +72,17 @@ youtube-dl.bash-completion: youtube_dl/*.py youtube_dl/*/*.py devscripts/bash-co bash-completion: youtube-dl.bash-completion +youtube-dl.zsh: youtube_dl/*.py youtube_dl/*/*.py devscripts/zsh-completion.in + python devscripts/zsh-completion.py + +zsh-completion: youtube-dl.zsh + youtube-dl.fish: youtube_dl/*.py youtube_dl/*/*.py devscripts/fish-completion.in python devscripts/fish-completion.py fish-completion: youtube-dl.fish -youtube-dl.tar.gz: youtube-dl README.md README.txt youtube-dl.1 youtube-dl.bash-completion youtube-dl.fish +youtube-dl.tar.gz: youtube-dl README.md README.txt youtube-dl.1 youtube-dl.bash-completion youtube-dl.zsh youtube-dl.fish @tar -czf youtube-dl.tar.gz --transform "s|^|youtube-dl/|" --owner 0 --group 0 \ --exclude '*.DS_Store' \ --exclude '*.kate-swp' \ @@ -86,5 +97,5 @@ youtube-dl.tar.gz: youtube-dl README.md README.txt youtube-dl.1 youtube-dl.bash- bin devscripts test youtube_dl docs \ LICENSE README.md README.txt \ Makefile MANIFEST.in youtube-dl.1 youtube-dl.bash-completion \ - youtube-dl.fish setup.py \ + youtube-dl.zsh youtube-dl.fish setup.py \ youtube-dl diff --git 
a/README.md b/README.md index cabc5eb9a..f10f06ee8 100644 --- a/README.md +++ b/README.md @@ -30,7 +30,7 @@ Alternatively, refer to the developer instructions below for how to check out an # DESCRIPTION **youtube-dl** is a small command-line program to download videos from YouTube.com and a few more sites. It requires the Python interpreter, version -2.6, 2.7, or 3.3+, and it is not platform specific. It should work on +2.6, 2.7, or 3.2+, and it is not platform specific. It should work on your Unix box, on Windows or on Mac OS X. It is released to the public domain, which means you can modify it, redistribute it or use it however you like. @@ -65,10 +65,12 @@ which means you can modify it, redistribute it or use it however you like. this is not possible instead of searching. --ignore-config Do not read configuration files. When given in the global configuration file /etc - /youtube-dl.conf: do not read the user - configuration in ~/.config/youtube-dl.conf - (%APPDATA%/youtube-dl/config.txt on - Windows) + /youtube-dl.conf: Do not read the user + configuration in ~/.config/youtube- + dl/config (%APPDATA%/youtube-dl/config.txt + on Windows) + --flat-playlist Do not extract the videos of a playlist, + only list them. ## Video Selection: --playlist-start NUMBER playlist video to start at (default is 1) @@ -91,7 +93,8 @@ which means you can modify it, redistribute it or use it however you like. COUNT views --max-views COUNT Do not download any videos with more than COUNT views - --no-playlist download only the currently playing video + --no-playlist If the URL refers to a video and a + playlist, download only the video. --age-limit YEARS download only videos suitable for the given age --download-archive FILE Download only videos not listed in the @@ -99,8 +102,6 @@ which means you can modify it, redistribute it or use it however you like. downloaded videos in it. --include-ads Download advertisements as well (experimental) - --youtube-include-dash-manifest Try to download the DASH manifest on - YouTube videos (experimental) ## Download Options: -r, --rate-limit LIMIT maximum download rate in bytes per second @@ -112,12 +113,12 @@ which means you can modify it, redistribute it or use it however you like. size. By default, the buffer size is automatically resized from an initial value of SIZE. + --playlist-reverse Download playlist videos in reverse order ## Filesystem Options: -a, --batch-file FILE file containing URLs to download ('-' for stdin) --id use only video ID in file name - -A, --auto-number number downloaded files starting from 00000 -o, --output TEMPLATE output filename template. Use %(title)s to get the title, %(uploader)s for the uploader name, %(uploader_id)s for the @@ -131,17 +132,19 @@ which means you can modify it, redistribute it or use it however you like. %(upload_date)s for the upload date (YYYYMMDD), %(extractor)s for the provider (youtube, metacafe, etc), %(id)s for the - video id, %(playlist)s for the playlist the + video id, %(playlist_title)s, + %(playlist_id)s, or %(playlist)s (=title if + present, ID otherwise) for the playlist the video is in, %(playlist_index)s for the - position in the playlist and %% for a - literal percent. %(height)s and %(width)s - for the width and height of the video - format. %(resolution)s for a textual + position in the playlist. %(height)s and + %(width)s for the width and height of the + video format. %(resolution)s for a textual description of the resolution of the video - format. Use - to output to stdout. 
Can also - be used to download to a different - directory, for example with -o '/my/downloa - ds/%(uploader)s/%(title)s-%(id)s.%(ext)s' . + format. %% for a literal percent. Use - to + output to stdout. Can also be used to + download to a different directory, for + example with -o '/my/downloads/%(uploader)s + /%(title)s-%(id)s.%(ext)s' . --autonumber-size NUMBER Specifies the number of digits in %(autonumber)s when it is present in output filename template or --auto-number option @@ -149,6 +152,9 @@ which means you can modify it, redistribute it or use it however you like. --restrict-filenames Restrict filenames to only ASCII characters, and avoid "&" and spaces in filenames + -A, --auto-number [deprecated; use -o + "%(autonumber)s-%(title)s.%(ext)s" ] number + downloaded files starting from 00000 -t, --title [deprecated] use title in file name (default) -l, --literal [deprecated] alias of --title @@ -158,7 +164,8 @@ which means you can modify it, redistribute it or use it however you like. downloads if possible. --no-continue do not resume partially downloaded files (restart from beginning) - --no-part do not use .part files + --no-part do not use .part files - write directly + into output file --no-mtime do not use the Last-modified header to set the file modification time --write-description write video description to a .description @@ -198,6 +205,10 @@ which means you can modify it, redistribute it or use it however you like. -j, --dump-json simulate, quiet but print JSON information. See --output for a description of available keys. + -J, --dump-single-json simulate, quiet but print JSON information + for each command-line argument. If the URL + refers to a playlist, dump the whole + playlist information in a single line. --newline output progress bar as new lines --no-progress do not print progress bar --console-title display progress in console titlebar @@ -216,7 +227,7 @@ which means you can modify it, redistribute it or use it however you like. information about the video. (Currently supported only for YouTube) --user-agent UA specify a custom user agent - --referer REF specify a custom referer, use if the video + --referer URL specify a custom referer, use if the video access is restricted to one domain --add-header FIELD:VALUE specify a custom HTTP header and its value, separated by a colon ':'. You can use this @@ -234,13 +245,20 @@ which means you can modify it, redistribute it or use it however you like. "worst", "worstvideo" and "worstaudio". By default, youtube-dl will pick the best quality. Use commas to download multiple - audio formats, such as -f - 136/137/mp4/bestvideo,140/m4a/bestaudio + audio formats, such as -f + 136/137/mp4/bestvideo,140/m4a/bestaudio. + You can merge the video and audio of two + formats into a single file using -f + <video-format>+<audio-format> (requires ffmpeg or + avconv), for example -f + bestvideo+bestaudio. --all-formats download all available video formats --prefer-free-formats prefer free video formats unless a specific one is requested --max-quality FORMAT highest quality format to download -F, --list-formats list all available formats + --youtube-skip-dash-manifest Do not download the DASH manifest on + YouTube videos ## Subtitle Options: --write-sub write subtitle file @@ -256,7 +274,7 @@ which means you can modify it, redistribute it or use it however you like.
language tags like 'en,pt' ## Authentication Options: - -u, --username USERNAME account username + -u, --username USERNAME login with this account ID -p, --password PASSWORD account password -2, --twofactor TWOFACTOR two-factor auth code -n, --netrc use .netrc authentication data @@ -267,7 +285,7 @@ which means you can modify it, redistribute it or use it however you like. (requires ffmpeg or avconv and ffprobe or avprobe) --audio-format FORMAT "best", "aac", "vorbis", "mp3", "m4a", - "opus", or "wav"; best by default + "opus", or "wav"; "best" by default --audio-quality QUALITY ffmpeg/avconv audio quality specification, insert a value between 0 (better) and 9 (worse) for VBR or a specific bitrate like @@ -374,7 +392,7 @@ Again, from then on you'll be able to update with `sudo youtube-dl -U`. YouTube changed their playlist format in March 2014 and later on, so you'll need at least youtube-dl 2014.07.25 to download all YouTube videos. -If you have installed youtube-dl with a package manager, pip, setup.py or a tarball, please use that to update. Note that Ubuntu packages do not seem to get updated anymore. Since we are not affiliated with Ubuntu, there is little we can do. Feel free to report bugs to the Ubuntu packaging guys - all they have to do is update the package to a somewhat recent version. See above for a way to update. +If you have installed youtube-dl with a package manager, pip, setup.py or a tarball, please use that to update. Note that Ubuntu packages do not seem to get updated anymore. Since we are not affiliated with Ubuntu, there is little we can do. Feel free to [report bugs](https://bugs.launchpad.net/ubuntu/+source/youtube-dl/+filebug) to the [Ubuntu packaging guys](mailto:ubuntu-motu@lists.ubuntu.com?subject=outdated%20version%20of%20youtube-dl) - all they have to do is update the package to a somewhat recent version. See above for a way to update. ### Do I always have to pass in `--max-quality FORMAT`, or `-citw`? @@ -478,14 +496,15 @@ If you want to add support for a new site, you can follow this quick list (assum def _real_extract(self, url): video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) # TODO more code goes here, for example ... - webpage = self._download_webpage(url, video_id) title = self._html_search_regex(r'<h1>(.*?)</h1>', webpage, 'title') return { 'id': video_id, 'title': title, + 'description': self._og_search_description(webpage), # TODO more properties (see youtube_dl/extractor/common.py) } ``` @@ -493,7 +512,7 @@ If you want to add support for a new site, you can follow this quick list (assum 6. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, then rename ``_TEST`` to ``_TESTS`` and make it into a list of dictionaries. The tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc. 7. Have a look at [`youtube_dl/extractor/common.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should return](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L38). Add tests and code for as many as you want. 8. If you can, check the code with [pyflakes](https://pypi.python.org/pypi/pyflakes) (a good idea) and [pep8](https://pypi.python.org/pypi/pep8) (optional, ignore E501). -9. When the tests pass, [add](https://www.kernel.org/pub/software/scm/git/docs/git-add.html) the new files and [commit](https://www.kernel.org/pub/software/scm/git/docs/git-commit.html) them and [push](https://www.kernel.org/pub/software/scm/git/docs/git-push.html) the result, like this: +9. When the tests pass, [add](http://git-scm.com/docs/git-add) the new files and [commit](http://git-scm.com/docs/git-commit) them and [push](http://git-scm.com/docs/git-push) the result, like this: $ git add youtube_dl/extractor/__init__.py $ git add youtube_dl/extractor/yourextractor.py @@ -504,15 +523,27 @@ If you want to add support for a new site, you can follow this quick list (assum In any case, thank you very much for your contributions! +# EMBEDDING YOUTUBE-DL + +youtube-dl makes the best effort to be a good command-line program, and thus should be callable from any programming language. If you encounter any problems parsing its output, feel free to [create a report](https://github.com/rg3/youtube-dl/issues/new). + +From a Python program, you can embed youtube-dl in a more powerful fashion, like this: + + import youtube_dl + + ydl_opts = {} + with youtube_dl.YoutubeDL(ydl_opts) as ydl: + ydl.download(['http://www.youtube.com/watch?v=BaW_jenozKc']) + +Most likely, you'll want to use various options. For a list of what can be done, have a look at [youtube_dl/YoutubeDL.py](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/YoutubeDL.py#L69). For a start, if you want to intercept youtube-dl's output, set a `logger` object (a minimal sketch follows below). + # BUGS -Bugs and suggestions should be reported at: <https://github.com/rg3/youtube-dl/issues>. Unless you were prompted to do so or there is another pertinent reason (e.g. GitHub fails to accept the bug report), please do not send bug reports via personal email. +Bugs and suggestions should be reported at: <https://github.com/rg3/youtube-dl/issues>. Unless you were prompted to do so or there is another pertinent reason (e.g. GitHub fails to accept the bug report), please do not send bug reports via personal email. For discussions, join us in the irc channel #youtube-dl on freenode. Please include the full output of the command when run with `--verbose`. The output (including the first lines) contains important debugging information. Issues without the full output are often not reproducible and therefore do not get solved in short order, if ever.
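As a minimal sketch of the `logger` option mentioned in the embedding section above: youtube-dl calls `debug`, `warning` and `error` on the supplied object, so any class providing those three methods works. The class name and method bodies here are illustrative, not part of the library:

```python
import youtube_dl


class MyLogger(object):
    # youtube-dl routes its messages to these three methods.
    def debug(self, msg):
        pass  # e.g. silently drop verbose debug output

    def warning(self, msg):
        pass

    def error(self, msg):
        print(msg)


ydl_opts = {'logger': MyLogger()}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['http://www.youtube.com/watch?v=BaW_jenozKc'])
```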
-For discussions, join us in the irc channel #youtube-dl on freenode. - -When you submit a request, please re-read it once to avoid a couple of mistakes (you can and should use this as a checklist): +Please re-read your issue once again to avoid a couple of common mistakes (you can and should use this as a checklist): ### Is the description of the issue itself sufficient? diff --git a/devscripts/bash-completion.py b/devscripts/bash-completion.py index 49287724d..cd26cc089 100755 --- a/devscripts/bash-completion.py +++ b/devscripts/bash-completion.py @@ -1,4 +1,6 @@ #!/usr/bin/env python +from __future__ import unicode_literals + import os from os.path import dirname as dirn import sys @@ -9,16 +11,17 @@ import youtube_dl BASH_COMPLETION_FILE = "youtube-dl.bash-completion" BASH_COMPLETION_TEMPLATE = "devscripts/bash-completion.in" + def build_completion(opt_parser): opts_flag = [] for group in opt_parser.option_groups: for option in group.option_list: - #for every long flag + # for every long flag opts_flag.append(option.get_opt_string()) with open(BASH_COMPLETION_TEMPLATE) as f: template = f.read() with open(BASH_COMPLETION_FILE, "w") as f: - #just using the special char + # just using the special char filled_template = template.replace("{{flags}}", " ".join(opts_flag)) f.write(filled_template) diff --git a/devscripts/buildserver.py b/devscripts/buildserver.py index e0c3cc83e..7c2f49f8b 100644 --- a/devscripts/buildserver.py +++ b/devscripts/buildserver.py @@ -142,7 +142,7 @@ def win_service_set_status(handle, status_code): def win_service_main(service_name, real_main, argc, argv_raw): try: - #args = [argv_raw[i].value for i in range(argc)] + # args = [argv_raw[i].value for i in range(argc)] stop_event = threading.Event() handler = HandlerEx(functools.partial(stop_event, win_service_handler)) h = advapi32.RegisterServiceCtrlHandlerExW(service_name, handler, None) @@ -233,6 +233,7 @@ def rmtree(path): #============================================================================== + class BuildError(Exception): def __init__(self, output, code=500): self.output = output @@ -369,7 +370,7 @@ class Builder(PythonBuilder, GITBuilder, YoutubeDLBuilder, DownloadBuilder, Clea class BuildHTTPRequestHandler(BaseHTTPRequestHandler): - actionDict = { 'build': Builder, 'download': Builder } # They're the same, no more caching. + actionDict = {'build': Builder, 'download': Builder} # They're the same, no more caching. 
def do_GET(self): path = urlparse.urlparse(self.path) diff --git a/devscripts/check-porn.py b/devscripts/check-porn.py index 86aa37b5f..216282712 100644 --- a/devscripts/check-porn.py +++ b/devscripts/check-porn.py @@ -1,4 +1,5 @@ #!/usr/bin/env python +from __future__ import unicode_literals """ This script employs a VERY basic heuristic ('porn' in webpage.lower()) to check diff --git a/devscripts/fish-completion.py b/devscripts/fish-completion.py index f4aaf0201..c2f238798 100755 --- a/devscripts/fish-completion.py +++ b/devscripts/fish-completion.py @@ -23,13 +23,13 @@ EXTRA_ARGS = { 'batch-file': ['--require-parameter'], } + def build_completion(opt_parser): commands = [] for group in opt_parser.option_groups: for option in group.option_list: long_option = option.get_opt_string().strip('-') - help_msg = shell_quote([option.help]) complete_cmd = ['complete', '--command', 'youtube-dl', '--long-option', long_option] if option._short_opts: complete_cmd += ['--short-option', option._short_opts[0].strip('-')] diff --git a/devscripts/gh-pages/add-version.py b/devscripts/gh-pages/add-version.py index 35865b2f3..867ea0048 100755 --- a/devscripts/gh-pages/add-version.py +++ b/devscripts/gh-pages/add-version.py @@ -1,4 +1,5 @@ #!/usr/bin/env python3 +from __future__ import unicode_literals import json import sys diff --git a/devscripts/gh-pages/generate-download.py b/devscripts/gh-pages/generate-download.py index 55912e12c..392e3ba21 100755 --- a/devscripts/gh-pages/generate-download.py +++ b/devscripts/gh-pages/generate-download.py @@ -1,8 +1,7 @@ #!/usr/bin/env python3 +from __future__ import unicode_literals + import hashlib -import shutil -import subprocess -import tempfile import urllib.request import json diff --git a/devscripts/gh-pages/sign-versions.py b/devscripts/gh-pages/sign-versions.py index 8a824df56..fa389c358 100755 --- a/devscripts/gh-pages/sign-versions.py +++ b/devscripts/gh-pages/sign-versions.py @@ -1,4 +1,5 @@ #!/usr/bin/env python3 +from __future__ import unicode_literals, with_statement import rsa import json @@ -11,22 +12,23 @@ except NameError: versions_info = json.load(open('update/versions.json')) if 'signature' in versions_info: - del versions_info['signature'] + del versions_info['signature'] print('Enter the PKCS1 private key, followed by a blank line:') privkey = b'' while True: - try: - line = input() - except EOFError: - break - if line == '': - break - privkey += line.encode('ascii') + b'\n' + try: + line = input() + except EOFError: + break + if line == '': + break + privkey += line.encode('ascii') + b'\n' privkey = rsa.PrivateKey.load_pkcs1(privkey) signature = hexlify(rsa.pkcs1.sign(json.dumps(versions_info, sort_keys=True).encode('utf-8'), privkey, 'SHA-256')).decode() print('signature: ' + signature) versions_info['signature'] = signature -json.dump(versions_info, open('update/versions.json', 'w'), indent=4, sort_keys=True) \ No newline at end of file +with open('update/versions.json', 'w') as versionsf: + json.dump(versions_info, versionsf, indent=4, sort_keys=True) diff --git a/devscripts/gh-pages/update-copyright.py b/devscripts/gh-pages/update-copyright.py index 12c2a9194..3663c8afe 100755 --- a/devscripts/gh-pages/update-copyright.py +++ b/devscripts/gh-pages/update-copyright.py @@ -1,11 +1,11 @@ #!/usr/bin/env python # coding: utf-8 -from __future__ import with_statement +from __future__ import with_statement, unicode_literals import datetime import glob -import io # For Python 2 compatibility +import io  # For Python 2 compatibility import os import re
@@ -13,7 +13,7 @@ year = str(datetime.datetime.now().year) for fn in glob.glob('*.html*'): with io.open(fn, encoding='utf-8') as f: content = f.read() - newc = re.sub(u'(?P<copyright>Copyright © 2006-)(?P<year>[0-9]{4})', u'Copyright © 2006-' + year, content) + newc = re.sub(r'(?P<copyright>Copyright © 2006-)(?P<year>[0-9]{4})', 'Copyright © 2006-' + year, content) if content != newc: tmpFn = fn + '.part' with io.open(tmpFn, 'wt', encoding='utf-8') as outf: diff --git a/devscripts/gh-pages/update-feed.py b/devscripts/gh-pages/update-feed.py index 0ba15ae0f..e93eb60fb 100755 --- a/devscripts/gh-pages/update-feed.py +++ b/devscripts/gh-pages/update-feed.py @@ -1,4 +1,5 @@ #!/usr/bin/env python3 +from __future__ import unicode_literals import datetime import io @@ -73,4 +74,3 @@ atom_template = atom_template.replace('@ENTRIES@', entries_str) with io.open('update/releases.atom', 'w', encoding='utf-8') as atom_file: atom_file.write(atom_template) - diff --git a/devscripts/gh-pages/update-sites.py b/devscripts/gh-pages/update-sites.py index 153e15c8a..f0f0481c7 100755 --- a/devscripts/gh-pages/update-sites.py +++ b/devscripts/gh-pages/update-sites.py @@ -1,4 +1,5 @@ #!/usr/bin/env python3 +from __future__ import unicode_literals import sys import os @@ -9,6 +10,7 @@ sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath( import youtube_dl + def main(): with open('supportedsites.html.in', 'r', encoding='utf-8') as tmplf: template = tmplf.read() @@ -21,7 +23,7 @@ def main(): continue elif ie_desc is not None: ie_html += ': {}'.format(ie.IE_DESC) - if ie.working() == False: + if not ie.working(): ie_html += ' (Currently broken)' ie_htmls.append('<li>{}</li>'.format(ie_html)) diff --git a/devscripts/make_contributing.py b/devscripts/make_contributing.py new file mode 100755 index 000000000..5fa8cf851 --- /dev/null +++ b/devscripts/make_contributing.py @@ -0,0 +1,32 @@ +#!/usr/bin/env python +from __future__ import unicode_literals + +import argparse +import io +import re + + +def main(): + parser = argparse.ArgumentParser() + parser.add_argument( + 'INFILE', help='README.md file name to read from') + parser.add_argument( + 'OUTFILE', help='CONTRIBUTING.md file name to write to') + args = parser.parse_args() + + with io.open(args.INFILE, encoding='utf-8') as inf: + readme = inf.read() + + bug_text = re.search( + r'(?s)#\s*BUGS\s*[^\n]*\s*(.*?)#\s*COPYRIGHT', readme).group(1) + dev_text = re.search( + r'(?s)(#\s*DEVELOPER INSTRUCTIONS.*?)#\s*EMBEDDING YOUTUBE-DL', + readme).group(1) + + out = bug_text + dev_text + + with io.open(args.OUTFILE, 'w', encoding='utf-8') as outf: + outf.write(out) + +if __name__ == '__main__': + main() diff --git a/devscripts/make_readme.py b/devscripts/make_readme.py index 70fa942dd..8fbce0796 100755 --- a/devscripts/make_readme.py +++ b/devscripts/make_readme.py @@ -1,3 +1,5 @@ +from __future__ import unicode_literals + import io import sys import re diff --git a/devscripts/prepare_manpage.py b/devscripts/prepare_manpage.py index d9c857015..f66bebfea 100644 --- a/devscripts/prepare_manpage.py +++ b/devscripts/prepare_manpage.py @@ -1,3 +1,4 @@ +from __future__ import unicode_literals import io import os.path diff --git a/devscripts/transition_helper.py b/devscripts/transition_helper.py deleted file mode 100644 index d5ca2d4ba..000000000 --- a/devscripts/transition_helper.py +++ /dev/null @@ -1,40 +0,0 @@ -#!/usr/bin/env python - -import sys, os - -try: - import urllib.request as compat_urllib_request -except ImportError: # Python 2 - import urllib2 as compat_urllib_request - -sys.stderr.write(u'Hi! We changed distribution method and now youtube-dl needs to update itself one more time.\n') -sys.stderr.write(u'This will only happen once. Simply press enter to go on. Sorry for the trouble!\n') -sys.stderr.write(u'The new location of the binaries is https://github.com/rg3/youtube-dl/downloads, not the git repository.\n\n') - -try: - raw_input() -except NameError: # Python 3 - input() - -filename = sys.argv[0] - -API_URL = "https://api.github.com/repos/rg3/youtube-dl/downloads" -BIN_URL = "https://github.com/downloads/rg3/youtube-dl/youtube-dl" - -if not os.access(filename, os.W_OK): - sys.exit('ERROR: no write permissions on %s' % filename) - -try: - urlh = compat_urllib_request.urlopen(BIN_URL) - newcontent = urlh.read() - urlh.close() -except (IOError, OSError) as err: - sys.exit('ERROR: unable to download latest version') - -try: - with open(filename, 'wb') as outf: - outf.write(newcontent) -except (IOError, OSError) as err: - sys.exit('ERROR: unable to overwrite current version') - -sys.stderr.write(u'Done! 
Now you can run youtube-dl.\n') diff --git a/devscripts/transition_helper_exe/setup.py b/devscripts/transition_helper_exe/setup.py deleted file mode 100644 index aaf5c2983..000000000 --- a/devscripts/transition_helper_exe/setup.py +++ /dev/null @@ -1,12 +0,0 @@ -from distutils.core import setup -import py2exe - -py2exe_options = { - "bundle_files": 1, - "compressed": 1, - "optimize": 2, - "dist_dir": '.', - "dll_excludes": ['w9xpopen.exe'] -} - -setup(console=['youtube-dl.py'], options={ "py2exe": py2exe_options }, zipfile=None) \ No newline at end of file diff --git a/devscripts/transition_helper_exe/youtube-dl.py b/devscripts/transition_helper_exe/youtube-dl.py deleted file mode 100644 index 6297dfd40..000000000 --- a/devscripts/transition_helper_exe/youtube-dl.py +++ /dev/null @@ -1,102 +0,0 @@ -#!/usr/bin/env python - -import sys, os -import urllib2 -import json, hashlib - -def rsa_verify(message, signature, key): - from struct import pack - from hashlib import sha256 - from sys import version_info - def b(x): - if version_info[0] == 2: return x - else: return x.encode('latin1') - assert(type(message) == type(b(''))) - block_size = 0 - n = key[0] - while n: - block_size += 1 - n >>= 8 - signature = pow(int(signature, 16), key[1], key[0]) - raw_bytes = [] - while signature: - raw_bytes.insert(0, pack("B", signature & 0xFF)) - signature >>= 8 - signature = (block_size - len(raw_bytes)) * b('\x00') + b('').join(raw_bytes) - if signature[0:2] != b('\x00\x01'): return False - signature = signature[2:] - if not b('\x00') in signature: return False - signature = signature[signature.index(b('\x00'))+1:] - if not signature.startswith(b('\x30\x31\x30\x0D\x06\x09\x60\x86\x48\x01\x65\x03\x04\x02\x01\x05\x00\x04\x20')): return False - signature = signature[19:] - if signature != sha256(message).digest(): return False - return True - -sys.stderr.write(u'Hi! We changed distribution method and now youtube-dl needs to update itself one more time.\n') -sys.stderr.write(u'This will only happen once. Simply press enter to go on. Sorry for the trouble!\n') -sys.stderr.write(u'From now on, get the binaries from http://rg3.github.com/youtube-dl/download.html, not from the git repository.\n\n') - -raw_input() - -filename = sys.argv[0] - -UPDATE_URL = "http://rg3.github.io/youtube-dl/update/" -VERSION_URL = UPDATE_URL + 'LATEST_VERSION' -JSON_URL = UPDATE_URL + 'versions.json' -UPDATES_RSA_KEY = (0x9d60ee4d8f805312fdb15a62f87b95bd66177b91df176765d13514a0f1754bcd2057295c5b6f1d35daa6742c3ffc9a82d3e118861c207995a8031e151d863c9927e304576bc80692bc8e094896fcf11b66f3e29e04e3a71e9a11558558acea1840aec37fc396fb6b65dc81a1c4144e03bd1c011de62e3f1357b327d08426fe93, 65537) - -if not os.access(filename, os.W_OK): - sys.exit('ERROR: no write permissions on %s' % filename) - -exe = os.path.abspath(filename) -directory = os.path.dirname(exe) -if not os.access(directory, os.W_OK): - sys.exit('ERROR: no write permissions on %s' % directory) - -try: - versions_info = urllib2.urlopen(JSON_URL).read().decode('utf-8') - versions_info = json.loads(versions_info) -except: - sys.exit(u'ERROR: can\'t obtain versions info. Please try again later.') -if not 'signature' in versions_info: - sys.exit(u'ERROR: the versions file is not signed or corrupted. Aborting.') -signature = versions_info['signature'] -del versions_info['signature'] -if not rsa_verify(json.dumps(versions_info, sort_keys=True), signature, UPDATES_RSA_KEY): - sys.exit(u'ERROR: the versions file signature is invalid. 
Aborting.') - -version = versions_info['versions'][versions_info['latest']] - -try: - urlh = urllib2.urlopen(version['exe'][0]) - newcontent = urlh.read() - urlh.close() -except (IOError, OSError) as err: - sys.exit('ERROR: unable to download latest version') - -newcontent_hash = hashlib.sha256(newcontent).hexdigest() -if newcontent_hash != version['exe'][1]: - sys.exit(u'ERROR: the downloaded file hash does not match. Aborting.') - -try: - with open(exe + '.new', 'wb') as outf: - outf.write(newcontent) -except (IOError, OSError) as err: - sys.exit(u'ERROR: unable to write the new version') - -try: - bat = os.path.join(directory, 'youtube-dl-updater.bat') - b = open(bat, 'w') - b.write(""" -echo Updating youtube-dl... -ping 127.0.0.1 -n 5 -w 1000 > NUL -move /Y "%s.new" "%s" -del "%s" - \n""" %(exe, exe, bat)) - b.close() - - os.startfile(bat) -except (IOError, OSError) as err: - sys.exit('ERROR: unable to overwrite current version') - -sys.stderr.write(u'Done! Now you can run youtube-dl.\n') diff --git a/devscripts/zsh-completion.in b/devscripts/zsh-completion.in new file mode 100644 index 000000000..b394a1ae7 --- /dev/null +++ b/devscripts/zsh-completion.in @@ -0,0 +1,28 @@ +#compdef youtube-dl + +__youtube_dl() { + local curcontext="$curcontext" fileopts diropts cur prev + typeset -A opt_args + fileopts="{{fileopts}}" + diropts="{{diropts}}" + cur=$words[CURRENT] + case $cur in + :) + _arguments '*: :(::ytfavorites ::ytrecommended ::ytsubscriptions ::ytwatchlater ::ythistory)' + ;; + *) + prev=$words[CURRENT-1] + if [[ ${prev} =~ ${fileopts} ]]; then + _path_files + elif [[ ${prev} =~ ${diropts} ]]; then + _path_files -/ + elif [[ ${prev} == "--recode-video" ]]; then + _arguments '*: :(mp4 flv ogg webm mkv)' + else + _arguments '*: :({{flags}})' + fi + ;; + esac +} + +__youtube_dl \ No newline at end of file diff --git a/devscripts/zsh-completion.py b/devscripts/zsh-completion.py new file mode 100755 index 000000000..f200f2c80 --- /dev/null +++ b/devscripts/zsh-completion.py @@ -0,0 +1,48 @@ +#!/usr/bin/env python +from __future__ import unicode_literals + +import os +from os.path import dirname as dirn +import sys + +sys.path.append(dirn(dirn((os.path.abspath(__file__))))) +import youtube_dl + +ZSH_COMPLETION_FILE = "youtube-dl.zsh" +ZSH_COMPLETION_TEMPLATE = "devscripts/zsh-completion.in" + + +def build_completion(opt_parser): + opts = [opt for group in opt_parser.option_groups + for opt in group.option_list] + opts_file = [opt for opt in opts if opt.metavar == "FILE"] + opts_dir = [opt for opt in opts if opt.metavar == "DIR"] + + fileopts = [] + for opt in opts_file: + if opt._short_opts: + fileopts.extend(opt._short_opts) + if opt._long_opts: + fileopts.extend(opt._long_opts) + + diropts = [] + for opt in opts_dir: + if opt._short_opts: + diropts.extend(opt._short_opts) + if opt._long_opts: + diropts.extend(opt._long_opts) + + flags = [opt.get_opt_string() for opt in opts] + + with open(ZSH_COMPLETION_TEMPLATE) as f: + template = f.read() + + template = template.replace("{{fileopts}}", "|".join(fileopts)) + template = template.replace("{{diropts}}", "|".join(diropts)) + template = template.replace("{{flags}}", " ".join(flags)) + + with open(ZSH_COMPLETION_FILE, "w") as f: + f.write(template) + +parser = youtube_dl.parseOpts()[0] +build_completion(parser) diff --git a/docs/conf.py b/docs/conf.py index 4a04ad779..594ca61a6 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -44,8 +44,8 @@ copyright = u'2014, Ricardo Garcia Gonzalez' # built documents. # # The short X.Y version. 
-import youtube_dl -version = youtube_dl.__version__ +from youtube_dl.version import __version__ +version = __version__ # The full version, including alpha/beta/rc tags. release = version diff --git a/setup.py b/setup.py index cf6b92b0f..4686260e0 100644 --- a/setup.py +++ b/setup.py @@ -4,7 +4,6 @@ from __future__ import print_function import os.path -import pkg_resources import warnings import sys @@ -103,7 +102,9 @@ setup( "Programming Language :: Python :: 2.6", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", - "Programming Language :: Python :: 3.3" + "Programming Language :: Python :: 3.2", + "Programming Language :: Python :: 3.3", + "Programming Language :: Python :: 3.4", ], **params diff --git a/test/helper.py b/test/helper.py index 62cb3ce02..8a820526a 100644 --- a/test/helper.py +++ b/test/helper.py @@ -57,9 +57,9 @@ class FakeYDL(YoutubeDL): # Different instances of the downloader can't share the same dictionary # some test set the "sublang" parameter, which would break the md5 checks. params = get_params(override=override) - super(FakeYDL, self).__init__(params) + super(FakeYDL, self).__init__(params, auto_init=False) self.result = [] - + def to_screen(self, s, skip_eol=None): print(s) @@ -72,8 +72,10 @@ class FakeYDL(YoutubeDL): def expect_warning(self, regex): # Silence an expected warning matching a regex old_report_warning = self.report_warning + def report_warning(self, message): - if re.match(regex, message): return + if re.match(regex, message): + return old_report_warning(message) self.report_warning = types.MethodType(report_warning, self) @@ -114,14 +116,14 @@ def expect_info_dict(self, expected_dict, got_dict): elif isinstance(expected, type): got = got_dict.get(info_field) self.assertTrue(isinstance(got, expected), - 'Expected type %r for field %s, but got value %r of type %r' % (expected, info_field, got, type(got))) + 'Expected type %r for field %s, but got value %r of type %r' % (expected, info_field, got, type(got))) else: if isinstance(expected, compat_str) and expected.startswith('md5:'): got = 'md5:' + md5(got_dict.get(info_field)) else: got = got_dict.get(info_field) self.assertEqual(expected, got, - 'invalid value for field %s, expected %r, got %r' % (info_field, expected, got)) + 'invalid value for field %s, expected %r, got %r' % (info_field, expected, got)) # Check for the presence of mandatory fields if got_dict.get('_type') != 'playlist': @@ -133,19 +135,20 @@ def expect_info_dict(self, expected_dict, got_dict): # Are checkable fields missing from the test case definition? 
test_info_dict = dict((key, value if not isinstance(value, compat_str) or len(value) < 250 else 'md5:' + md5(value)) - for key, value in got_dict.items() - if value and key in ('title', 'description', 'uploader', 'upload_date', 'timestamp', 'uploader_id', 'location')) + for key, value in got_dict.items() + if value and key in ('title', 'description', 'uploader', 'upload_date', 'timestamp', 'uploader_id', 'location')) missing_keys = set(test_info_dict.keys()) - set(expected_dict.keys()) if missing_keys: def _repr(v): if isinstance(v, compat_str): - return "'%s'" % v.replace('\\', '\\\\').replace("'", "\\'") + return "'%s'" % v.replace('\\', '\\\\').replace("'", "\\'").replace('\n', '\\n') else: return repr(v) info_dict_str = ''.join( ' %s: %s,\n' % (_repr(k), _repr(v)) for k, v in test_info_dict.items()) - write_string('\n"info_dict": {' + info_dict_str + '}\n', out=sys.stderr) + write_string( + '\n\'info_dict\': {\n' + info_dict_str + '}\n', out=sys.stderr) self.assertFalse( missing_keys, 'Missing keys in test definition: %s' % ( @@ -158,7 +161,9 @@ def assertRegexpMatches(self, text, regexp, msg=None): else: m = re.match(regexp, text) if not m: - note = 'Regexp didn\'t match: %r not found in %r' % (regexp, text) + note = 'Regexp didn\'t match: %r not found' % (regexp) + if len(text) < 1000: + note += ' in %r' % text if msg is None: msg = note else: @@ -171,3 +176,13 @@ def assertGreaterEqual(self, got, expected, msg=None): if msg is None: msg = '%r not greater than or equal to %r' % (got, expected) self.assertTrue(got >= expected, msg) + + +def expect_warnings(ydl, warnings_re): + real_warning = ydl.report_warning + + def _report_warning(w): + if not any(re.search(w_re, w) for w_re in warnings_re): + real_warning(w) + + ydl.report_warning = _report_warning diff --git a/test/swftests/ConstArrayAccess.as b/test/swftests/ConstArrayAccess.as new file mode 100644 index 000000000..07dc3f460 --- /dev/null +++ b/test/swftests/ConstArrayAccess.as @@ -0,0 +1,18 @@ +// input: [] +// output: 4 + +package { +public class ConstArrayAccess { + private static const x:int = 2; + private static const ar:Array = ["42", "3411"]; + + public static function main():int{ + var c:ConstArrayAccess = new ConstArrayAccess(); + return c.f(); + } + + public function f(): int { + return ar[1].length; + } +} +} diff --git a/test/swftests/ConstantInt.as b/test/swftests/ConstantInt.as new file mode 100644 index 000000000..e0bbb6166 --- /dev/null +++ b/test/swftests/ConstantInt.as @@ -0,0 +1,12 @@ +// input: [] +// output: 2 + +package { +public class ConstantInt { + private static const x:int = 2; + + public static function main():int{ + return x; + } +} +} diff --git a/test/swftests/DictCall.as b/test/swftests/DictCall.as new file mode 100644 index 000000000..c2d174cc2 --- /dev/null +++ b/test/swftests/DictCall.as @@ -0,0 +1,10 @@ +// input: [{"x": 1, "y": 2}] +// output: 3 + +package { +public class DictCall { + public static function main(d:Object):int{ + return d.x + d.y; + } +} +} diff --git a/test/swftests/EqualsOperator.as b/test/swftests/EqualsOperator.as new file mode 100644 index 000000000..837a69a46 --- /dev/null +++ b/test/swftests/EqualsOperator.as @@ -0,0 +1,10 @@ +// input: [] +// output: false + +package { +public class EqualsOperator { + public static function main():Boolean{ + return 1 == 2; + } +} +} diff --git a/test/swftests/MemberAssignment.as b/test/swftests/MemberAssignment.as new file mode 100644 index 000000000..dcba5e3ff --- /dev/null +++ b/test/swftests/MemberAssignment.as @@ -0,0 +1,22 @@ +// 
input: [1] +// output: 2 + +package { +public class MemberAssignment { + public var v:int; + + public function g():int { + return this.v; + } + + public function f(a:int):int{ + this.v = a; + return this.v + this.g(); + } + + public static function main(a:int): int { + var v:MemberAssignment = new MemberAssignment(); + return v.f(a); + } +} +} diff --git a/test/swftests/NeOperator.as b/test/swftests/NeOperator.as new file mode 100644 index 000000000..61dcbc4e9 --- /dev/null +++ b/test/swftests/NeOperator.as @@ -0,0 +1,24 @@ +// input: [] +// output: 123 + +package { +public class NeOperator { + public static function main(): int { + var res:int = 0; + if (1 != 2) { + res += 3; + } else { + res += 4; + } + if (2 != 2) { + res += 10; + } else { + res += 20; + } + if (9 == 9) { + res += 100; + } + return res; + } +} +} diff --git a/test/swftests/PrivateVoidCall.as b/test/swftests/PrivateVoidCall.as new file mode 100644 index 000000000..2cc016797 --- /dev/null +++ b/test/swftests/PrivateVoidCall.as @@ -0,0 +1,22 @@ +// input: [] +// output: 9 + +package { +public class PrivateVoidCall { + public static function main():int{ + var f:OtherClass = new OtherClass(); + f.func(); + return 9; + } +} +} + +class OtherClass { + private function pf():void { + ; + } + + public function func():void { + this.pf(); + } +} diff --git a/test/swftests/StringBasics.as b/test/swftests/StringBasics.as new file mode 100644 index 000000000..d27430b13 --- /dev/null +++ b/test/swftests/StringBasics.as @@ -0,0 +1,11 @@ +// input: [] +// output: 3 + +package { +public class StringBasics { + public static function main():int{ + var s:String = "abc"; + return s.length; + } +} +} diff --git a/test/swftests/StringCharCodeAt.as b/test/swftests/StringCharCodeAt.as new file mode 100644 index 000000000..c20d74d65 --- /dev/null +++ b/test/swftests/StringCharCodeAt.as @@ -0,0 +1,11 @@ +// input: [] +// output: 9897 + +package { +public class StringCharCodeAt { + public static function main():int{ + var s:String = "abc"; + return s.charCodeAt(1) * 100 + s.charCodeAt(); + } +} +} diff --git a/test/swftests/StringConversion.as b/test/swftests/StringConversion.as new file mode 100644 index 000000000..c976f5042 --- /dev/null +++ b/test/swftests/StringConversion.as @@ -0,0 +1,11 @@ +// input: [] +// output: 2 + +package { +public class StringConversion { + public static function main():int{ + var s:String = String(99); + return s.length; + } +} +} diff --git a/test/test_YoutubeDL.py b/test/test_YoutubeDL.py index ab61e1976..f8e4f930e 100644 --- a/test/test_YoutubeDL.py +++ b/test/test_YoutubeDL.py @@ -266,6 +266,7 @@ class TestFormatSelection(unittest.TestCase): 'ext': 'mp4', 'width': None, } + def fname(templ): ydl = YoutubeDL({'outtmpl': templ}) return ydl.prepare_filename(info) diff --git a/test/test_age_restriction.py b/test/test_age_restriction.py index 71e80b037..5be065c43 100644 --- a/test/test_age_restriction.py +++ b/test/test_age_restriction.py @@ -1,4 +1,5 @@ #!/usr/bin/env python +from __future__ import unicode_literals # Allow direct execution import os @@ -19,7 +20,7 @@ def _download_restricted(url, filename, age): 'age_limit': age, 'skip_download': True, 'writeinfojson': True, - "outtmpl": "%(id)s.%(ext)s", + 'outtmpl': '%(id)s.%(ext)s', } ydl = YoutubeDL(params) ydl.add_default_info_extractors() diff --git a/test/test_all_urls.py b/test/test_all_urls.py index 84b05da39..bd4fe17bf 100644 --- a/test/test_all_urls.py +++ b/test/test_all_urls.py @@ -14,7 +14,7 @@ from test.helper import gettestcases from 
youtube_dl.extractor import ( FacebookIE, gen_extractors, - JustinTVIE, + TwitchIE, YoutubeIE, ) @@ -32,19 +32,19 @@ class TestAllURLsMatching(unittest.TestCase): def test_youtube_playlist_matching(self): assertPlaylist = lambda url: self.assertMatch(url, ['youtube:playlist']) assertPlaylist('ECUl4u3cNGP61MdtwGTqZA0MreSaDybji8') - assertPlaylist('UUBABnxM4Ar9ten8Mdjj1j0Q') #585 + assertPlaylist('UUBABnxM4Ar9ten8Mdjj1j0Q') # 585 assertPlaylist('PL63F0C78739B09958') assertPlaylist('https://www.youtube.com/playlist?list=UUBABnxM4Ar9ten8Mdjj1j0Q') assertPlaylist('https://www.youtube.com/course?list=ECUl4u3cNGP61MdtwGTqZA0MreSaDybji8') assertPlaylist('https://www.youtube.com/playlist?list=PLwP_SiAcdui0KVebT0mU9Apz359a4ubsC') - assertPlaylist('https://www.youtube.com/watch?v=AV6J6_AeFEQ&playnext=1&list=PL4023E734DA416012') #668 + assertPlaylist('https://www.youtube.com/watch?v=AV6J6_AeFEQ&playnext=1&list=PL4023E734DA416012') # 668 self.assertFalse('youtube:playlist' in self.matching_ies('PLtS2H6bU1M')) # Top tracks assertPlaylist('https://www.youtube.com/playlist?list=MCUS.20142101') def test_youtube_matching(self): self.assertTrue(YoutubeIE.suitable('PLtS2H6bU1M')) - self.assertFalse(YoutubeIE.suitable('https://www.youtube.com/watch?v=AV6J6_AeFEQ&playnext=1&list=PL4023E734DA416012')) #668 + self.assertFalse(YoutubeIE.suitable('https://www.youtube.com/watch?v=AV6J6_AeFEQ&playnext=1&list=PL4023E734DA416012')) # 668 self.assertMatch('http://youtu.be/BaW_jenozKc', ['youtube']) self.assertMatch('http://www.youtube.com/v/BaW_jenozKc', ['youtube']) self.assertMatch('https://youtube.googleapis.com/v/BaW_jenozKc', ['youtube']) @@ -72,21 +72,17 @@ class TestAllURLsMatching(unittest.TestCase): self.assertMatch('http://www.youtube.com/results?search_query=making+mustard', ['youtube:search_url']) self.assertMatch('https://www.youtube.com/results?baz=bar&search_query=youtube-dl+test+video&filters=video&lclk=video', ['youtube:search_url']) - def test_justin_tv_channelid_matching(self): - self.assertTrue(JustinTVIE.suitable('justin.tv/vanillatv')) - self.assertTrue(JustinTVIE.suitable('twitch.tv/vanillatv')) - self.assertTrue(JustinTVIE.suitable('www.justin.tv/vanillatv')) - self.assertTrue(JustinTVIE.suitable('www.twitch.tv/vanillatv')) - self.assertTrue(JustinTVIE.suitable('http://www.justin.tv/vanillatv')) - self.assertTrue(JustinTVIE.suitable('http://www.twitch.tv/vanillatv')) - self.assertTrue(JustinTVIE.suitable('http://www.justin.tv/vanillatv/')) - self.assertTrue(JustinTVIE.suitable('http://www.twitch.tv/vanillatv/')) - - def test_justintv_videoid_matching(self): - self.assertTrue(JustinTVIE.suitable('http://www.twitch.tv/vanillatv/b/328087483')) - - def test_justin_tv_chapterid_matching(self): - self.assertTrue(JustinTVIE.suitable('http://www.twitch.tv/tsm_theoddone/c/2349361')) + def test_twitch_channelid_matching(self): + self.assertTrue(TwitchIE.suitable('twitch.tv/vanillatv')) + self.assertTrue(TwitchIE.suitable('www.twitch.tv/vanillatv')) + self.assertTrue(TwitchIE.suitable('http://www.twitch.tv/vanillatv')) + self.assertTrue(TwitchIE.suitable('http://www.twitch.tv/vanillatv/')) + + def test_twitch_videoid_matching(self): + self.assertTrue(TwitchIE.suitable('http://www.twitch.tv/vanillatv/b/328087483')) + + def test_twitch_chapterid_matching(self): + self.assertTrue(TwitchIE.suitable('http://www.twitch.tv/tsm_theoddone/c/2349361')) def test_youtube_extract(self): assertExtractId = lambda url, id: self.assertEqual(YoutubeIE.extract_id(url), id) diff --git a/test/test_compat.py 
b/test/test_compat.py new file mode 100644 index 000000000..1eb454e06 --- /dev/null +++ b/test/test_compat.py @@ -0,0 +1,46 @@ +#!/usr/bin/env python +# coding: utf-8 + +from __future__ import unicode_literals + +# Allow direct execution +import os +import sys +import unittest +sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + + +from youtube_dl.utils import get_filesystem_encoding +from youtube_dl.compat import ( + compat_getenv, + compat_expanduser, +) + + +class TestCompat(unittest.TestCase): + def test_compat_getenv(self): + test_str = 'тест' + os.environ['YOUTUBE-DL-TEST'] = ( + test_str if sys.version_info >= (3, 0) + else test_str.encode(get_filesystem_encoding())) + self.assertEqual(compat_getenv('YOUTUBE-DL-TEST'), test_str) + + def test_compat_expanduser(self): + old_home = os.environ.get('HOME') + test_str = 'C:\Documents and Settings\тест\Application Data' + os.environ['HOME'] = ( + test_str if sys.version_info >= (3, 0) + else test_str.encode(get_filesystem_encoding())) + self.assertEqual(compat_expanduser('~'), test_str) + os.environ['HOME'] = old_home + + def test_all_present(self): + import youtube_dl.compat + all_names = youtube_dl.compat.__all__ + present_names = set(filter( + lambda c: '_' in c and not c.startswith('_'), + dir(youtube_dl.compat))) - set(['unicode_literals']) + self.assertEqual(all_names, sorted(present_names)) + +if __name__ == '__main__': + unittest.main() diff --git a/test/test_download.py b/test/test_download.py index 8178015ea..a009aa475 100644 --- a/test/test_download.py +++ b/test/test_download.py @@ -1,5 +1,7 @@ #!/usr/bin/env python +from __future__ import unicode_literals + # Allow direct execution import os import sys @@ -8,6 +10,7 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) from test.helper import ( assertGreaterEqual, + expect_warnings, get_params, gettestcases, expect_info_dict, @@ -22,10 +25,12 @@ import json import socket import youtube_dl.YoutubeDL -from youtube_dl.utils import ( +from youtube_dl.compat import ( compat_http_client, compat_urllib_error, compat_HTTPError, +) +from youtube_dl.utils import ( DownloadError, ExtractorError, format_bytes, @@ -35,18 +40,22 @@ from youtube_dl.extractor import get_info_extractor RETRIES = 3 + class YoutubeDL(youtube_dl.YoutubeDL): def __init__(self, *args, **kwargs): self.to_stderr = self.to_screen self.processed_info_dicts = [] super(YoutubeDL, self).__init__(*args, **kwargs) + def report_warning(self, message): # Don't accept warnings during tests raise ExtractorError(message) + def process_info(self, info_dict): self.processed_info_dicts.append(info_dict) return super(YoutubeDL, self).process_info(info_dict) + def _file_md5(fn): with open(fn, 'rb') as f: return hashlib.md5(f.read()).hexdigest() @@ -56,10 +65,13 @@ defs = gettestcases() class TestDownload(unittest.TestCase): maxDiff = None + def setUp(self): self.defs = defs -### Dynamically generate tests +# Dynamically generate tests + + def generator(test_case): def test_template(self): @@ -85,7 +97,7 @@ def generator(test_case): return for other_ie in other_ies: if not other_ie.working(): - print_skipping(u'test depends on %sIE, marked as not WORKING' % other_ie.ie_key()) + print_skipping('test depends on %sIE, marked as not WORKING' % other_ie.ie_key()) return params = get_params(test_case.get('params', {})) @@ -93,18 +105,21 @@ def generator(test_case): params.setdefault('extract_flat', True) params.setdefault('skip_download', True) - ydl = YoutubeDL(params) + ydl = 
YoutubeDL(params, auto_init=False) ydl.add_default_info_extractors() finished_hook_called = set() + def _hook(status): if status['status'] == 'finished': finished_hook_called.add(status['filename']) ydl.add_progress_hook(_hook) + expect_warnings(ydl, test_case.get('expected_warnings', [])) def get_tc_filename(tc): return tc.get('file') or ydl.prepare_filename(tc.get('info_dict', {})) res_dict = None + def try_rm_tcs_files(tcs=None): if tcs is None: tcs = test_cases @@ -128,7 +143,7 @@ def generator(test_case): raise if try_num == RETRIES: - report_warning(u'Failed due to network errors, skipping...') + report_warning('Failed due to network errors, skipping...') return print('Retrying: {0} failed tries\n\n##########\n\n'.format(try_num)) @@ -183,7 +198,9 @@ def generator(test_case): md5_for_file = _file_md5(tc_filename) self.assertEqual(md5_for_file, tc['md5']) info_json_fn = os.path.splitext(tc_filename)[0] + '.info.json' - self.assertTrue(os.path.exists(info_json_fn)) + self.assertTrue( + os.path.exists(info_json_fn), + 'Missing info file %s' % info_json_fn) with io.open(info_json_fn, encoding='utf-8') as infof: info_dict = json.load(infof) @@ -198,15 +215,15 @@ def generator(test_case): return test_template -### And add them to TestDownload +# And add them to TestDownload for n, test_case in enumerate(defs): test_method = generator(test_case) tname = 'test_' + str(test_case['name']) i = 1 while hasattr(TestDownload, tname): - tname = 'test_' + str(test_case['name']) + '_' + str(i) + tname = 'test_%s_%d' % (test_case['name'], i) i += 1 - test_method.__name__ = tname + test_method.__name__ = str(tname) setattr(TestDownload, test_method.__name__, test_method) del test_method diff --git a/test/test_execution.py b/test/test_execution.py index 2b115fb31..60df187de 100644 --- a/test/test_execution.py +++ b/test/test_execution.py @@ -1,3 +1,6 @@ +#!/usr/bin/env python +from __future__ import unicode_literals + import unittest import sys @@ -6,17 +9,19 @@ import subprocess rootDir = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) + try: _DEV_NULL = subprocess.DEVNULL except AttributeError: _DEV_NULL = open(os.devnull, 'wb') + class TestExecution(unittest.TestCase): def test_import(self): subprocess.check_call([sys.executable, '-c', 'import youtube_dl'], cwd=rootDir) def test_module_exec(self): - if sys.version_info >= (2,7): # Python 2.6 doesn't support package execution + if sys.version_info >= (2, 7): # Python 2.6 doesn't support package execution subprocess.check_call([sys.executable, '-m', 'youtube_dl', '--version'], cwd=rootDir, stdout=_DEV_NULL) def test_main_exec(self): diff --git a/test/test_subtitles.py b/test/test_subtitles.py index 8f4602e5f..7c4cd8218 100644 --- a/test/test_subtitles.py +++ b/test/test_subtitles.py @@ -1,4 +1,5 @@ #!/usr/bin/env python +from __future__ import unicode_literals # Allow direct execution import os @@ -22,6 +23,7 @@ from youtube_dl.extractor import ( class BaseTestSubtitles(unittest.TestCase): url = None IE = None + def setUp(self): self.DL = FakeYDL() self.ie = self.IE(self.DL) @@ -74,7 +76,7 @@ class TestYoutubeSubtitles(BaseTestSubtitles): self.assertEqual(md5(subtitles['en']), '3cb210999d3e021bd6c7f0ea751eab06') def test_youtube_list_subtitles(self): - self.DL.expect_warning(u'Video doesn\'t have automatic captions') + self.DL.expect_warning('Video doesn\'t have automatic captions') self.DL.params['listsubtitles'] = True info_dict = self.getInfoDict() self.assertEqual(info_dict, None) @@ -87,7 +89,7 @@ class 
TestYoutubeSubtitles(BaseTestSubtitles): self.assertTrue(subtitles['it'] is not None) def test_youtube_nosubtitles(self): - self.DL.expect_warning(u'video doesn\'t have subtitles') + self.DL.expect_warning('video doesn\'t have subtitles') self.url = 'n5BB19UTcdA' self.DL.params['writesubtitles'] = True self.DL.params['allsubtitles'] = True @@ -101,7 +103,7 @@ class TestYoutubeSubtitles(BaseTestSubtitles): self.DL.params['subtitleslangs'] = langs subtitles = self.getSubtitles() for lang in langs: - self.assertTrue(subtitles.get(lang) is not None, u'Subtitles for \'%s\' not extracted' % lang) + self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang) class TestDailymotionSubtitles(BaseTestSubtitles): @@ -130,20 +132,20 @@ class TestDailymotionSubtitles(BaseTestSubtitles): self.assertEqual(len(subtitles.keys()), 5) def test_list_subtitles(self): - self.DL.expect_warning(u'Automatic Captions not supported by this server') + self.DL.expect_warning('Automatic Captions not supported by this server') self.DL.params['listsubtitles'] = True info_dict = self.getInfoDict() self.assertEqual(info_dict, None) def test_automatic_captions(self): - self.DL.expect_warning(u'Automatic Captions not supported by this server') + self.DL.expect_warning('Automatic Captions not supported by this server') self.DL.params['writeautomaticsub'] = True self.DL.params['subtitleslang'] = ['en'] subtitles = self.getSubtitles() self.assertTrue(len(subtitles.keys()) == 0) def test_nosubtitles(self): - self.DL.expect_warning(u'video doesn\'t have subtitles') + self.DL.expect_warning('video doesn\'t have subtitles') self.url = 'http://www.dailymotion.com/video/x12u166_le-zapping-tele-star-du-08-aout-2013_tv' self.DL.params['writesubtitles'] = True self.DL.params['allsubtitles'] = True @@ -156,7 +158,7 @@ class TestDailymotionSubtitles(BaseTestSubtitles): self.DL.params['subtitleslangs'] = langs subtitles = self.getSubtitles() for lang in langs: - self.assertTrue(subtitles.get(lang) is not None, u'Subtitles for \'%s\' not extracted' % lang) + self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang) class TestTedSubtitles(BaseTestSubtitles): @@ -185,13 +187,13 @@ class TestTedSubtitles(BaseTestSubtitles): self.assertTrue(len(subtitles.keys()) >= 28) def test_list_subtitles(self): - self.DL.expect_warning(u'Automatic Captions not supported by this server') + self.DL.expect_warning('Automatic Captions not supported by this server') self.DL.params['listsubtitles'] = True info_dict = self.getInfoDict() self.assertEqual(info_dict, None) def test_automatic_captions(self): - self.DL.expect_warning(u'Automatic Captions not supported by this server') + self.DL.expect_warning('Automatic Captions not supported by this server') self.DL.params['writeautomaticsub'] = True self.DL.params['subtitleslang'] = ['en'] subtitles = self.getSubtitles() @@ -203,7 +205,7 @@ class TestTedSubtitles(BaseTestSubtitles): self.DL.params['subtitleslangs'] = langs subtitles = self.getSubtitles() for lang in langs: - self.assertTrue(subtitles.get(lang) is not None, u'Subtitles for \'%s\' not extracted' % lang) + self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang) class TestBlipTVSubtitles(BaseTestSubtitles): @@ -211,13 +213,13 @@ class TestBlipTVSubtitles(BaseTestSubtitles): IE = BlipTVIE def test_list_subtitles(self): - self.DL.expect_warning(u'Automatic Captions not supported by this server') + self.DL.expect_warning('Automatic Captions not 
supported by this server') self.DL.params['listsubtitles'] = True info_dict = self.getInfoDict() self.assertEqual(info_dict, None) def test_allsubtitles(self): - self.DL.expect_warning(u'Automatic Captions not supported by this server') + self.DL.expect_warning('Automatic Captions not supported by this server') self.DL.params['writesubtitles'] = True self.DL.params['allsubtitles'] = True subtitles = self.getSubtitles() @@ -236,7 +238,7 @@ class TestVimeoSubtitles(BaseTestSubtitles): def test_subtitles(self): self.DL.params['writesubtitles'] = True subtitles = self.getSubtitles() - self.assertEqual(md5(subtitles['en']), '8062383cf4dec168fc40a088aa6d5888') + self.assertEqual(md5(subtitles['en']), '26399116d23ae3cf2c087cea94bc43b4') def test_subtitles_lang(self): self.DL.params['writesubtitles'] = True @@ -251,20 +253,20 @@ class TestVimeoSubtitles(BaseTestSubtitles): self.assertEqual(set(subtitles.keys()), set(['de', 'en', 'es', 'fr'])) def test_list_subtitles(self): - self.DL.expect_warning(u'Automatic Captions not supported by this server') + self.DL.expect_warning('Automatic Captions not supported by this server') self.DL.params['listsubtitles'] = True info_dict = self.getInfoDict() self.assertEqual(info_dict, None) def test_automatic_captions(self): - self.DL.expect_warning(u'Automatic Captions not supported by this server') + self.DL.expect_warning('Automatic Captions not supported by this server') self.DL.params['writeautomaticsub'] = True self.DL.params['subtitleslang'] = ['en'] subtitles = self.getSubtitles() self.assertTrue(len(subtitles.keys()) == 0) def test_nosubtitles(self): - self.DL.expect_warning(u'video doesn\'t have subtitles') + self.DL.expect_warning('video doesn\'t have subtitles') self.url = 'http://vimeo.com/56015672' self.DL.params['writesubtitles'] = True self.DL.params['allsubtitles'] = True @@ -277,7 +279,7 @@ class TestVimeoSubtitles(BaseTestSubtitles): self.DL.params['subtitleslangs'] = langs subtitles = self.getSubtitles() for lang in langs: - self.assertTrue(subtitles.get(lang) is not None, u'Subtitles for \'%s\' not extracted' % lang) + self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang) class TestWallaSubtitles(BaseTestSubtitles): @@ -285,13 +287,13 @@ class TestWallaSubtitles(BaseTestSubtitles): IE = WallaIE def test_list_subtitles(self): - self.DL.expect_warning(u'Automatic Captions not supported by this server') + self.DL.expect_warning('Automatic Captions not supported by this server') self.DL.params['listsubtitles'] = True info_dict = self.getInfoDict() self.assertEqual(info_dict, None) def test_allsubtitles(self): - self.DL.expect_warning(u'Automatic Captions not supported by this server') + self.DL.expect_warning('Automatic Captions not supported by this server') self.DL.params['writesubtitles'] = True self.DL.params['allsubtitles'] = True subtitles = self.getSubtitles() @@ -299,7 +301,7 @@ class TestWallaSubtitles(BaseTestSubtitles): self.assertEqual(md5(subtitles['heb']), 'e758c5d7cb982f6bef14f377ec7a3920') def test_nosubtitles(self): - self.DL.expect_warning(u'video doesn\'t have subtitles') + self.DL.expect_warning('video doesn\'t have subtitles') self.url = 'http://vod.walla.co.il/movie/2642630/one-direction-all-for-one' self.DL.params['writesubtitles'] = True self.DL.params['allsubtitles'] = True diff --git a/test/test_swfinterp.py b/test/test_swfinterp.py index b42cd74c7..9f18055e6 100644 --- a/test/test_swfinterp.py +++ b/test/test_swfinterp.py @@ -1,4 +1,5 @@ #!/usr/bin/env python +from __future__ 
import unicode_literals # Allow direct execution import os @@ -37,7 +38,9 @@ def _make_testfunc(testfile): or os.path.getmtime(swf_file) < os.path.getmtime(as_file)): # Recompile try: - subprocess.check_call(['mxmlc', '-output', swf_file, as_file]) + subprocess.check_call([ + 'mxmlc', '-output', swf_file, + '-static-link-runtime-shared-libraries', as_file]) except OSError as ose: if ose.errno == errno.ENOENT: print('mxmlc not found! Skipping test.') diff --git a/test/test_unicode_literals.py b/test/test_unicode_literals.py index a4ba7bad0..19813e034 100644 --- a/test/test_unicode_literals.py +++ b/test/test_unicode_literals.py @@ -1,5 +1,11 @@ from __future__ import unicode_literals +# Allow direct execution +import os +import sys +import unittest +sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + import io import os import re @@ -9,14 +15,16 @@ rootDir = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) IGNORED_FILES = [ 'setup.py', # http://bugs.python.org/issue13943 + 'conf.py', + 'buildserver.py', ] +from test.helper import assertRegexpMatches + + class TestUnicodeLiterals(unittest.TestCase): def test_all_files(self): - print('Skipping this test (not yet fully implemented)') - return - for dirpath, _, filenames in os.walk(rootDir): for basename in filenames: if not basename.endswith('.py'): @@ -30,10 +38,11 @@ class TestUnicodeLiterals(unittest.TestCase): if "'" not in code and '"' not in code: continue - imps = 'from __future__ import unicode_literals' - self.assertTrue( - imps in code, - ' %s missing in %s' % (imps, fn)) + assertRegexpMatches( + self, + code, + r'(?:(?:#.*?|\s*)\n)*from __future__ import (?:[a-z_]+,\s*)*unicode_literals', + 'unicode_literals import missing in %s' % fn) m = re.search(r'(?<=\s)u[\'"](?!\)|,|$)', code) if m is not None: diff --git a/test/test_utils.py b/test/test_utils.py index bcca0efea..d42df6d96 100644 --- a/test/test_utils.py +++ b/test/test_utils.py @@ -16,11 +16,11 @@ import json import xml.etree.ElementTree from youtube_dl.utils import ( + clean_html, DateRange, encodeFilename, find_xpath_attr, fix_xml_ampersands, - get_meta_content, orderedSet, OnDemandPagedList, InAdvancePagedList, @@ -45,6 +45,10 @@ from youtube_dl.utils import ( escape_rfc3986, escape_url, js_to_json, + intlist_to_bytes, + args_to_str, + parse_filesize, + version_tuple, ) @@ -117,16 +121,16 @@ class TestUtil(unittest.TestCase): self.assertEqual(orderedSet([1, 1, 2, 3, 4, 4, 5, 6, 7, 3, 5]), [1, 2, 3, 4, 5, 6, 7]) self.assertEqual(orderedSet([]), []) self.assertEqual(orderedSet([1]), [1]) - #keep the list ordered + # keep the list ordered self.assertEqual(orderedSet([135, 1, 1, 1]), [135, 1]) def test_unescape_html(self): self.assertEqual(unescapeHTML('%20;'), '%20;') self.assertEqual( unescapeHTML('é'), 'é') - + def test_daterange(self): - _20century = DateRange("19000101","20000101") + _20century = DateRange("19000101", "20000101") self.assertFalse("17890714" in _20century) _ac = DateRange("00010101") self.assertTrue("19690721" in _ac) @@ -140,6 +144,9 @@ class TestUtil(unittest.TestCase): self.assertEqual(unified_strdate('2012/10/11 01:56:38 +0000'), '20121011') self.assertEqual(unified_strdate('1968-12-10'), '19681210') self.assertEqual(unified_strdate('28/01/2014 21:00:00 +0100'), '20140128') + self.assertEqual( + unified_strdate('11/26/2014 11:30:00 AM PST', day_first=False), + '20141126') def test_find_xpath_attr(self): testxml = ''' @@ -154,17 +161,6 @@ class TestUtil(unittest.TestCase): self.assertEqual(find_xpath_attr(doc, 
'.//node', 'x', 'a'), doc[1]) self.assertEqual(find_xpath_attr(doc, './/node', 'y', 'c'), doc[2]) - def test_meta_parser(self): - testhtml = ''' - - - - - ''' - get_meta = lambda name: get_meta_content(name, testhtml) - self.assertEqual(get_meta('description'), 'foo & bar') - self.assertEqual(get_meta('author'), 'Plato') - def test_xpath_with_ns(self): testxml = ''' @@ -179,7 +175,7 @@ class TestUtil(unittest.TestCase): self.assertEqual(find('media:song/url').text, 'http://server.com/download.mp3') def test_smuggle_url(self): - data = {u"ö": u"ö", u"abc": [3]} + data = {"ö": "ö", "abc": [3]} url = 'https://foo.bar/baz?x=y#a' smug_url = smuggle_url(url, data) unsmug_url, unsmug_data = unsmuggle_url(smug_url) @@ -227,6 +223,10 @@ class TestUtil(unittest.TestCase): self.assertEqual(parse_duration('0m0s'), 0) self.assertEqual(parse_duration('0s'), 0) self.assertEqual(parse_duration('01:02:03.05'), 3723.05) + self.assertEqual(parse_duration('T30M38S'), 1838) + self.assertEqual(parse_duration('5 s'), 5) + self.assertEqual(parse_duration('3 min'), 180) + self.assertEqual(parse_duration('2.5 hours'), 9000) def test_fix_xml_ampersands(self): self.assertEqual( @@ -286,12 +286,17 @@ class TestUtil(unittest.TestCase): self.assertEqual(parse_iso8601('2014-03-23T23:04:26+0100'), 1395612266) self.assertEqual(parse_iso8601('2014-03-23T22:04:26+0000'), 1395612266) self.assertEqual(parse_iso8601('2014-03-23T22:04:26Z'), 1395612266) + self.assertEqual(parse_iso8601('2014-03-23T22:04:26.1234Z'), 1395612266) def test_strip_jsonp(self): stripped = strip_jsonp('cb ([ {"id":"532cb",\n\n\n"x":\n3}\n]\n);') d = json.loads(stripped) self.assertEqual(d, [{"id": "532cb", "x": 3}]) + stripped = strip_jsonp('parseMetadata({"STATUS":"OK"})\n\n\n//epc') + d = json.loads(stripped) + self.assertEqual(d, {'STATUS': 'OK'}) + def test_uppercase_escape(self): self.assertEqual(uppercase_escape('aä'), 'aä') self.assertEqual(uppercase_escape('\\U0001d550'), '𝕐') @@ -355,5 +360,35 @@ class TestUtil(unittest.TestCase): on = js_to_json('{"abc": true}') self.assertEqual(json.loads(on), {'abc': True}) + def test_clean_html(self): + self.assertEqual(clean_html('a:\nb'), 'a: b') + self.assertEqual(clean_html('a:\n "b"'), 'a: "b"') + + def test_intlist_to_bytes(self): + self.assertEqual( + intlist_to_bytes([0, 1, 127, 128, 255]), + b'\x00\x01\x7f\x80\xff') + + def test_args_to_str(self): + self.assertEqual( + args_to_str(['foo', 'ba/r', '-baz', '2 be', '']), + 'foo ba/r -baz \'2 be\' \'\'' + ) + + def test_parse_filesize(self): + self.assertEqual(parse_filesize(None), None) + self.assertEqual(parse_filesize(''), None) + self.assertEqual(parse_filesize('91 B'), 91) + self.assertEqual(parse_filesize('foobar'), None) + self.assertEqual(parse_filesize('2 MiB'), 2097152) + self.assertEqual(parse_filesize('5 GB'), 5000000000) + self.assertEqual(parse_filesize('1.2Tb'), 1200000000000) + self.assertEqual(parse_filesize('1,24 KB'), 1240) + + def test_version_tuple(self): + self.assertEqual(version_tuple('1'), (1,)) + self.assertEqual(version_tuple('10.23.344'), (10, 23, 344)) + self.assertEqual(version_tuple('10.1-6'), (10, 1, 6)) # avconv style + if __name__ == '__main__': unittest.main() diff --git a/test/test_write_annotations.py b/test/test_write_annotations.py index eac53b285..780636c77 100644 --- a/test/test_write_annotations.py +++ b/test/test_write_annotations.py @@ -1,5 +1,6 @@ #!/usr/bin/env python # coding: utf-8 +from __future__ import unicode_literals # Allow direct execution import os @@ -31,19 +32,18 @@ params = get_params({ }) - 
TEST_ID = 'gr51aVj-mLg' ANNOTATIONS_FILE = TEST_ID + '.flv.annotations.xml' EXPECTED_ANNOTATIONS = ['Speech bubble', 'Note', 'Title', 'Spotlight', 'Label'] + class TestAnnotations(unittest.TestCase): def setUp(self): # Clear old files self.tearDown() - def test_info_json(self): - expected = list(EXPECTED_ANNOTATIONS) #Two annotations could have the same text. + expected = list(EXPECTED_ANNOTATIONS) # Two annotations could have the same text. ie = youtube_dl.extractor.YoutubeIE() ydl = YoutubeDL(params) ydl.add_info_extractor(ie) @@ -51,7 +51,7 @@ class TestAnnotations(unittest.TestCase): self.assertTrue(os.path.exists(ANNOTATIONS_FILE)) annoxml = None with io.open(ANNOTATIONS_FILE, 'r', encoding='utf-8') as annof: - annoxml = xml.etree.ElementTree.parse(annof) + annoxml = xml.etree.ElementTree.parse(annof) self.assertTrue(annoxml is not None, 'Failed to parse annotations XML') root = annoxml.getroot() self.assertEqual(root.tag, 'document') @@ -59,18 +59,17 @@ class TestAnnotations(unittest.TestCase): self.assertEqual(annotationsTag.tag, 'annotations') annotations = annotationsTag.findall('annotation') - #Not all the annotations have TEXT children and the annotations are returned unsorted. + # Not all the annotations have TEXT children and the annotations are returned unsorted. for a in annotations: - self.assertEqual(a.tag, 'annotation') - if a.get('type') == 'text': - textTag = a.find('TEXT') - text = textTag.text - self.assertTrue(text in expected) #assertIn only added in python 2.7 - #remove the first occurance, there could be more than one annotation with the same text - expected.remove(text) - #We should have seen (and removed) all the expected annotation texts. + self.assertEqual(a.tag, 'annotation') + if a.get('type') == 'text': + textTag = a.find('TEXT') + text = textTag.text + self.assertTrue(text in expected) # assertIn only added in python 2.7 + # remove the first occurance, there could be more than one annotation with the same text + expected.remove(text) + # We should have seen (and removed) all the expected annotation texts. self.assertEqual(len(expected), 0, 'Not all expected annotations were found.') - def tearDown(self): try_rm(ANNOTATIONS_FILE) diff --git a/test/test_write_info_json.py b/test/test_write_info_json.py index 90426a559..0396ef262 100644 --- a/test/test_write_info_json.py +++ b/test/test_write_info_json.py @@ -1,5 +1,6 @@ #!/usr/bin/env python # coding: utf-8 +from __future__ import unicode_literals # Allow direct execution import os @@ -32,7 +33,7 @@ params = get_params({ TEST_ID = 'BaW_jenozKc' INFO_JSON_FILE = TEST_ID + '.info.json' DESCRIPTION_FILE = TEST_ID + '.mp4.description' -EXPECTED_DESCRIPTION = u'''test chars: "'/\ä↭𝕐 +EXPECTED_DESCRIPTION = '''test chars: "'/\ä↭𝕐 test URL: https://github.com/rg3/youtube-dl/issues/1892 This is a test video for youtube-dl. 
@@ -53,11 +54,11 @@ class TestInfoJSON(unittest.TestCase): self.assertTrue(os.path.exists(INFO_JSON_FILE)) with io.open(INFO_JSON_FILE, 'r', encoding='utf-8') as jsonf: jd = json.load(jsonf) - self.assertEqual(jd['upload_date'], u'20121002') + self.assertEqual(jd['upload_date'], '20121002') self.assertEqual(jd['description'], EXPECTED_DESCRIPTION) self.assertEqual(jd['id'], TEST_ID) self.assertEqual(jd['extractor'], 'youtube') - self.assertEqual(jd['title'], u'''youtube-dl test video "'/\ä↭𝕐''') + self.assertEqual(jd['title'], '''youtube-dl test video "'/\ä↭𝕐''') self.assertEqual(jd['uploader'], 'Philipp Hagemeister') self.assertTrue(os.path.exists(DESCRIPTION_FILE)) diff --git a/test/test_youtube_lists.py b/test/test_youtube_lists.py index 410f9edc2..c889b6f15 100644 --- a/test/test_youtube_lists.py +++ b/test/test_youtube_lists.py @@ -1,4 +1,5 @@ #!/usr/bin/env python +from __future__ import unicode_literals # Allow direct execution import os @@ -12,10 +13,6 @@ from test.helper import FakeYDL from youtube_dl.extractor import ( YoutubePlaylistIE, YoutubeIE, - YoutubeChannelIE, - YoutubeShowIE, - YoutubeTopListIE, - YoutubeSearchURLIE, ) @@ -31,7 +28,7 @@ class TestYoutubeLists(unittest.TestCase): result = ie.extract('https://www.youtube.com/watch?v=FXxLjLQi3Fg&list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re') self.assertEqual(result['_type'], 'url') self.assertEqual(YoutubeIE().extract_id(result['url']), 'FXxLjLQi3Fg') - + def test_youtube_course(self): dl = FakeYDL() ie = YoutubePlaylistIE(dl) diff --git a/test/test_youtube_signature.py b/test/test_youtube_signature.py index df2cb09f2..13d228cd8 100644 --- a/test/test_youtube_signature.py +++ b/test/test_youtube_signature.py @@ -14,7 +14,7 @@ import re import string from youtube_dl.extractor import YoutubeIE -from youtube_dl.utils import compat_str, compat_urlretrieve +from youtube_dl.compat import compat_str, compat_urlretrieve _TESTS = [ ( diff --git a/youtube_dl/YoutubeDL.py b/youtube_dl/YoutubeDL.py index dec0e20e7..578c8daf2 100755 --- a/youtube_dl/YoutubeDL.py +++ b/youtube_dl/YoutubeDL.py @@ -7,6 +7,7 @@ import collections import datetime import errno import io +import itertools import json import locale import os @@ -22,12 +23,15 @@ import traceback if os.name == 'nt': import ctypes -from .utils import ( +from .compat import ( compat_cookiejar, + compat_expanduser, compat_http_client, compat_str, compat_urllib_error, compat_urllib_request, +) +from .utils import ( escape_url, ContentTooShortError, date_from_str, @@ -57,11 +61,13 @@ from .utils import ( write_string, YoutubeDLHandler, prepend_extension, + args_to_str, ) from .cache import Cache from .extractor import get_info_extractor, gen_extractors from .downloader import get_suitable_downloader -from .postprocessor import FFmpegMergerPP +from .downloader.rtmp import rtmpdump_version +from .postprocessor import FFmpegMergerPP, FFmpegPostProcessor from .version import __version__ @@ -107,6 +113,8 @@ class YoutubeDL(object): forcefilename: Force printing final filename. forceduration: Force printing duration. forcejson: Force printing info_dict as JSON. + dump_single_json: Force printing the info_dict of the whole playlist + (or video) as a single JSON line. simulate: Do not download the video files. format: Video format code. format_limit: Highest quality format to try. @@ -116,6 +124,7 @@ class YoutubeDL(object): nooverwrites: Prevent overwriting files. playliststart: Playlist item to start at. playlistend: Playlist item to end at. 
+ playlistreverse: Download playlist items in reverse order. matchtitle: Download only matching titles. rejecttitle: Reject downloads for matching titles. logger: Log messages to a logging.Logger instance. @@ -165,6 +174,8 @@ class YoutubeDL(object): 'auto' for elaborate guessing encoding: Use this encoding instead of the system-specified. extract_flat: Do not resolve URLs, return the immediate result. + Pass in 'in_playlist' to only show this behavior for + playlist items. The following parameters are not used by YoutubeDL itself, they are used by the FileDownloader: @@ -184,7 +195,7 @@ class YoutubeDL(object): _num_downloads = None _screen_file = None - def __init__(self, params=None): + def __init__(self, params=None, auto_init=True): """Create a FileDownloader object with the given options.""" if params is None: params = {} @@ -241,6 +252,26 @@ class YoutubeDL(object): self._setup_opener() + if auto_init: + self.print_debug_header() + self.add_default_info_extractors() + + def warn_if_short_id(self, argv): + # short YouTube ID starting with dash? + idxs = [ + i for i, a in enumerate(argv) + if re.match(r'^-[0-9A-Za-z_-]{10}$', a)] + if idxs: + correct_argv = ( + ['youtube-dl'] + + [a for i, a in enumerate(argv) if i not in idxs] + + ['--'] + [argv[i] for i in idxs] + ) + self.report_warning( + 'Long argument string detected. ' + 'Use -- to separate parameters and URLs, like this:\n%s\n' % + args_to_str(correct_argv)) + def add_info_extractor(self, ie): """Add an InfoExtractor object to the end of the list.""" self._ies.append(ie) @@ -285,7 +316,7 @@ class YoutubeDL(object): self._output_process.stdin.write((message + '\n').encode('utf-8')) self._output_process.stdin.flush() res = ''.join(self._output_channel.readline().decode('utf-8') - for _ in range(line_count)) + for _ in range(line_count)) return res[:-len('\n')] def to_screen(self, message, skip_eol=False): @@ -447,7 +478,7 @@ class YoutubeDL(object): template_dict = collections.defaultdict(lambda: 'NA', template_dict) outtmpl = self.params.get('outtmpl', DEFAULT_OUTTMPL) - tmpl = os.path.expanduser(outtmpl) + tmpl = compat_expanduser(outtmpl) filename = tmpl % template_dict return filename except ValueError as err: @@ -522,7 +553,7 @@ class YoutubeDL(object): try: ie_result = ie.extract(url) - if ie_result is None: # Finished already (backwards compatibility; listformats and friends should be moved here) + if ie_result is None: # Finished already (backwards compatibility; listformats and friends should be moved here) break if isinstance(ie_result, list): # Backwards compatibility: old IE result format @@ -535,7 +566,7 @@ class YoutubeDL(object): return self.process_ie_result(ie_result, download, extra_info) else: return ie_result - except ExtractorError as de: # An error we somewhat expected + except ExtractorError as de: # An error we somewhat expected self.report_error(compat_str(de), de.format_traceback()) break except MaxDownloadsReached: @@ -568,8 +599,12 @@ class YoutubeDL(object): result_type = ie_result.get('_type', 'video') - if self.params.get('extract_flat', False): - if result_type in ('url', 'url_transparent'): + if result_type in ('url', 'url_transparent'): + extract_flat = self.params.get('extract_flat', False) + if ((extract_flat == 'in_playlist' and 'playlist' in extra_info) or + extract_flat is True): + if self.params.get('forcejson', False): + self.to_stdout(json.dumps(ie_result)) return ie_result if result_type == 'video': @@ -588,27 +623,19 @@ class YoutubeDL(object): ie_result['url'], 
ie_key=ie_result.get('ie_key'), extra_info=extra_info, download=False, process=False) - def make_result(embedded_info): - new_result = ie_result.copy() - for f in ('_type', 'url', 'ext', 'player_url', 'formats', - 'entries', 'ie_key', 'duration', - 'subtitles', 'annotations', 'format', - 'thumbnail', 'thumbnails'): - if f in new_result: - del new_result[f] - if f in embedded_info: - new_result[f] = embedded_info[f] - return new_result - new_result = make_result(info) + force_properties = dict( + (k, v) for k, v in ie_result.items() if v is not None) + for f in ('_type', 'url'): + if f in force_properties: + del force_properties[f] + new_result = info.copy() + new_result.update(force_properties) assert new_result.get('_type') != 'url_transparent' - if new_result.get('_type') == 'compat_list': - new_result['entries'] = [ - make_result(e) for e in new_result['entries']] return self.process_ie_result( new_result, download=download, extra_info=extra_info) - elif result_type == 'playlist': + elif result_type == 'playlist' or result_type == 'multi_video': # We process each entry in the playlist playlist = ie_result.get('title', None) or ie_result.get('id', None) self.to_screen('[download] Downloading playlist: %s' % playlist) @@ -621,27 +648,39 @@ class YoutubeDL(object): if playlistend == -1: playlistend = None - if isinstance(ie_result['entries'], list): - n_all_entries = len(ie_result['entries']) - entries = ie_result['entries'][playliststart:playlistend] + ie_entries = ie_result['entries'] + if isinstance(ie_entries, list): + n_all_entries = len(ie_entries) + entries = ie_entries[playliststart:playlistend] n_entries = len(entries) self.to_screen( "[%s] playlist %s: Collected %d video ids (downloading %d of them)" % (ie_result['extractor'], playlist, n_all_entries, n_entries)) - else: - assert isinstance(ie_result['entries'], PagedList) - entries = ie_result['entries'].getslice( + elif isinstance(ie_entries, PagedList): + entries = ie_entries.getslice( playliststart, playlistend) n_entries = len(entries) self.to_screen( "[%s] playlist %s: Downloading %d videos" % (ie_result['extractor'], playlist, n_entries)) + else: # iterable + entries = list(itertools.islice( + ie_entries, playliststart, playlistend)) + n_entries = len(entries) + self.to_screen( + "[%s] playlist %s: Downloading %d videos" % + (ie_result['extractor'], playlist, n_entries)) + + if self.params.get('playlistreverse', False): + entries = entries[::-1] for i, entry in enumerate(entries, 1): self.to_screen('[download] Downloading video #%s of %s' % (i, n_entries)) extra = { 'n_entries': n_entries, 'playlist': playlist, + 'playlist_id': ie_result.get('id'), + 'playlist_title': ie_result.get('title'), 'playlist_index': i + playliststart, 'extractor': ie_result['extractor'], 'webpage_url': ie_result['webpage_url'], @@ -661,14 +700,20 @@ class YoutubeDL(object): ie_result['entries'] = playlist_results return ie_result elif result_type == 'compat_list': + self.report_warning( + 'Extractor %s returned a compat_list result. ' + 'It needs to be updated.' 
% ie_result.get('extractor')) + def _fixup(r): - self.add_extra_info(r, + self.add_extra_info( + r, { 'extractor': ie_result['extractor'], 'webpage_url': ie_result['webpage_url'], 'webpage_url_basename': url_basename(ie_result['webpage_url']), 'extractor_key': ie_result['extractor_key'], - }) + } + ) return r ie_result['entries'] = [ self.process_ie_result(_fixup(r), download, extra_info) @@ -746,6 +791,10 @@ class YoutubeDL(object): info_dict['display_id'] = info_dict['id'] if info_dict.get('upload_date') is None and info_dict.get('timestamp') is not None: + # Working around negative timestamps in Windows + # (see http://bugs.python.org/issue1646728) + if info_dict['timestamp'] < 0 and os.name == 'nt': + info_dict['timestamp'] = 0 upload_date = datetime.datetime.utcfromtimestamp( info_dict['timestamp']) info_dict['upload_date'] = upload_date.strftime('%Y%m%d') @@ -818,8 +867,15 @@ class YoutubeDL(object): # Two formats have been requested like '137+139' format_1, format_2 = rf.split('+') formats_info = (self.select_format(format_1, formats), - self.select_format(format_2, formats)) + self.select_format(format_2, formats)) if all(formats_info): + # The first format must contain the video and the + # second the audio + if formats_info[0].get('vcodec') == 'none': + self.report_error('The first format must ' + 'contain the video, try using ' + '"-f %s+%s"' % (format_2, format_1)) + return selected_format = { 'requested_formats': formats_info, 'format': rf, @@ -882,8 +938,12 @@ class YoutubeDL(object): if self.params.get('forceid', False): self.to_stdout(info_dict['id']) if self.params.get('forceurl', False): - # For RTMP URLs, also include the playpath - self.to_stdout(info_dict['url'] + info_dict.get('play_path', '')) + if info_dict.get('requested_formats') is not None: + for f in info_dict['requested_formats']: + self.to_stdout(f['url'] + f.get('play_path', '')) + else: + # For RTMP URLs, also include the playpath + self.to_stdout(info_dict['url'] + info_dict.get('play_path', '')) if self.params.get('forcethumbnail', False) and info_dict.get('thumbnail') is not None: self.to_stdout(info_dict['thumbnail']) if self.params.get('forcedescription', False) and info_dict.get('description') is not None: @@ -897,6 +957,8 @@ class YoutubeDL(object): if self.params.get('forcejson', False): info_dict['_filename'] = filename self.to_stdout(json.dumps(info_dict)) + if self.params.get('dump_single_json', False): + info_dict['_filename'] = filename # Do nothing else if in simulate mode if self.params.get('simulate', False): @@ -962,7 +1024,7 @@ class YoutubeDL(object): else: self.to_screen('[info] Writing video subtitles to: ' + sub_filename) with io.open(encodeFilename(sub_filename), 'w', encoding='utf-8') as subfile: - subfile.write(sub) + subfile.write(sub) except (OSError, IOError): self.report_error('Cannot write subtitles file ' + sub_filename) return @@ -974,7 +1036,7 @@ class YoutubeDL(object): else: self.to_screen('[info] Writing video description metadata as JSON to: ' + infofn) try: - write_json_file(info_dict, encodeFilename(infofn)) + write_json_file(info_dict, infofn) except (OSError, IOError): self.report_error('Cannot write metadata to JSON file ' + infofn) return @@ -994,10 +1056,10 @@ class YoutubeDL(object): with open(thumb_filename, 'wb') as thumbf: shutil.copyfileobj(uf, thumbf) self.to_screen('[%s] %s: Writing thumbnail to: %s' % - (info_dict['extractor'], info_dict['id'], thumb_filename)) + (info_dict['extractor'], info_dict['id'], thumb_filename)) except 
(compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err: self.report_warning('Unable to download thumbnail "%s": %s' % - (info_dict['thumbnail'], compat_str(err))) + (info_dict['thumbnail'], compat_str(err))) if not self.params.get('skip_download', False): if self.params.get('nooverwrites', False) and os.path.exists(encodeFilename(filename)): @@ -1015,11 +1077,11 @@ class YoutubeDL(object): downloaded = [] success = True merger = FFmpegMergerPP(self, not self.params.get('keepvideo')) - if not merger._get_executable(): + if not merger._executable: postprocessors = [] self.report_warning('You have requested multiple ' - 'formats but ffmpeg or avconv are not installed.' - ' The formats won\'t be merged') + 'formats but ffmpeg or avconv are not installed.' + ' The formats won\'t be merged') else: postprocessors = [merger] for f in info_dict['requested_formats']: @@ -1063,13 +1125,16 @@ class YoutubeDL(object): for url in url_list: try: - #It also downloads the videos - self.extract_info(url) + # It also downloads the videos + res = self.extract_info(url) except UnavailableVideoError: self.report_error('unable to download video') except MaxDownloadsReached: self.to_screen('[info] Maximum number of downloaded files reached.') raise + else: + if self.params.get('dump_single_json', False): + self.to_stdout(json.dumps(res)) return self._download_retcode @@ -1193,6 +1258,8 @@ class YoutubeDL(object): res += 'video@' if fdict.get('vbr') is not None: res += '%4dk' % fdict['vbr'] + if fdict.get('fps') is not None: + res += ', %sfps' % fdict['fps'] if fdict.get('acodec') is not None: if res: res += ', ' @@ -1274,11 +1341,13 @@ class YoutubeDL(object): self.report_warning( 'Your Python is broken! Update to a newer and supported version') + stdout_encoding = getattr( + sys.stdout, 'encoding', 'missing (%s)' % type(sys.stdout).__name__) encoding_str = ( '[debug] Encodings: locale %s, fs %s, out %s, pref %s\n' % ( locale.getpreferredencoding(), sys.getfilesystemencoding(), - sys.stdout.encoding, + stdout_encoding, self.get_encoding())) write_string(encoding_str, encoding=None) @@ -1297,8 +1366,19 @@ class YoutubeDL(object): sys.exc_clear() except: pass - self._write_string('[debug] Python version %s - %s' % - (platform.python_version(), platform_name()) + '\n') + self._write_string('[debug] Python version %s - %s\n' % ( + platform.python_version(), platform_name())) + + exe_versions = FFmpegPostProcessor.get_versions() + exe_versions['rtmpdump'] = rtmpdump_version() + exe_str = ', '.join( + '%s %s' % (exe, v) + for exe, v in sorted(exe_versions.items()) + if v + ) + if not exe_str: + exe_str = 'none' + self._write_string('[debug] exe versions: %s\n' % exe_str) proxy_map = {} for handler in self._opener.handlers: diff --git a/youtube_dl/__init__.py b/youtube_dl/__init__.py index 7f2b4dfcc..70c4f25b1 100644 --- a/youtube_dl/__init__.py +++ b/youtube_dl/__init__.py @@ -1,85 +1,7 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -__authors__ = ( - 'Ricardo Garcia Gonzalez', - 'Danny Colligan', - 'Benjamin Johnson', - 'Vasyl\' Vavrychuk', - 'Witold Baryluk', - 'Paweł Paprota', - 'Gergely Imreh', - 'Rogério Brito', - 'Philipp Hagemeister', - 'Sören Schulze', - 'Kevin Ngo', - 'Ori Avtalion', - 'shizeeg', - 'Filippo Valsorda', - 'Christian Albrecht', - 'Dave Vasilevsky', - 'Jaime Marquínez Ferrándiz', - 'Jeff Crouse', - 'Osama Khalid', - 'Michael Walter', - 'M. 
Yasoob Ullah Khalid', - 'Julien Fraichard', - 'Johny Mo Swag', - 'Axel Noack', - 'Albert Kim', - 'Pierre Rudloff', - 'Huarong Huo', - 'Ismael Mejía', - 'Steffan \'Ruirize\' James', - 'Andras Elso', - 'Jelle van der Waa', - 'Marcin Cieślak', - 'Anton Larionov', - 'Takuya Tsuchida', - 'Sergey M.', - 'Michael Orlitzky', - 'Chris Gahan', - 'Saimadhav Heblikar', - 'Mike Col', - 'Oleg Prutz', - 'pulpe', - 'Andreas Schmitz', - 'Michael Kaiser', - 'Niklas Laxström', - 'David Triendl', - 'Anthony Weems', - 'David Wagner', - 'Juan C. Olivares', - 'Mattias Harrysson', - 'phaer', - 'Sainyam Kapoor', - 'Nicolas Évrard', - 'Jason Normore', - 'Hoje Lee', - 'Adam Thalhammer', - 'Georg Jähnig', - 'Ralf Haring', - 'Koki Takahashi', - 'Ariset Llerena', - 'Adam Malcontenti-Wilson', - 'Tobias Bell', - 'Naglis Jonaitis', - 'Charles Chen', - 'Hassaan Ali', - 'Dobrosław Żybort', - 'David Fabijan', - 'Sebastian Haas', - 'Alexander Kirk', - 'Erik Johnson', - 'Keith Beckman', - 'Ole Ernst', - 'Aaron McDaniel (mcd1992)', - 'Magnus Kolstad', - 'Hari Padmanaban', - 'Carlos Ramos', - '5moufl', - 'lenaten', -) +from __future__ import unicode_literals __license__ = 'Public Domain' @@ -93,9 +15,13 @@ import sys from .options import ( parseOpts, ) -from .utils import ( +from .compat import ( + compat_expanduser, compat_getpass, compat_print, + workaround_optparse_bug9161, +) +from .utils import ( DateRange, DEFAULT_OUTTMPL, decodeOption, @@ -132,7 +58,9 @@ def _real_main(argv=None): # https://github.com/rg3/youtube-dl/issues/820 codecs.register(lambda name: codecs.lookup('utf-8') if name == 'cp65001' else None) - setproctitle(u'youtube-dl') + workaround_optparse_bug9161() + + setproctitle('youtube-dl') parser, opts, args = parseOpts(argv) @@ -148,10 +76,10 @@ def _real_main(argv=None): if opts.headers is not None: for h in opts.headers: if h.find(':', 1) < 0: - parser.error(u'wrong header formatting, it should be key:value, not "%s"'%h) + parser.error('wrong header formatting, it should be key:value, not "%s"' % h) key, value = h.split(':', 2) if opts.verbose: - write_string(u'[debug] Adding header from command line option %s:%s\n'%(key, value)) + write_string('[debug] Adding header from command line option %s:%s\n' % (key, value)) std_headers[key] = value # Dump user agent @@ -169,9 +97,9 @@ def _real_main(argv=None): batchfd = io.open(opts.batchfile, 'r', encoding='utf-8', errors='ignore') batch_urls = read_batch_urls(batchfd) if opts.verbose: - write_string(u'[debug] Batch file urls: ' + repr(batch_urls) + u'\n') + write_string('[debug] Batch file urls: ' + repr(batch_urls) + '\n') except IOError: - sys.exit(u'ERROR: batch file could not be read') + sys.exit('ERROR: batch file could not be read') all_urls = batch_urls + args all_urls = [url.strip() for url in all_urls] _enc = preferredencoding() @@ -184,7 +112,7 @@ def _real_main(argv=None): compat_print(ie.IE_NAME + (' (CURRENTLY BROKEN)' if not ie._WORKING else '')) matchedUrls = [url for url in all_urls if ie.suitable(url)] for mu in matchedUrls: - compat_print(u' ' + mu) + compat_print(' ' + mu) sys.exit(0) if opts.list_extractor_descriptions: for ie in sorted(extractors, key=lambda ie: ie.IE_NAME.lower()): @@ -194,69 +122,66 @@ def _real_main(argv=None): if desc is False: continue if hasattr(ie, 'SEARCH_KEY'): - _SEARCHES = (u'cute kittens', u'slithering pythons', u'falling cat', u'angry poodle', u'purple fish', u'running tortoise', u'sleeping bunny') - _COUNTS = (u'', u'5', u'10', u'all') - desc += u' (Example: "%s%s:%s" )' % (ie.SEARCH_KEY,
random.choice(_COUNTS), random.choice(_SEARCHES)) + _SEARCHES = ('cute kittens', 'slithering pythons', 'falling cat', 'angry poodle', 'purple fish', 'running tortoise', 'sleeping bunny') + _COUNTS = ('', '5', '10', 'all') + desc += ' (Example: "%s%s:%s" )' % (ie.SEARCH_KEY, random.choice(_COUNTS), random.choice(_SEARCHES)) compat_print(desc) sys.exit(0) - # Conflicting, missing and erroneous options if opts.usenetrc and (opts.username is not None or opts.password is not None): - parser.error(u'using .netrc conflicts with giving username/password') + parser.error('using .netrc conflicts with giving username/password') if opts.password is not None and opts.username is None: - parser.error(u'account username missing\n') + parser.error('account username missing\n') if opts.outtmpl is not None and (opts.usetitle or opts.autonumber or opts.useid): - parser.error(u'using output template conflicts with using title, video ID or auto number') + parser.error('using output template conflicts with using title, video ID or auto number') if opts.usetitle and opts.useid: - parser.error(u'using title conflicts with using video ID') + parser.error('using title conflicts with using video ID') if opts.username is not None and opts.password is None: - opts.password = compat_getpass(u'Type account password and press [Return]: ') + opts.password = compat_getpass('Type account password and press [Return]: ') if opts.ratelimit is not None: numeric_limit = FileDownloader.parse_bytes(opts.ratelimit) if numeric_limit is None: - parser.error(u'invalid rate limit specified') + parser.error('invalid rate limit specified') opts.ratelimit = numeric_limit if opts.min_filesize is not None: numeric_limit = FileDownloader.parse_bytes(opts.min_filesize) if numeric_limit is None: - parser.error(u'invalid min_filesize specified') + parser.error('invalid min_filesize specified') opts.min_filesize = numeric_limit if opts.max_filesize is not None: numeric_limit = FileDownloader.parse_bytes(opts.max_filesize) if numeric_limit is None: - parser.error(u'invalid max_filesize specified') + parser.error('invalid max_filesize specified') opts.max_filesize = numeric_limit if opts.retries is not None: try: opts.retries = int(opts.retries) except (TypeError, ValueError): - parser.error(u'invalid retry count specified') + parser.error('invalid retry count specified') if opts.buffersize is not None: numeric_buffersize = FileDownloader.parse_bytes(opts.buffersize) if numeric_buffersize is None: - parser.error(u'invalid buffer size specified') + parser.error('invalid buffer size specified') opts.buffersize = numeric_buffersize if opts.playliststart <= 0: - raise ValueError(u'Playlist start must be positive') + raise ValueError('Playlist start must be positive') if opts.playlistend not in (-1, None) and opts.playlistend < opts.playliststart: - raise ValueError(u'Playlist end must be greater than playlist start') + raise ValueError('Playlist end must be greater than playlist start') if opts.extractaudio: if opts.audioformat not in ['best', 'aac', 'mp3', 'm4a', 'opus', 'vorbis', 'wav']: - parser.error(u'invalid audio format specified') + parser.error('invalid audio format specified') if opts.audioquality: opts.audioquality = opts.audioquality.strip('k').strip('K') if not opts.audioquality.isdigit(): - parser.error(u'invalid audio quality specified') + parser.error('invalid audio quality specified') if opts.recodevideo is not None: if opts.recodevideo not in ['mp4', 'flv', 'webm', 'ogg', 'mkv']: - parser.error(u'invalid video recode format 
specified') + parser.error('invalid video recode format specified') if opts.date is not None: date = DateRange.day(opts.date) else: date = DateRange(opts.dateafter, opts.datebefore) - if opts.default_search not in ('auto', 'auto_warning', 'error', 'fixup_error', None) and ':' not in opts.default_search: - parser.error(u'--default-search invalid; did you forget a colon (:) at the end?') # Do not download videos when there are audio-only formats if opts.extractaudio and not opts.keepvideo and opts.format is None: @@ -264,28 +189,28 @@ def _real_main(argv=None): # --all-sub automatically sets --write-sub if --write-auto-sub is not given # this was the old behaviour if only --all-sub was given. - if opts.allsubtitles and (opts.writeautomaticsub == False): + if opts.allsubtitles and not opts.writeautomaticsub: opts.writesubtitles = True if sys.version_info < (3,): # In Python 2, sys.argv is a bytestring (also note http://bugs.python.org/issue2128 for Windows systems) if opts.outtmpl is not None: opts.outtmpl = opts.outtmpl.decode(preferredencoding()) - outtmpl =((opts.outtmpl is not None and opts.outtmpl) - or (opts.format == '-1' and opts.usetitle and u'%(title)s-%(id)s-%(format)s.%(ext)s') - or (opts.format == '-1' and u'%(id)s-%(format)s.%(ext)s') - or (opts.usetitle and opts.autonumber and u'%(autonumber)s-%(title)s-%(id)s.%(ext)s') - or (opts.usetitle and u'%(title)s-%(id)s.%(ext)s') - or (opts.useid and u'%(id)s.%(ext)s') - or (opts.autonumber and u'%(autonumber)s-%(id)s.%(ext)s') - or DEFAULT_OUTTMPL) + outtmpl = ((opts.outtmpl is not None and opts.outtmpl) + or (opts.format == '-1' and opts.usetitle and '%(title)s-%(id)s-%(format)s.%(ext)s') + or (opts.format == '-1' and '%(id)s-%(format)s.%(ext)s') + or (opts.usetitle and opts.autonumber and '%(autonumber)s-%(title)s-%(id)s.%(ext)s') + or (opts.usetitle and '%(title)s-%(id)s.%(ext)s') + or (opts.useid and '%(id)s.%(ext)s') + or (opts.autonumber and '%(autonumber)s-%(id)s.%(ext)s') + or DEFAULT_OUTTMPL) if not os.path.splitext(outtmpl)[1] and opts.extractaudio: - parser.error(u'Cannot download a video and extract audio into the same' - u' file! Use "{0}.%(ext)s" instead of "{0}" as the output' - u' template'.format(outtmpl)) + parser.error('Cannot download a video and extract audio into the same' + ' file! 
Use "{0}.%(ext)s" instead of "{0}" as the output' + ' template'.format(outtmpl)) - any_printing = opts.geturl or opts.gettitle or opts.getid or opts.getthumbnail or opts.getdescription or opts.getfilename or opts.getformat or opts.getduration or opts.dumpjson - download_archive_fn = os.path.expanduser(opts.download_archive) if opts.download_archive is not None else opts.download_archive + any_printing = opts.geturl or opts.gettitle or opts.getid or opts.getthumbnail or opts.getdescription or opts.getfilename or opts.getformat or opts.getduration or opts.dumpjson or opts.dump_single_json + download_archive_fn = compat_expanduser(opts.download_archive) if opts.download_archive is not None else opts.download_archive ydl_opts = { 'usenetrc': opts.usenetrc, @@ -304,8 +229,9 @@ def _real_main(argv=None): 'forcefilename': opts.getfilename, 'forceformat': opts.getformat, 'forcejson': opts.dumpjson, - 'simulate': opts.simulate, - 'skip_download': (opts.skip_download or opts.simulate or any_printing), + 'dump_single_json': opts.dump_single_json, + 'simulate': opts.simulate or any_printing, + 'skip_download': opts.skip_download, 'format': opts.format, 'format_limit': opts.format_limit, 'listformats': opts.listformats, @@ -323,6 +249,7 @@ def _real_main(argv=None): 'progress_with_newline': opts.progress_with_newline, 'playliststart': opts.playliststart, 'playlistend': opts.playlistend, + 'playlistreverse': opts.playlist_reverse, 'noplaylist': opts.noplaylist, 'logtostderr': opts.outtmpl == '-', 'consoletitle': opts.consoletitle, @@ -369,12 +296,10 @@ def _real_main(argv=None): 'youtube_include_dash_manifest': opts.youtube_include_dash_manifest, 'encoding': opts.encoding, 'exec_cmd': opts.exec_cmd, + 'extract_flat': opts.extract_flat, } with YoutubeDL(ydl_opts) as ydl: - ydl.print_debug_header() - ydl.add_default_info_extractors() - # PostProcessors # Add the metadata pp first, the other pps will copy it if opts.addmetadata: @@ -392,7 +317,6 @@ def _real_main(argv=None): ydl.add_post_processor(FFmpegAudioFixPP()) ydl.add_post_processor(AtomicParsleyPP()) - # Please keep ExecAfterDownload towards the bottom as it allows the user to modify the final file in any way. # So if the user is able to remove the file before your postprocessor runs it might cause a few problems. 
if opts.exec_cmd: @@ -409,18 +333,19 @@ def _real_main(argv=None): # Maybe do nothing if (len(all_urls) < 1) and (opts.load_info_filename is None): - if not (opts.update_self or opts.rm_cachedir): - parser.error(u'you must provide at least one URL') - else: + if opts.update_self or opts.rm_cachedir: sys.exit() + ydl.warn_if_short_id(sys.argv[1:] if argv is None else argv) + parser.error('you must provide at least one URL') + try: if opts.load_info_filename is not None: retcode = ydl.download_with_info_file(opts.load_info_filename) else: retcode = ydl.download(all_urls) except MaxDownloadsReached: - ydl.to_screen(u'--max-download limit reached, aborting.') + ydl.to_screen('--max-download limit reached, aborting.') retcode = 101 sys.exit(retcode) @@ -432,6 +357,6 @@ def main(argv=None): except DownloadError: sys.exit(1) except SameFileError: - sys.exit(u'ERROR: fixed output name but more than one file to download') + sys.exit('ERROR: fixed output name but more than one file to download') except KeyboardInterrupt: - sys.exit(u'\nERROR: Interrupted by user') + sys.exit('\nERROR: Interrupted by user') diff --git a/youtube_dl/__main__.py b/youtube_dl/__main__.py index 3fe29c91f..65a0f891c 100755 --- a/youtube_dl/__main__.py +++ b/youtube_dl/__main__.py @@ -1,4 +1,5 @@ #!/usr/bin/env python +from __future__ import unicode_literals # Execute with # $ python youtube_dl/__main__.py (2.6+) diff --git a/youtube_dl/aes.py b/youtube_dl/aes.py index e9c5e2152..5efd0f836 100644 --- a/youtube_dl/aes.py +++ b/youtube_dl/aes.py @@ -1,3 +1,5 @@ +from __future__ import unicode_literals + __all__ = ['aes_encrypt', 'key_expansion', 'aes_ctr_decrypt', 'aes_cbc_decrypt', 'aes_decrypt_text'] import base64 @@ -7,10 +9,11 @@ from .utils import bytes_to_intlist, intlist_to_bytes BLOCK_SIZE_BYTES = 16 + def aes_ctr_decrypt(data, key, counter): """ Decrypt with aes in counter mode - + @param {int[]} data cipher @param {int[]} key 16/24/32-Byte cipher key @param {instance} counter Instance whose next_value function (@returns {int[]} 16-Byte block) @@ -19,23 +22,24 @@ def aes_ctr_decrypt(data, key, counter): """ expanded_key = key_expansion(key) block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES)) - - decrypted_data=[] + + decrypted_data = [] for i in range(block_count): counter_block = counter.next_value() - block = data[i*BLOCK_SIZE_BYTES : (i+1)*BLOCK_SIZE_BYTES] - block += [0]*(BLOCK_SIZE_BYTES - len(block)) - + block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES] + block += [0] * (BLOCK_SIZE_BYTES - len(block)) + cipher_counter_block = aes_encrypt(counter_block, expanded_key) decrypted_data += xor(block, cipher_counter_block) decrypted_data = decrypted_data[:len(data)] - + return decrypted_data + def aes_cbc_decrypt(data, key, iv): """ Decrypt with aes in CBC mode - + @param {int[]} data cipher @param {int[]} key 16/24/32-Byte cipher key @param {int[]} iv 16-Byte IV @@ -43,94 +47,98 @@ def aes_cbc_decrypt(data, key, iv): """ expanded_key = key_expansion(key) block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES)) - - decrypted_data=[] + + decrypted_data = [] previous_cipher_block = iv for i in range(block_count): - block = data[i*BLOCK_SIZE_BYTES : (i+1)*BLOCK_SIZE_BYTES] - block += [0]*(BLOCK_SIZE_BYTES - len(block)) - + block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES] + block += [0] * (BLOCK_SIZE_BYTES - len(block)) + decrypted_block = aes_decrypt(block, expanded_key) decrypted_data += xor(decrypted_block, previous_cipher_block) previous_cipher_block = block decrypted_data = 
decrypted_data[:len(data)] - + return decrypted_data + def key_expansion(data): """ Generate key schedule - + @param {int[]} data 16/24/32-Byte cipher key - @returns {int[]} 176/208/240-Byte expanded key + @returns {int[]} 176/208/240-Byte expanded key """ - data = data[:] # copy + data = data[:] # copy rcon_iteration = 1 key_size_bytes = len(data) expanded_key_size_bytes = (key_size_bytes // 4 + 7) * BLOCK_SIZE_BYTES - + while len(data) < expanded_key_size_bytes: temp = data[-4:] temp = key_schedule_core(temp, rcon_iteration) rcon_iteration += 1 - data += xor(temp, data[-key_size_bytes : 4-key_size_bytes]) - + data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes]) + for _ in range(3): temp = data[-4:] - data += xor(temp, data[-key_size_bytes : 4-key_size_bytes]) - + data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes]) + if key_size_bytes == 32: temp = data[-4:] temp = sub_bytes(temp) - data += xor(temp, data[-key_size_bytes : 4-key_size_bytes]) - - for _ in range(3 if key_size_bytes == 32 else 2 if key_size_bytes == 24 else 0): + data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes]) + + for _ in range(3 if key_size_bytes == 32 else 2 if key_size_bytes == 24 else 0): temp = data[-4:] - data += xor(temp, data[-key_size_bytes : 4-key_size_bytes]) + data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes]) data = data[:expanded_key_size_bytes] - + return data + def aes_encrypt(data, expanded_key): """ Encrypt one block with aes - + @param {int[]} data 16-Byte state - @param {int[]} expanded_key 176/208/240-Byte expanded key + @param {int[]} expanded_key 176/208/240-Byte expanded key @returns {int[]} 16-Byte cipher """ rounds = len(expanded_key) // BLOCK_SIZE_BYTES - 1 data = xor(data, expanded_key[:BLOCK_SIZE_BYTES]) - for i in range(1, rounds+1): + for i in range(1, rounds + 1): data = sub_bytes(data) data = shift_rows(data) if i != rounds: data = mix_columns(data) - data = xor(data, expanded_key[i*BLOCK_SIZE_BYTES : (i+1)*BLOCK_SIZE_BYTES]) + data = xor(data, expanded_key[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]) return data + def aes_decrypt(data, expanded_key): """ Decrypt one block with aes - + @param {int[]} data 16-Byte cipher @param {int[]} expanded_key 176/208/240-Byte expanded key @returns {int[]} 16-Byte state """ rounds = len(expanded_key) // BLOCK_SIZE_BYTES - 1 - + for i in range(rounds, 0, -1): - data = xor(data, expanded_key[i*BLOCK_SIZE_BYTES : (i+1)*BLOCK_SIZE_BYTES]) + data = xor(data, expanded_key[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]) if i != rounds: data = mix_columns_inv(data) data = shift_rows_inv(data) data = sub_bytes_inv(data) data = xor(data, expanded_key[:BLOCK_SIZE_BYTES]) - + return data + def aes_decrypt_text(data, password, key_size_bytes): """ Decrypt text @@ -138,33 +146,34 @@ def aes_decrypt_text(data, password, key_size_bytes): - The cipher key is retrieved by encrypting the first 16 Byte of 'password' with the first 'key_size_bytes' Bytes from 'password' (if necessary filled with 0's) - Mode of operation is 'counter' - + @param {str} data Base64 encoded string @param {str,unicode} password Password (will be encoded with utf-8) @param {int} key_size_bytes Possible values: 16 for 128-Bit, 24 for 192-Bit or 32 for 256-Bit @returns {str} Decrypted data """ NONCE_LENGTH_BYTES = 8 - + data = bytes_to_intlist(base64.b64decode(data)) password = bytes_to_intlist(password.encode('utf-8')) - - key = password[:key_size_bytes] + [0]*(key_size_bytes - len(password)) + + key = password[:key_size_bytes] + [0] * (key_size_bytes 
- len(password)) key = aes_encrypt(key[:BLOCK_SIZE_BYTES], key_expansion(key)) * (key_size_bytes // BLOCK_SIZE_BYTES) - + nonce = data[:NONCE_LENGTH_BYTES] cipher = data[NONCE_LENGTH_BYTES:] - + class Counter: - __value = nonce + [0]*(BLOCK_SIZE_BYTES - NONCE_LENGTH_BYTES) + __value = nonce + [0] * (BLOCK_SIZE_BYTES - NONCE_LENGTH_BYTES) + def next_value(self): temp = self.__value self.__value = inc(self.__value) return temp - + decrypted_data = aes_ctr_decrypt(cipher, key, Counter()) plaintext = intlist_to_bytes(decrypted_data) - + return plaintext RCON = (0x8d, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36) @@ -200,14 +209,14 @@ SBOX_INV = (0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38, 0xbf, 0x40, 0xa3, 0x 0x60, 0x51, 0x7f, 0xa9, 0x19, 0xb5, 0x4a, 0x0d, 0x2d, 0xe5, 0x7a, 0x9f, 0x93, 0xc9, 0x9c, 0xef, 0xa0, 0xe0, 0x3b, 0x4d, 0xae, 0x2a, 0xf5, 0xb0, 0xc8, 0xeb, 0xbb, 0x3c, 0x83, 0x53, 0x99, 0x61, 0x17, 0x2b, 0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26, 0xe1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0c, 0x7d) -MIX_COLUMN_MATRIX = ((0x2,0x3,0x1,0x1), - (0x1,0x2,0x3,0x1), - (0x1,0x1,0x2,0x3), - (0x3,0x1,0x1,0x2)) -MIX_COLUMN_MATRIX_INV = ((0xE,0xB,0xD,0x9), - (0x9,0xE,0xB,0xD), - (0xD,0x9,0xE,0xB), - (0xB,0xD,0x9,0xE)) +MIX_COLUMN_MATRIX = ((0x2, 0x3, 0x1, 0x1), + (0x1, 0x2, 0x3, 0x1), + (0x1, 0x1, 0x2, 0x3), + (0x3, 0x1, 0x1, 0x2)) +MIX_COLUMN_MATRIX_INV = ((0xE, 0xB, 0xD, 0x9), + (0x9, 0xE, 0xB, 0xD), + (0xD, 0x9, 0xE, 0xB), + (0xB, 0xD, 0x9, 0xE)) RIJNDAEL_EXP_TABLE = (0x01, 0x03, 0x05, 0x0F, 0x11, 0x33, 0x55, 0xFF, 0x1A, 0x2E, 0x72, 0x96, 0xA1, 0xF8, 0x13, 0x35, 0x5F, 0xE1, 0x38, 0x48, 0xD8, 0x73, 0x95, 0xA4, 0xF7, 0x02, 0x06, 0x0A, 0x1E, 0x22, 0x66, 0xAA, 0xE5, 0x34, 0x5C, 0xE4, 0x37, 0x59, 0xEB, 0x26, 0x6A, 0xBE, 0xD9, 0x70, 0x90, 0xAB, 0xE6, 0x31, @@ -241,30 +250,37 @@ RIJNDAEL_LOG_TABLE = (0x00, 0x00, 0x19, 0x01, 0x32, 0x02, 0x1a, 0xc6, 0x4b, 0xc7 0x44, 0x11, 0x92, 0xd9, 0x23, 0x20, 0x2e, 0x89, 0xb4, 0x7c, 0xb8, 0x26, 0x77, 0x99, 0xe3, 0xa5, 0x67, 0x4a, 0xed, 0xde, 0xc5, 0x31, 0xfe, 0x18, 0x0d, 0x63, 0x8c, 0x80, 0xc0, 0xf7, 0x70, 0x07) + def sub_bytes(data): return [SBOX[x] for x in data] + def sub_bytes_inv(data): return [SBOX_INV[x] for x in data] + def rotate(data): return data[1:] + [data[0]] + def key_schedule_core(data, rcon_iteration): data = rotate(data) data = sub_bytes(data) data[0] = data[0] ^ RCON[rcon_iteration] - + return data + def xor(data1, data2): - return [x^y for x, y in zip(data1, data2)] + return [x ^ y for x, y in zip(data1, data2)] + def rijndael_mul(a, b): - if(a==0 or b==0): + if(a == 0 or b == 0): return 0 return RIJNDAEL_EXP_TABLE[(RIJNDAEL_LOG_TABLE[a] + RIJNDAEL_LOG_TABLE[b]) % 0xFF] + def mix_column(data, matrix): data_mixed = [] for row in range(4): @@ -275,33 +291,38 @@ def mix_column(data, matrix): data_mixed.append(mixed) return data_mixed + def mix_columns(data, matrix=MIX_COLUMN_MATRIX): data_mixed = [] for i in range(4): - column = data[i*4 : (i+1)*4] + column = data[i * 4: (i + 1) * 4] data_mixed += mix_column(column, matrix) return data_mixed + def mix_columns_inv(data): return mix_columns(data, MIX_COLUMN_MATRIX_INV) + def shift_rows(data): data_shifted = [] for column in range(4): for row in range(4): - data_shifted.append( data[((column + row) & 0b11) * 4 + row] ) + data_shifted.append(data[((column + row) & 0b11) * 4 + row]) return data_shifted + def shift_rows_inv(data): data_shifted = [] for column in range(4): for row in range(4): - data_shifted.append( data[((column - row) & 0b11) * 4 + row] ) + data_shifted.append(data[((column - row) & 
0b11) * 4 + row]) return data_shifted + def inc(data): - data = data[:] # copy - for i in range(len(data)-1,-1,-1): + data = data[:] # copy + for i in range(len(data) - 1, -1, -1): if data[i] == 255: data[i] = 0 else: diff --git a/youtube_dl/cache.py b/youtube_dl/cache.py index 79ff09f78..5fe839eb1 100644 --- a/youtube_dl/cache.py +++ b/youtube_dl/cache.py @@ -8,9 +8,8 @@ import re import shutil import traceback -from .utils import ( - write_json_file, -) +from .compat import compat_expanduser, compat_getenv +from .utils import write_json_file class Cache(object): @@ -20,9 +19,9 @@ class Cache(object): def _get_root_dir(self): res = self._ydl.params.get('cachedir') if res is None: - cache_root = os.environ.get('XDG_CACHE_HOME', '~/.cache') + cache_root = compat_getenv('XDG_CACHE_HOME', '~/.cache') res = os.path.join(cache_root, 'youtube-dl') - return os.path.expanduser(res) + return compat_expanduser(res) def _get_cache_fn(self, section, key, dtype): assert re.match(r'^[a-zA-Z0-9_.-]+$', section), \ diff --git a/youtube_dl/compat.py b/youtube_dl/compat.py new file mode 100644 index 000000000..46d438846 --- /dev/null +++ b/youtube_dl/compat.py @@ -0,0 +1,358 @@ +from __future__ import unicode_literals + +import getpass +import optparse +import os +import re +import subprocess +import sys + + +try: + import urllib.request as compat_urllib_request +except ImportError: # Python 2 + import urllib2 as compat_urllib_request + +try: + import urllib.error as compat_urllib_error +except ImportError: # Python 2 + import urllib2 as compat_urllib_error + +try: + import urllib.parse as compat_urllib_parse +except ImportError: # Python 2 + import urllib as compat_urllib_parse + +try: + from urllib.parse import urlparse as compat_urllib_parse_urlparse +except ImportError: # Python 2 + from urlparse import urlparse as compat_urllib_parse_urlparse + +try: + import urllib.parse as compat_urlparse +except ImportError: # Python 2 + import urlparse as compat_urlparse + +try: + import http.cookiejar as compat_cookiejar +except ImportError: # Python 2 + import cookielib as compat_cookiejar + +try: + import html.entities as compat_html_entities +except ImportError: # Python 2 + import htmlentitydefs as compat_html_entities + +try: + import html.parser as compat_html_parser +except ImportError: # Python 2 + import HTMLParser as compat_html_parser + +try: + import http.client as compat_http_client +except ImportError: # Python 2 + import httplib as compat_http_client + +try: + from urllib.error import HTTPError as compat_HTTPError +except ImportError: # Python 2 + from urllib2 import HTTPError as compat_HTTPError + +try: + from urllib.request import urlretrieve as compat_urlretrieve +except ImportError: # Python 2 + from urllib import urlretrieve as compat_urlretrieve + + +try: + from subprocess import DEVNULL + compat_subprocess_get_DEVNULL = lambda: DEVNULL +except ImportError: + compat_subprocess_get_DEVNULL = lambda: open(os.path.devnull, 'w') + +try: + from urllib.parse import unquote as compat_urllib_parse_unquote +except ImportError: + def compat_urllib_parse_unquote(string, encoding='utf-8', errors='replace'): + if string == '': + return string + res = string.split('%') + if len(res) == 1: + return string + if encoding is None: + encoding = 'utf-8' + if errors is None: + errors = 'replace' + # pct_sequence: contiguous sequence of percent-encoded bytes, decoded + pct_sequence = b'' + string = res[0] + for item in res[1:]: + try: + if not item: + raise ValueError + pct_sequence += item[:2].decode('hex') + rest 
= item[2:] + if not rest: + # This segment was just a single percent-encoded character. + # May be part of a sequence of code units, so delay decoding. + # (Stored in pct_sequence). + continue + except ValueError: + rest = '%' + item + # Encountered non-percent-encoded characters. Flush the current + # pct_sequence. + string += pct_sequence.decode(encoding, errors) + rest + pct_sequence = b'' + if pct_sequence: + # Flush the final pct_sequence + string += pct_sequence.decode(encoding, errors) + return string + + +try: + from urllib.parse import parse_qs as compat_parse_qs +except ImportError: # Python 2 + # HACK: The following is the correct parse_qs implementation from cpython 3's stdlib. + # Python 2's version is apparently totally broken + + def _parse_qsl(qs, keep_blank_values=False, strict_parsing=False, + encoding='utf-8', errors='replace'): + qs, _coerce_result = qs, unicode + pairs = [s2 for s1 in qs.split('&') for s2 in s1.split(';')] + r = [] + for name_value in pairs: + if not name_value and not strict_parsing: + continue + nv = name_value.split('=', 1) + if len(nv) != 2: + if strict_parsing: + raise ValueError("bad query field: %r" % (name_value,)) + # Handle case of a control-name with no equal sign + if keep_blank_values: + nv.append('') + else: + continue + if len(nv[1]) or keep_blank_values: + name = nv[0].replace('+', ' ') + name = compat_urllib_parse_unquote( + name, encoding=encoding, errors=errors) + name = _coerce_result(name) + value = nv[1].replace('+', ' ') + value = compat_urllib_parse_unquote( + value, encoding=encoding, errors=errors) + value = _coerce_result(value) + r.append((name, value)) + return r + + def compat_parse_qs(qs, keep_blank_values=False, strict_parsing=False, + encoding='utf-8', errors='replace'): + parsed_result = {} + pairs = _parse_qsl(qs, keep_blank_values, strict_parsing, + encoding=encoding, errors=errors) + for name, value in pairs: + if name in parsed_result: + parsed_result[name].append(value) + else: + parsed_result[name] = [value] + return parsed_result + +try: + compat_str = unicode # Python 2 +except NameError: + compat_str = str + +try: + compat_chr = unichr # Python 2 +except NameError: + compat_chr = chr + +try: + from xml.etree.ElementTree import ParseError as compat_xml_parse_error +except ImportError: # Python 2.6 + from xml.parsers.expat import ExpatError as compat_xml_parse_error + +try: + from shlex import quote as shlex_quote +except ImportError: # Python < 3.3 + def shlex_quote(s): + if re.match(r'^[-_\w./]+$', s): + return s + else: + return "'" + s.replace("'", "'\"'\"'") + "'" + + +def compat_ord(c): + if type(c) is int: + return c + else: + return ord(c) + + +if sys.version_info >= (3, 0): + compat_getenv = os.getenv + compat_expanduser = os.path.expanduser +else: + # Environment variables should be decoded with filesystem encoding. + # Otherwise it will fail if any non-ASCII characters present (see #3854 #3217 #2918) + + def compat_getenv(key, default=None): + from .utils import get_filesystem_encoding + env = os.getenv(key, default) + if env: + env = env.decode(get_filesystem_encoding()) + return env + + # HACK: The default implementations of os.path.expanduser from cpython do not decode + # environment variables with filesystem encoding. We will work around this by + # providing adjusted implementations. + # The following are os.path.expanduser implementations from cpython 2.7.8 stdlib + # for different platforms with correct environment variables decoding. 
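
A minimal illustration of the decoding problem these helpers address, assuming a UTF-8 locale and a hypothetical non-ASCII home directory (the 'müller' path is made up):

    import os

    # On Python 2, os.getenv() returns undecoded bytes:
    home = os.getenv('HOME')        # e.g. '/home/m\xc3\xbcller'
    # Mixing those bytes into unicode paths raises UnicodeDecodeError,
    # so compat_getenv() above decodes them first:
    home = home.decode('utf-8')     # u'/home/m\xfcller'
    # (the real helper uses get_filesystem_encoding() rather than
    # hard-coding 'utf-8')
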
+ + if os.name == 'posix': + def compat_expanduser(path): + """Expand ~ and ~user constructions. If user or $HOME is unknown, + do nothing.""" + if not path.startswith('~'): + return path + i = path.find('/', 1) + if i < 0: + i = len(path) + if i == 1: + if 'HOME' not in os.environ: + import pwd + userhome = pwd.getpwuid(os.getuid()).pw_dir + else: + userhome = compat_getenv('HOME') + else: + import pwd + try: + pwent = pwd.getpwnam(path[1:i]) + except KeyError: + return path + userhome = pwent.pw_dir + userhome = userhome.rstrip('/') + return (userhome + path[i:]) or '/' + elif os.name == 'nt' or os.name == 'ce': + def compat_expanduser(path): + """Expand ~ and ~user constructs. + + If user or $HOME is unknown, do nothing.""" + if path[:1] != '~': + return path + i, n = 1, len(path) + while i < n and path[i] not in '/\\': + i = i + 1 + + if 'HOME' in os.environ: + userhome = compat_getenv('HOME') + elif 'USERPROFILE' in os.environ: + userhome = compat_getenv('USERPROFILE') + elif 'HOMEPATH' not in os.environ: + return path + else: + try: + drive = compat_getenv('HOMEDRIVE') + except KeyError: + drive = '' + userhome = os.path.join(drive, compat_getenv('HOMEPATH')) + + if i != 1: # ~user + userhome = os.path.join(os.path.dirname(userhome), path[1:i]) + + return userhome + path[i:] + else: + compat_expanduser = os.path.expanduser + + +if sys.version_info < (3, 0): + def compat_print(s): + from .utils import preferredencoding + print(s.encode(preferredencoding(), 'xmlcharrefreplace')) +else: + def compat_print(s): + assert isinstance(s, compat_str) + print(s) + + +try: + subprocess_check_output = subprocess.check_output +except AttributeError: + def subprocess_check_output(*args, **kwargs): + assert 'input' not in kwargs + p = subprocess.Popen(*args, stdout=subprocess.PIPE, **kwargs) + output, _ = p.communicate() + ret = p.poll() + if ret: + raise subprocess.CalledProcessError(ret, p.args, output=output) + return output + +if sys.version_info < (3, 0) and sys.platform == 'win32': + def compat_getpass(prompt, *args, **kwargs): + if isinstance(prompt, compat_str): + from .utils import preferredencoding + prompt = prompt.encode(preferredencoding()) + return getpass.getpass(prompt, *args, **kwargs) +else: + compat_getpass = getpass.getpass + +# Old 2.6 and 2.7 releases require kwargs to be bytes +try: + def _testfunc(x): + pass + _testfunc(**{'x': 0}) +except TypeError: + def compat_kwargs(kwargs): + return dict((bytes(k), v) for k, v in kwargs.items()) +else: + compat_kwargs = lambda kwargs: kwargs + + +# Fix https://github.com/rg3/youtube-dl/issues/4223 +# See http://bugs.python.org/issue9161 for what is broken +def workaround_optparse_bug9161(): + op = optparse.OptionParser() + og = optparse.OptionGroup(op, 'foo') + try: + og.add_option('-t') + except TypeError: + real_add_option = optparse.OptionGroup.add_option + + def _compat_add_option(self, *args, **kwargs): + enc = lambda v: ( + v.encode('ascii', 'replace') if isinstance(v, compat_str) + else v) + bargs = [enc(a) for a in args] + bkwargs = dict( + (k, enc(v)) for k, v in kwargs.items()) + return real_add_option(self, *bargs, **bkwargs) + optparse.OptionGroup.add_option = _compat_add_option + + +__all__ = [ + 'compat_HTTPError', + 'compat_chr', + 'compat_cookiejar', + 'compat_expanduser', + 'compat_getenv', + 'compat_getpass', + 'compat_html_entities', + 'compat_html_parser', + 'compat_http_client', + 'compat_kwargs', + 'compat_ord', + 'compat_parse_qs', + 'compat_print', + 'compat_str', + 'compat_subprocess_get_DEVNULL', + 
'compat_urllib_error', + 'compat_urllib_parse', + 'compat_urllib_parse_unquote', + 'compat_urllib_parse_urlparse', + 'compat_urllib_request', + 'compat_urlparse', + 'compat_urlretrieve', + 'compat_xml_parse_error', + 'shlex_quote', + 'subprocess_check_output', + 'workaround_optparse_bug9161', +] diff --git a/youtube_dl/downloader/__init__.py b/youtube_dl/downloader/__init__.py index 3f941596e..31e28df58 100644 --- a/youtube_dl/downloader/__init__.py +++ b/youtube_dl/downloader/__init__.py @@ -30,3 +30,8 @@ def get_suitable_downloader(info_dict): return F4mFD else: return HttpFD + +__all__ = [ + 'get_suitable_downloader', + 'FileDownloader', +] diff --git a/youtube_dl/downloader/common.py b/youtube_dl/downloader/common.py index f85f0c94e..8181bca09 100644 --- a/youtube_dl/downloader/common.py +++ b/youtube_dl/downloader/common.py @@ -1,10 +1,12 @@ +from __future__ import unicode_literals + import os import re import sys import time +from ..compat import compat_str from ..utils import ( - compat_str, encodeFilename, format_bytes, timeconvert, @@ -78,8 +80,10 @@ class FileDownloader(object): def calc_eta(start, now, total, current): if total is None: return None + if now is None: + now = time.time() dif = now - start - if current == 0 or dif < 0.001: # One millisecond + if current == 0 or dif < 0.001: # One millisecond return None rate = float(current) / dif return int((float(total) - float(current)) / rate) @@ -93,7 +97,7 @@ class FileDownloader(object): @staticmethod def calc_speed(start, now, bytes): dif = now - start - if bytes == 0 or dif < 0.001: # One millisecond + if bytes == 0 or dif < 0.001: # One millisecond return None return float(bytes) / dif @@ -106,7 +110,7 @@ class FileDownloader(object): @staticmethod def best_block_size(elapsed_time, bytes): new_min = max(bytes / 2.0, 1.0) - new_max = min(max(bytes * 2.0, 1.0), 4194304) # Do not surpass 4 MB + new_max = min(max(bytes * 2.0, 1.0), 4194304) # Do not surpass 4 MB if elapsed_time < 0.001: return int(new_max) rate = bytes / elapsed_time @@ -144,29 +148,30 @@ class FileDownloader(object): def report_error(self, *args, **kargs): self.ydl.report_error(*args, **kargs) - def slow_down(self, start_time, byte_counter): + def slow_down(self, start_time, now, byte_counter): """Sleep if the download speed is over the rate limit.""" rate_limit = self.params.get('ratelimit', None) if rate_limit is None or byte_counter == 0: return - now = time.time() + if now is None: + now = time.time() elapsed = now - start_time if elapsed <= 0.0: return speed = float(byte_counter) / elapsed if speed > rate_limit: - time.sleep((byte_counter - rate_limit * (now - start_time)) / rate_limit) + time.sleep(max((byte_counter // rate_limit) - elapsed, 0)) def temp_name(self, filename): """Returns a temporary filename for the given filename.""" - if self.params.get('nopart', False) or filename == u'-' or \ + if self.params.get('nopart', False) or filename == '-' or \ (os.path.exists(encodeFilename(filename)) and not os.path.isfile(encodeFilename(filename))): return filename - return filename + u'.part' + return filename + '.part' def undo_temp_name(self, filename): - if filename.endswith(u'.part'): - return filename[:-len(u'.part')] + if filename.endswith('.part'): + return filename[:-len('.part')] return filename def try_rename(self, old_filename, new_filename): @@ -175,7 +180,7 @@ class FileDownloader(object): return os.rename(encodeFilename(old_filename), encodeFilename(new_filename)) except (IOError, OSError) as err: - self.report_error(u'unable to rename 
file: %s' % compat_str(err)) + self.report_error('unable to rename file: %s' % compat_str(err)) def try_utime(self, filename, last_modified_hdr): """Try to set the last-modified time of the given file.""" @@ -200,10 +205,10 @@ class FileDownloader(object): def report_destination(self, filename): """Report destination filename.""" - self.to_screen(u'[download] Destination: ' + filename) + self.to_screen('[download] Destination: ' + filename) def _report_progress_status(self, msg, is_last_line=False): - fullmsg = u'[download] ' + msg + fullmsg = '[download] ' + msg if self.params.get('progress_with_newline', False): self.to_screen(fullmsg) else: @@ -211,13 +216,13 @@ class FileDownloader(object): prev_len = getattr(self, '_report_progress_prev_line_length', 0) if prev_len > len(fullmsg): - fullmsg += u' ' * (prev_len - len(fullmsg)) + fullmsg += ' ' * (prev_len - len(fullmsg)) self._report_progress_prev_line_length = len(fullmsg) - clear_line = u'\r' + clear_line = '\r' else: - clear_line = (u'\r\x1b[K' if sys.stderr.isatty() else u'\r') + clear_line = ('\r\x1b[K' if sys.stderr.isatty() else '\r') self.to_screen(clear_line + fullmsg, skip_eol=not is_last_line) - self.to_console_title(u'youtube-dl ' + msg) + self.to_console_title('youtube-dl ' + msg) def report_progress(self, percent, data_len_str, speed, eta): """Report download progress.""" @@ -233,7 +238,7 @@ class FileDownloader(object): percent_str = 'Unknown %' speed_str = self.format_speed(speed) - msg = (u'%s of %s at %s ETA %s' % + msg = ('%s of %s at %s ETA %s' % (percent_str, data_len_str, speed_str, eta_str)) self._report_progress_status(msg) @@ -243,37 +248,37 @@ class FileDownloader(object): downloaded_str = format_bytes(downloaded_data_len) speed_str = self.format_speed(speed) elapsed_str = FileDownloader.format_seconds(elapsed) - msg = u'%s at %s (%s)' % (downloaded_str, speed_str, elapsed_str) + msg = '%s at %s (%s)' % (downloaded_str, speed_str, elapsed_str) self._report_progress_status(msg) def report_finish(self, data_len_str, tot_time): """Report download finished.""" if self.params.get('noprogress', False): - self.to_screen(u'[download] Download completed') + self.to_screen('[download] Download completed') else: self._report_progress_status( - (u'100%% of %s in %s' % + ('100%% of %s in %s' % (data_len_str, self.format_seconds(tot_time))), is_last_line=True) def report_resuming_byte(self, resume_len): """Report attempt to resume at given byte.""" - self.to_screen(u'[download] Resuming download at byte %s' % resume_len) + self.to_screen('[download] Resuming download at byte %s' % resume_len) def report_retry(self, count, retries): """Report retry in case of HTTP error 5xx""" - self.to_screen(u'[download] Got server HTTP error. Retrying (attempt %d of %d)...' % (count, retries)) + self.to_screen('[download] Got server HTTP error. Retrying (attempt %d of %d)...' 
% (count, retries)) def report_file_already_downloaded(self, file_name): """Report file has already been fully downloaded.""" try: - self.to_screen(u'[download] %s has already been downloaded' % file_name) + self.to_screen('[download] %s has already been downloaded' % file_name) except UnicodeEncodeError: - self.to_screen(u'[download] The file has already been downloaded') + self.to_screen('[download] The file has already been downloaded') def report_unable_to_resume(self): """Report it was impossible to resume download.""" - self.to_screen(u'[download] Unable to resume') + self.to_screen('[download] Unable to resume') def download(self, filename, info_dict): """Download to a filename using the info from info_dict @@ -293,7 +298,7 @@ class FileDownloader(object): def real_download(self, filename, info_dict): """Real download process. Redefine in subclasses.""" - raise NotImplementedError(u'This method must be implemented by subclasses') + raise NotImplementedError('This method must be implemented by subclasses') def _hook_progress(self, status): for ph in self._progress_hooks: diff --git a/youtube_dl/downloader/f4m.py b/youtube_dl/downloader/f4m.py index 54dd6ac3f..ef3e0d5f4 100644 --- a/youtube_dl/downloader/f4m.py +++ b/youtube_dl/downloader/f4m.py @@ -9,10 +9,12 @@ import xml.etree.ElementTree as etree from .common import FileDownloader from .http import HttpFD +from ..compat import ( + compat_urlparse, +) from ..utils import ( struct_pack, struct_unpack, - compat_urlparse, format_bytes, encodeFilename, sanitize_open, @@ -55,7 +57,7 @@ class FlvReader(io.BytesIO): if size == 1: real_size = self.read_unsigned_long_long() header_end = 16 - return real_size, box_type, self.read(real_size-header_end) + return real_size, box_type, self.read(real_size - header_end) def read_asrt(self): # version @@ -180,7 +182,7 @@ def build_fragments_list(boot_info): n_frags = segment_run_entry[1] fragment_run_entry_table = boot_info['fragments'][0]['fragments'] first_frag_number = fragment_run_entry_table[0]['first'] - for (i, frag_number) in zip(range(1, n_frags+1), itertools.count(first_frag_number)): + for (i, frag_number) in zip(range(1, n_frags + 1), itertools.count(first_frag_number)): res.append((1, frag_number)) return res @@ -225,14 +227,16 @@ class F4mFD(FileDownloader): self.to_screen('[download] Downloading f4m manifest') manifest = self.ydl.urlopen(man_url).read() self.report_destination(filename) - http_dl = HttpQuietDownloader(self.ydl, + http_dl = HttpQuietDownloader( + self.ydl, { 'continuedl': True, 'quiet': True, 'noprogress': True, 'ratelimit': self.params.get('ratelimit', None), 'test': self.params.get('test', False), - }) + } + ) doc = etree.fromstring(manifest) formats = [(int(f.attrib.get('bitrate', -1)), f) for f in doc.findall(_add_ns('media'))] @@ -245,9 +249,16 @@ class F4mFD(FileDownloader): lambda f: int(f[0]) == requested_bitrate, formats))[0] base_url = compat_urlparse.urljoin(man_url, media.attrib['url']) - bootstrap = base64.b64decode(doc.find(_add_ns('bootstrapInfo')).text) + bootstrap_node = doc.find(_add_ns('bootstrapInfo')) + if bootstrap_node.text is None: + bootstrap_url = compat_urlparse.urljoin( + base_url, bootstrap_node.attrib['url']) + bootstrap = self.ydl.urlopen(bootstrap_url).read() + else: + bootstrap = base64.b64decode(bootstrap_node.text) metadata = base64.b64decode(media.find(_add_ns('metadata')).text) boot_info = read_bootstrap_info(bootstrap) + fragments_list = build_fragments_list(boot_info) if self.params.get('test', False): # We only download the 
first fragment @@ -271,7 +282,7 @@ class F4mFD(FileDownloader): def frag_progress_hook(status): frag_total_bytes = status.get('total_bytes', 0) estimated_size = (state['downloaded_bytes'] + - (total_frags - state['frag_counter']) * frag_total_bytes) + (total_frags - state['frag_counter']) * frag_total_bytes) if status['status'] == 'finished': state['downloaded_bytes'] += frag_total_bytes state['frag_counter'] += 1 @@ -281,13 +292,13 @@ class F4mFD(FileDownloader): frag_downloaded_bytes = status['downloaded_bytes'] byte_counter = state['downloaded_bytes'] + frag_downloaded_bytes frag_progress = self.calc_percent(frag_downloaded_bytes, - frag_total_bytes) + frag_total_bytes) progress = self.calc_percent(state['frag_counter'], total_frags) progress += frag_progress / float(total_frags) eta = self.calc_eta(start, time.time(), estimated_size, byte_counter) self.report_progress(progress, format_bytes(estimated_size), - status.get('speed'), eta) + status.get('speed'), eta) http_dl.add_progress_hook(frag_progress_hook) frags_filenames = [] diff --git a/youtube_dl/downloader/hls.py b/youtube_dl/downloader/hls.py index 68eafa403..5bb0f3cfd 100644 --- a/youtube_dl/downloader/hls.py +++ b/youtube_dl/downloader/hls.py @@ -4,10 +4,13 @@ import os import re import subprocess +from ..postprocessor.ffmpeg import FFmpegPostProcessor from .common import FileDownloader -from ..utils import ( +from ..compat import ( compat_urlparse, compat_urllib_request, +) +from ..utils import ( check_executable, encodeFilename, ) @@ -28,14 +31,17 @@ class HlsFD(FileDownloader): if check_executable(program, ['-version']): break else: - self.report_error(u'm3u8 download detected but ffmpeg or avconv could not be found. Please install one.') + self.report_error('m3u8 download detected but ffmpeg or avconv could not be found. 
Please install one.') return False cmd = [program] + args + ffpp = FFmpegPostProcessor(downloader=self) + ffpp.check_version() + retval = subprocess.call(cmd) if retval == 0: fsize = os.path.getsize(encodeFilename(tmpfilename)) - self.to_screen(u'\r[%s] %s bytes' % (cmd[0], fsize)) + self.to_screen('\r[%s] %s bytes' % (cmd[0], fsize)) self.try_rename(tmpfilename, filename) self._hook_progress({ 'downloaded_bytes': fsize, @@ -45,8 +51,8 @@ class HlsFD(FileDownloader): }) return True else: - self.to_stderr(u"\n") - self.report_error(u'%s exited with code %d' % (program, retval)) + self.to_stderr('\n') + self.report_error('%s exited with code %d' % (program, retval)) return False @@ -101,4 +107,3 @@ class NativeHlsFD(FileDownloader): }) self.try_rename(tmpfilename, filename) return True - diff --git a/youtube_dl/downloader/http.py b/youtube_dl/downloader/http.py index f62555ce0..e68f20c9f 100644 --- a/youtube_dl/downloader/http.py +++ b/youtube_dl/downloader/http.py @@ -1,12 +1,15 @@ +from __future__ import unicode_literals + import os import time from .common import FileDownloader -from ..utils import ( +from ..compat import ( compat_urllib_request, compat_urllib_error, +) +from ..utils import ( ContentTooShortError, - encodeFilename, sanitize_open, format_bytes, @@ -106,7 +109,7 @@ class HttpFD(FileDownloader): self.report_retry(count, retries) if count > retries: - self.report_error(u'giving up after %s retries' % retries) + self.report_error('giving up after %s retries' % retries) return False data_len = data.info().get('Content-length', None) @@ -124,26 +127,31 @@ class HttpFD(FileDownloader): min_data_len = self.params.get("min_filesize", None) max_data_len = self.params.get("max_filesize", None) if min_data_len is not None and data_len < min_data_len: - self.to_screen(u'\r[download] File is smaller than min-filesize (%s bytes < %s bytes). Aborting.' % (data_len, min_data_len)) + self.to_screen('\r[download] File is smaller than min-filesize (%s bytes < %s bytes). Aborting.' % (data_len, min_data_len)) return False if max_data_len is not None and data_len > max_data_len: - self.to_screen(u'\r[download] File is larger than max-filesize (%s bytes > %s bytes). Aborting.' % (data_len, max_data_len)) + self.to_screen('\r[download] File is larger than max-filesize (%s bytes > %s bytes). Aborting.' 
% (data_len, max_data_len)) return False data_len_str = format_bytes(data_len) byte_counter = 0 + resume_len block_size = self.params.get('buffersize', 1024) start = time.time() + + # measure time over whole while-loop, so slow_down() and best_block_size() work together properly + now = None # needed for slow_down() in the first loop run + before = start # start measuring while True: + # Download and write - before = time.time() data_block = data.read(block_size if not is_test else min(block_size, data_len - byte_counter)) - after = time.time() + byte_counter += len(data_block) + + # exit loop when download is finished if len(data_block) == 0: break - byte_counter += len(data_block) - # Open file just in time + # Open destination file just in time if stream is None: try: (stream, tmpfilename) = sanitize_open(tmpfilename, open_mode) @@ -151,19 +159,30 @@ class HttpFD(FileDownloader): filename = self.undo_temp_name(tmpfilename) self.report_destination(filename) except (OSError, IOError) as err: - self.report_error(u'unable to open for writing: %s' % str(err)) + self.report_error('unable to open for writing: %s' % str(err)) return False try: stream.write(data_block) except (IOError, OSError) as err: - self.to_stderr(u"\n") - self.report_error(u'unable to write data: %s' % str(err)) + self.to_stderr('\n') + self.report_error('unable to write data: %s' % str(err)) return False + + # Apply rate limit + self.slow_down(start, now, byte_counter - resume_len) + + # end measuring of one loop run + now = time.time() + after = now + + # Adjust block size if not self.params.get('noresizebuffer', False): block_size = self.best_block_size(after - before, len(data_block)) + before = after + # Progress message - speed = self.calc_speed(start, time.time(), byte_counter - resume_len) + speed = self.calc_speed(start, now, byte_counter - resume_len) if data_len is None: eta = percent = None else: @@ -184,14 +203,11 @@ class HttpFD(FileDownloader): if is_test and byte_counter == data_len: break - # Apply rate limit - self.slow_down(start, byte_counter - resume_len) - if stream is None: - self.to_stderr(u"\n") - self.report_error(u'Did not get any data blocks') + self.to_stderr('\n') + self.report_error('Did not get any data blocks') return False - if tmpfilename != u'-': + if tmpfilename != '-': stream.close() self.report_finish(data_len_str, (time.time() - start)) if data_len is not None and byte_counter != data_len: diff --git a/youtube_dl/downloader/mplayer.py b/youtube_dl/downloader/mplayer.py index 4de7f15f4..c53195da0 100644 --- a/youtube_dl/downloader/mplayer.py +++ b/youtube_dl/downloader/mplayer.py @@ -1,7 +1,10 @@ +from __future__ import unicode_literals + import os import subprocess from .common import FileDownloader +from ..compat import compat_subprocess_get_DEVNULL from ..utils import ( encodeFilename, ) @@ -13,19 +16,23 @@ class MplayerFD(FileDownloader): self.report_destination(filename) tmpfilename = self.temp_name(filename) - args = ['mplayer', '-really-quiet', '-vo', 'null', '-vc', 'dummy', '-dumpstream', '-dumpfile', tmpfilename, url] + args = [ + 'mplayer', '-really-quiet', '-vo', 'null', '-vc', 'dummy', + '-dumpstream', '-dumpfile', tmpfilename, url] # Check for mplayer first try: - subprocess.call(['mplayer', '-h'], stdout=(open(os.path.devnull, 'w')), stderr=subprocess.STDOUT) + subprocess.call( + ['mplayer', '-h'], + stdout=compat_subprocess_get_DEVNULL(), stderr=subprocess.STDOUT) except (OSError, IOError): - self.report_error(u'MMS or RTSP download detected but "%s" could not be run' % 
args[0]) + self.report_error('MMS or RTSP download detected but "%s" could not be run' % args[0]) return False # Download using mplayer. retval = subprocess.call(args) if retval == 0: fsize = os.path.getsize(encodeFilename(tmpfilename)) - self.to_screen(u'\r[%s] %s bytes' % (args[0], fsize)) + self.to_screen('\r[%s] %s bytes' % (args[0], fsize)) self.try_rename(tmpfilename, filename) self._hook_progress({ 'downloaded_bytes': fsize, @@ -35,6 +42,6 @@ class MplayerFD(FileDownloader): }) return True else: - self.to_stderr(u"\n") - self.report_error(u'mplayer exited with code %d' % retval) + self.to_stderr('\n') + self.report_error('mplayer exited with code %d' % retval) return False diff --git a/youtube_dl/downloader/rtmp.py b/youtube_dl/downloader/rtmp.py index 5eb108302..575912675 100644 --- a/youtube_dl/downloader/rtmp.py +++ b/youtube_dl/downloader/rtmp.py @@ -7,14 +7,20 @@ import sys import time from .common import FileDownloader +from ..compat import compat_str from ..utils import ( check_executable, - compat_str, encodeFilename, format_bytes, + get_exe_version, ) +def rtmpdump_version(): + return get_exe_version( + 'rtmpdump', ['--help'], r'(?i)RTMPDump\s*v?([0-9a-zA-Z._-]+)') + + class RtmpFD(FileDownloader): def real_download(self, filename, info_dict): def run_rtmpdump(args): @@ -40,13 +46,13 @@ class RtmpFD(FileDownloader): continue mobj = re.search(r'([0-9]+\.[0-9]{3}) kB / [0-9]+\.[0-9]{2} sec \(([0-9]{1,2}\.[0-9])%\)', line) if mobj: - downloaded_data_len = int(float(mobj.group(1))*1024) + downloaded_data_len = int(float(mobj.group(1)) * 1024) percent = float(mobj.group(2)) if not resume_percent: resume_percent = percent resume_downloaded_data_len = downloaded_data_len - eta = self.calc_eta(start, time.time(), 100-resume_percent, percent-resume_percent) - speed = self.calc_speed(start, time.time(), downloaded_data_len-resume_downloaded_data_len) + eta = self.calc_eta(start, time.time(), 100 - resume_percent, percent - resume_percent) + speed = self.calc_speed(start, time.time(), downloaded_data_len - resume_downloaded_data_len) data_len = None if percent > 0: data_len = int(downloaded_data_len * 100 / percent) @@ -66,7 +72,7 @@ class RtmpFD(FileDownloader): # no percent for live streams mobj = re.search(r'([0-9]+\.[0-9]{3}) kB / [0-9]+\.[0-9]{2} sec', line) if mobj: - downloaded_data_len = int(float(mobj.group(1))*1024) + downloaded_data_len = int(float(mobj.group(1)) * 1024) time_now = time.time() speed = self.calc_speed(start, time_now, downloaded_data_len) self.report_progress_live_stream(downloaded_data_len, speed, time_now - start) @@ -82,7 +88,7 @@ class RtmpFD(FileDownloader): if not cursor_in_new_line: self.to_screen('') cursor_in_new_line = True - self.to_screen('[rtmpdump] '+line) + self.to_screen('[rtmpdump] ' + line) proc.wait() if not cursor_in_new_line: self.to_screen('') @@ -174,7 +180,7 @@ class RtmpFD(FileDownloader): while (retval == RD_INCOMPLETE or retval == RD_FAILED) and not test and not live: prevsize = os.path.getsize(encodeFilename(tmpfilename)) self.to_screen('[rtmpdump] %s bytes' % prevsize) - time.sleep(5.0) # This seems to be needed + time.sleep(5.0) # This seems to be needed retval = run_rtmpdump(basic_args + ['-e'] + [[], ['-k', '1']][retval == RD_FAILED]) cursize = os.path.getsize(encodeFilename(tmpfilename)) if prevsize == cursize and retval == RD_FAILED: diff --git a/youtube_dl/extractor/__init__.py b/youtube_dl/extractor/__init__.py index dd770fdf1..119ec2044 100644 --- a/youtube_dl/extractor/__init__.py +++ b/youtube_dl/extractor/__init__.py 
@@ -1,3 +1,5 @@ +from __future__ import unicode_literals + from .abc import ABCIE from .academicearth import AcademicEarthCourseIE from .addanime import AddAnimeIE @@ -20,19 +22,25 @@ from .arte import ( ArteTVDDCIE, ArteTVEmbedIE, ) +from .audiomack import AudiomackIE from .auengine import AUEngineIE +from .azubu import AzubuIE from .bambuser import BambuserIE, BambuserChannelIE from .bandcamp import BandcampIE, BandcampAlbumIE from .bbccouk import BBCCoUkIE from .beeg import BeegIE from .behindkink import BehindKinkIE +from .bet import BetIE +from .bild import BildIE from .bilibili import BiliBiliIE from .blinkx import BlinkxIE from .bliptv import BlipTVIE, BlipTVUserIE from .bloomberg import BloombergIE +from .bpb import BpbIE from .br import BRIE from .breakcom import BreakIE from .brightcove import BrightcoveIE +from .buzzfeed import BuzzFeedIE from .byutv import BYUtvIE from .c56 import C56IE from .canal13cl import Canal13clIE @@ -43,7 +51,7 @@ from .cbsnews import CBSNewsIE from .ceskatelevize import CeskaTelevizeIE from .channel9 import Channel9IE from .chilloutzone import ChilloutzoneIE -from .cinemassacre import CinemassacreIE +from .cinchcast import CinchcastIE from .clipfish import ClipfishIE from .cliphunter import CliphunterIE from .clipsyndicate import ClipsyndicateIE @@ -57,12 +65,15 @@ from .cnn import ( ) from .collegehumor import CollegeHumorIE from .comedycentral import ComedyCentralIE, ComedyCentralShowsIE +from .comcarcoff import ComCarCoffIE from .condenast import CondeNastIE from .cracked import CrackedIE from .criterion import CriterionIE -from .crunchyroll import CrunchyrollIE +from .crunchyroll import ( + CrunchyrollIE, + CrunchyrollShowPlaylistIE +) from .cspan import CSpanIE -from .d8 import D8IE from .dailymotion import ( DailymotionIE, DailymotionPlaylistIE, @@ -111,7 +122,10 @@ from .fktv import ( FKTVPosteckeIE, ) from .flickr import FlickrIE +from .folketinget import FolketingetIE from .fourtube import FourTubeIE +from .foxgay import FoxgayIE +from .foxnews import FoxNewsIE from .franceculture import FranceCultureIE from .franceinter import FranceInterIE from .francetv import ( @@ -123,6 +137,7 @@ from .francetv import ( ) from .freesound import FreesoundIE from .freespeech import FreespeechIE +from .freevideo import FreeVideoIE from .funnyordie import FunnyOrDieIE from .gamekings import GamekingsIE from .gameone import ( @@ -134,14 +149,18 @@ from .gamestar import GameStarIE from .gametrailers import GametrailersIE from .gdcvault import GDCVaultIE from .generic import GenericIE +from .giantbomb import GiantBombIE +from .glide import GlideIE from .globo import GloboIE from .godtube import GodTubeIE +from .goldenmoustache import GoldenMoustacheIE from .golem import GolemIE from .googleplus import GooglePlusIE from .googlesearch import GoogleSearchIE from .gorillavid import GorillaVidIE from .goshgay import GoshgayIE from .grooveshark import GroovesharkIE +from .groupon import GrouponIE from .hark import HarkIE from .heise import HeiseIE from .helsinki import HelsinkiIE @@ -173,7 +192,6 @@ from .jadorecettepub import JadoreCettePubIE from .jeuxvideo import JeuxVideoIE from .jove import JoveIE from .jukebox import JukeboxIE -from .justintv import JustinTVIE from .jpopsukitv import JpopsukiIE from .kankan import KankanIE from .keezmovies import KeezMoviesIE @@ -184,6 +202,7 @@ from .kontrtube import KontrTubeIE from .krasview import KrasViewIE from .ku6 import Ku6IE from .la7 import LA7IE +from .laola1tv import Laola1TvIE from .lifenews import LifeNewsIE from 
.liveleak import LiveLeakIE from .livestream import ( @@ -204,6 +223,7 @@ from .mdr import MDRIE from .metacafe import MetacafeIE from .metacritic import MetacriticIE from .mgoon import MgoonIE +from .minhateca import MinhatecaIE from .ministrygrid import MinistryGridIE from .mit import TechTVMITIE, MITIE, OCWMITIE from .mitele import MiTeleIE @@ -230,9 +250,10 @@ from .muenchentv import MuenchenTVIE from .musicplayon import MusicPlayOnIE from .musicvault import MusicVaultIE from .muzu import MuzuTVIE -from .myspace import MySpaceIE +from .myspace import MySpaceIE, MySpaceAlbumIE from .myspass import MySpassIE from .myvideo import MyVideoIE +from .myvidster import MyVidsterIE from .naver import NaverIE from .nba import NBAIE from .nbc import ( @@ -246,7 +267,7 @@ from .newstube import NewstubeIE from .nfb import NFBIE from .nfl import NFLIE from .nhl import NHLIE, NHLVideocenterIE -from .niconico import NiconicoIE +from .niconico import NiconicoIE, NiconicoPlaylistIE from .ninegag import NineGagIE from .noco import NocoIE from .normalboots import NormalbootsIE @@ -275,6 +296,7 @@ from .orf import ( from .parliamentliveuk import ParliamentLiveUKIE from .patreon import PatreonIE from .pbs import PBSIE +from .phoenix import PhoenixIE from .photobucket import PhotobucketIE from .planetaplay import PlanetaPlayIE from .played import PlayedIE @@ -288,6 +310,8 @@ from .pornoxo import PornoXOIE from .promptfile import PromptFileIE from .prosiebensat1 import ProSiebenSat1IE from .pyvideo import PyvideoIE +from .quickvid import QuickVidIE +from .radiode import RadioDeIE from .radiofrance import RadioFranceIE from .rai import RaiIE from .rbmaradio import RBMARadioIE @@ -300,6 +324,7 @@ from .roxwel import RoxwelIE from .rtbf import RTBFIE from .rtlnl import RtlXlIE from .rtlnow import RTLnowIE +from .rtp import RTPIE from .rts import RTSIE from .rtve import RTVEALaCartaIE, RTVELiveIE from .ruhd import RUHDIE @@ -315,7 +340,10 @@ from .savefrom import SaveFromIE from .sbs import SBSIE from .scivee import SciVeeIE from .screencast import ScreencastIE +from .screenwavemedia import CinemassacreIE, ScreenwaveMediaIE, TeamFourIE from .servingsys import ServingSysIE +from .sexu import SexuIE +from .sexykarma import SexyKarmaIE from .shared import SharedIE from .sharesix import ShareSixIE from .sina import SinaIE @@ -349,6 +377,7 @@ from .spike import SpikeIE from .sport5 import Sport5IE from .sportbox import SportBoxIE from .sportdeutschland import SportDeutschlandIE +from .srmediathek import SRMediathekIE from .stanfordoc import StanfordOpenClassroomIE from .steam import SteamIE from .streamcloud import StreamcloudIE @@ -359,6 +388,7 @@ from .syfy import SyfyIE from .sztvhu import SztvHuIE from .tagesschau import TagesschauIE from .tapely import TapelyIE +from .tass import TassIE from .teachertube import ( TeacherTubeIE, TeacherTubeUserIE, @@ -367,15 +397,19 @@ from .teachingchannel import TeachingChannelIE from .teamcoco import TeamcocoIE from .techtalks import TechTalksIE from .ted import TEDIE +from .telebruxelles import TeleBruxellesIE +from .telecinco import TelecincoIE from .telemb import TeleMBIE from .tenplay import TenPlayIE from .testurl import TestURLIE from .tf1 import TF1IE +from .theonion import TheOnionIE from .theplatform import ThePlatformIE from .thesixtyone import TheSixtyOneIE from .thisav import ThisAVIE from .tinypic import TinyPicIE from .tlc import TlcIE, TlcDeIE +from .tmz import TMZIE from .tnaflix import TNAFlixIE from .thvideo import ( THVideoIE, @@ -389,11 +423,14 @@ from 
.trutube import TruTubeIE from .tube8 import Tube8IE from .tudou import TudouIE from .tumblr import TumblrIE +from .tunein import TuneInIE from .turbo import TurboIE from .tutv import TutvIE from .tvigle import TvigleIE from .tvp import TvpIE from .tvplay import TVPlayIE +from .twentyfourvideo import TwentyFourVideoIE +from .twitch import TwitchIE from .ubu import UbuIE from .udemy import ( UdemyIE, @@ -409,6 +446,7 @@ from .vesti import VestiIE from .vevo import VevoIE from .vgtv import VGTVIE from .vh1 import VH1IE +from .vice import ViceIE from .viddler import ViddlerIE from .videobam import VideoBamIE from .videodetective import VideoDetectiveIE @@ -419,6 +457,7 @@ from .videopremium import VideoPremiumIE from .videott import VideoTtIE from .videoweed import VideoWeedIE from .vidme import VidmeIE +from .vidzi import VidziIE from .vimeo import ( VimeoIE, VimeoAlbumIE, @@ -435,9 +474,13 @@ from .vine import ( VineUserIE, ) from .viki import VikiIE -from .vk import VKIE +from .vk import ( + VKIE, + VKUserVideosIE, +) from .vodlocker import VodlockerIE from .vporn import VpornIE +from .vrt import VRTIE from .vube import VubeIE from .vuclip import VuClipIE from .vulture import VultureIE @@ -458,6 +501,7 @@ from .wrzuta import WrzutaIE from .xbef import XBefIE from .xboxclips import XboxClipsIE from .xhamster import XHamsterIE +from .xminus import XMinusIE from .xnxx import XNXXIE from .xvideos import XVideosIE from .xtube import XTubeUserIE, XTubeIE @@ -487,9 +531,11 @@ from .youtube import ( YoutubeUserIE, YoutubeWatchLaterIE, ) - -from .zdf import ZDFIE - +from .zdf import ZDFIE, ZDFChannelIE +from .zingmp3 import ( + ZingMp3SongIE, + ZingMp3AlbumIE, +) _ALL_CLASSES = [ klass @@ -508,4 +554,4 @@ def gen_extractors(): def get_info_extractor(ie_name): """Returns the info extractor class with the given ie_name""" - return globals()[ie_name+'IE'] + return globals()[ie_name + 'IE'] diff --git a/youtube_dl/extractor/abc.py b/youtube_dl/extractor/abc.py index 69f89320c..dc0fb85d6 100644 --- a/youtube_dl/extractor/abc.py +++ b/youtube_dl/extractor/abc.py @@ -11,13 +11,13 @@ class ABCIE(InfoExtractor): _VALID_URL = r'http://www\.abc\.net\.au/news/[^/]+/[^/]+/(?P<id>\d+)' _TEST = { - 'url': 'http://www.abc.net.au/news/2014-07-25/bringing-asylum-seekers-to-australia-would-give/5624716', - 'md5': 'dad6f8ad011a70d9ddf887ce6d5d0742', + 'url': 'http://www.abc.net.au/news/2014-11-05/australia-to-staff-ebola-treatment-centre-in-sierra-leone/5868334', + 'md5': 'cb3dd03b18455a661071ee1e28344d9f', 'info_dict': { - 'id': '5624716', + 'id': '5868334', 'ext': 'mp4', - 'title': 'Bringing asylum seekers to Australia would give them right to asylum claims: professor', - 'description': 'md5:ba36fa5e27e5c9251fd929d339aea4af', + 'title': 'Australia to help staff Ebola treatment centre in Sierra Leone', + 'description': 'md5:809ad29c67a05f54eb41f2a105693a67', }, }
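
For reference, the `_ALL_CLASSES` / `get_info_extractor()` machinery at the end of the `extractor/__init__.py` hunk above is what turns the long import list into a working registry: every name ending in `IE` that the imports bring into the module's globals becomes a registered extractor. A minimal usage sketch (the `'Youtube'` lookup and the URL are just illustrative examples):

    from youtube_dl.extractor import gen_extractors, get_info_extractor

    # Look up an extractor class by its bare name, without the 'IE' suffix;
    # this resolves globals()['Youtube' + 'IE'] inside extractor/__init__.py.
    ie_class = get_info_extractor('Youtube')

    # Or instantiate every registered extractor and probe which one claims
    # a given URL (order matters; the first extractor matched handles it).
    url = 'http://www.youtube.com/watch?v=XXXXXXXXXXX'
    suitable = [ie for ie in gen_extractors() if ie.suitable(url)]
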
diff --git a/youtube_dl/extractor/academicearth.py b/youtube_dl/extractor/academicearth.py index c983ef0f5..47313fba8 100644 --- a/youtube_dl/extractor/academicearth.py +++ b/youtube_dl/extractor/academicearth.py @@ -1,4 +1,5 @@ from __future__ import unicode_literals + import re from .common import InfoExtractor @@ -18,15 +19,14 @@ class AcademicEarthCourseIE(InfoExtractor): } def _real_extract(self, url): - m = re.match(self._VALID_URL, url) - playlist_id = m.group('id') + playlist_id = self._match_id(url) webpage = self._download_webpage(url, playlist_id) title = self._html_search_regex( - r'<h1 class="playlist-name"[^>]*?>(.*?)</h1>', webpage, u'title') + r'<h1 class="playlist-name"[^>]*?>(.*?)</h1>', webpage, 'title') description = self._html_search_regex( r'<p class="excerpt"[^>]*?>(.*?)</p>', - webpage, u'description', fatal=False) + webpage, 'description', fatal=False) urls = re.findall( r'<li class="lecture-preview">\s*?<a target="_blank" href="([^"]+)">', webpage)
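
The `self._match_id(url)` call that replaces the `re.match(self._VALID_URL, url)` boilerplate in the hunk above (and again in addanime and aparat below) is a small helper on `InfoExtractor`. Roughly, the implementation in `youtube_dl/extractor/common.py` amounts to this sketch:

    import re

    class InfoExtractor(object):
        # sketch only; the real class carries much more machinery
        @classmethod
        def _match_id(cls, url):
            # compile _VALID_URL once per extractor class and cache it
            if '_VALID_URL_RE' not in cls.__dict__:
                cls._VALID_URL_RE = re.compile(cls._VALID_URL)
            m = cls._VALID_URL_RE.match(url)
            assert m
            # only works when _VALID_URL names its capture group 'id',
            # which is why patterns in this commit move to (?P<id>...)
            return m.group('id')
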
diff --git a/youtube_dl/extractor/addanime.py b/youtube_dl/extractor/addanime.py index fcf296057..203936e54 100644 --- a/youtube_dl/extractor/addanime.py +++ b/youtube_dl/extractor/addanime.py @@ -3,19 +3,19 @@ from __future__ import unicode_literals import re from .common import InfoExtractor -from ..utils import ( +from ..compat import ( compat_HTTPError, compat_str, compat_urllib_parse, compat_urllib_parse_urlparse, - +) +from ..utils import ( ExtractorError, ) class AddAnimeIE(InfoExtractor): - - _VALID_URL = r'^http://(?:\w+\.)?add-anime\.net/watch_video\.php\?(?:.*?)v=(?P<video_id>[\w_]+)(?:.*)' + _VALID_URL = r'^http://(?:\w+\.)?add-anime\.net/watch_video\.php\?(?:.*?)v=(?P<id>[\w_]+)(?:.*)' _TEST = { 'url': 'http://www.add-anime.net/watch_video.php?v=24MR3YO5SAS9', 'md5': '72954ea10bc979ab5e2eb288b21425a0', @@ -28,9 +28,9 @@ class AddAnimeIE(InfoExtractor): } def _real_extract(self, url): + video_id = self._match_id(url) + try: - mobj = re.match(self._VALID_URL, url) - video_id = mobj.group('video_id') webpage = self._download_webpage(url, video_id) except ExtractorError as ee: if not isinstance(ee.cause, compat_HTTPError) or \ @@ -48,7 +48,7 @@ class AddAnimeIE(InfoExtractor): r'a\.value = ([0-9]+)[+]([0-9]+)[*]([0-9]+);', redir_webpage) if av is None: - raise ExtractorError(u'Cannot find redirect math task') + raise ExtractorError('Cannot find redirect math task') av_res = int(av.group(1)) + int(av.group(2)) * int(av.group(3)) parsed_url = compat_urllib_parse_urlparse(url) diff --git a/youtube_dl/extractor/adultswim.py b/youtube_dl/extractor/adultswim.py index b4b40f2d4..39e4ca296 100644 --- a/youtube_dl/extractor/adultswim.py +++ b/youtube_dl/extractor/adultswim.py @@ -2,122 +2,147 @@ from __future__ import unicode_literals import re +import json from .common import InfoExtractor +from ..utils import ( + ExtractorError, +) + class AdultSwimIE(InfoExtractor): - _VALID_URL = r'https?://video\.adultswim\.com/(?P<path>.+?)(?:\.html)?(?:\?.*)?(?:#.*)?$' - _TEST = { - 'url': 'http://video.adultswim.com/rick-and-morty/close-rick-counters-of-the-rick-kind.html?x=y#title', + _VALID_URL = r'https?://(?:www\.)?adultswim\.com/videos/(?P<is_playlist>playlists/)?(?P<show_path>[^/]+)/(?P<episode_path>[^/?#]+)/?' + + _TESTS = [{ + 'url': 'http://adultswim.com/videos/rick-and-morty/pilot', 'playlist': [ { - 'md5': '4da359ec73b58df4575cd01a610ba5dc', + 'md5': '247572debc75c7652f253c8daa51a14d', 'info_dict': { - 'id': '8a250ba1450996e901453d7f02ca02f5', + 'id': 'rQxZvXQ4ROaSOqq-or2Mow-0', 'ext': 'flv', - 'title': 'Rick and Morty Close Rick-Counters of the Rick Kind part 1', - 'description': 'Rick has a run in with some old associates, resulting in a fallout with Morty. You got any chips, broh?', - 'uploader': 'Rick and Morty', - 'thumbnail': 'http://i.cdn.turner.com/asfix/repository/8a250ba13f865824013fc9db8b6b0400/thumbnail_267549017116827057.jpg' - } + 'title': 'Rick and Morty - Pilot Part 1', + 'description': "Rick moves in with his daughter's family and establishes himself as a bad influence on his grandson, Morty. " + }, }, { - 'md5': 'ffbdf55af9331c509d95350bd0cc1819', + 'md5': '77b0e037a4b20ec6b98671c4c379f48d', 'info_dict': { - 'id': '8a250ba1450996e901453d7f4bd102f6', + 'id': 'rQxZvXQ4ROaSOqq-or2Mow-3', 'ext': 'flv', - 'title': 'Rick and Morty Close Rick-Counters of the Rick Kind part 2', - 'description': 'Rick has a run in with some old associates, resulting in a fallout with Morty. 
You got any chips, broh?', - 'uploader': 'Rick and Morty', - 'thumbnail': 'http://i.cdn.turner.com/asfix/repository/8a250ba13f865824013fc9db8b6b0400/thumbnail_267549017116827057.jpg' - } - }, - { - 'md5': 'b92409635540304280b4b6c36bd14a0a', - 'info_dict': { - 'id': '8a250ba1450996e901453d7fa73c02f7', - 'ext': 'flv', - 'title': 'Rick and Morty Close Rick-Counters of the Rick Kind part 3', - 'description': 'Rick has a run in with some old associates, resulting in a fallout with Morty. You got any chips, broh?', - 'uploader': 'Rick and Morty', - 'thumbnail': 'http://i.cdn.turner.com/asfix/repository/8a250ba13f865824013fc9db8b6b0400/thumbnail_267549017116827057.jpg' - } + 'title': 'Rick and Morty - Pilot Part 4', + 'description': "Rick moves in with his daughter's family and establishes himself as a bad influence on his grandson, Morty. " + }, }, + ], + 'info_dict': { + 'title': 'Rick and Morty - Pilot', + 'description': "Rick moves in with his daughter's family and establishes himself as a bad influence on his grandson, Morty. " + } + }, { + 'url': 'http://www.adultswim.com/videos/playlists/american-parenting/putting-francine-out-of-business/', + 'playlist': [ { - 'md5': 'e8818891d60e47b29cd89d7b0278156d', + 'md5': '2eb5c06d0f9a1539da3718d897f13ec5', 'info_dict': { - 'id': '8a250ba1450996e901453d7fc8ba02f8', + 'id': '-t8CamQlQ2aYZ49ItZCFog-0', 'ext': 'flv', - 'title': 'Rick and Morty Close Rick-Counters of the Rick Kind part 4', - 'description': 'Rick has a run in with some old associates, resulting in a fallout with Morty. You got any chips, broh?', - 'uploader': 'Rick and Morty', - 'thumbnail': 'http://i.cdn.turner.com/asfix/repository/8a250ba13f865824013fc9db8b6b0400/thumbnail_267549017116827057.jpg' - } + 'title': 'American Dad - Putting Francine Out of Business', + 'description': 'Stan hatches a plan to get Francine out of the real estate business.Watch more American Dad on [adult swim].' + }, } - ] - } - - _video_extensions = { - '3500': 'flv', - '640': 'mp4', - '150': 'mp4', - 'ipad': 'm3u8', - 'iphone': 'm3u8' - } - _video_dimensions = { - '3500': (1280, 720), - '640': (480, 270), - '150': (320, 180) - } + ], + 'info_dict': { + 'title': 'American Dad - Putting Francine Out of Business', + 'description': 'Stan hatches a plan to get Francine out of the real estate business.Watch more American Dad on [adult swim].' 
+ }, + }] + + @staticmethod + def find_video_info(collection, slug): + for video in collection.get('videos'): + if video.get('slug') == slug: + return video + + @staticmethod + def find_collection_by_linkURL(collections, linkURL): + for collection in collections: + if collection.get('linkURL') == linkURL: + return collection + + @staticmethod + def find_collection_containing_video(collections, slug): + for collection in collections: + for video in collection.get('videos'): + if video.get('slug') == slug: + return collection, video def _real_extract(self, url): mobj = re.match(self._VALID_URL, url) - video_path = mobj.group('path') - - webpage = self._download_webpage(url, video_path) - episode_id = self._html_search_regex( - r'<link rel="video_src" href="http://i\.adultswim\.com/adultswim/adultswimtv/tools/swf/viralplayer\.swf\?id=([0-9a-f]+?)"\s*/?\s*>', - webpage, 'episode_id') - title = self._og_search_title(webpage) - - index_url = 'http://asfix.adultswim.com/asfix-svc/episodeSearch/getEpisodesByIDs?networkName=AS&ids=%s' % episode_id - idoc = self._download_xml(index_url, title, 'Downloading episode index', 'Unable to download episode index') - - episode_el = idoc.find('.//episode') - show_title = episode_el.attrib.get('collectionTitle') - episode_title = episode_el.attrib.get('title') - thumbnail = episode_el.attrib.get('thumbnailUrl') - description = episode_el.find('./description').text.strip() + show_path = mobj.group('show_path') + episode_path = mobj.group('episode_path') + is_playlist = True if mobj.group('is_playlist') else False + + webpage = self._download_webpage(url, episode_path) + + # Extract the value of `bootstrappedData` from the Javascript in the page. + bootstrappedDataJS = self._search_regex(r'var bootstrappedData = ({.*});', webpage, episode_path) + + try: + bootstrappedData = json.loads(bootstrappedDataJS) + except ValueError as ve: + errmsg = '%s: Failed to parse JSON ' % episode_path + raise ExtractorError(errmsg, cause=ve) + + # Downloading videos from a /videos/playlist/ URL needs to be handled differently. 
+ # NOTE: We are only downloading one video (the current one) not the playlist + if is_playlist: + collections = bootstrappedData['playlists']['collections'] + collection = self.find_collection_by_linkURL(collections, show_path) + video_info = self.find_video_info(collection, episode_path) + + show_title = video_info['showTitle'] + segment_ids = [video_info['videoPlaybackID']] + else: + collections = bootstrappedData['show']['collections'] + collection, video_info = self.find_collection_containing_video(collections, episode_path) + + show = bootstrappedData['show'] + show_title = show['title'] + segment_ids = [clip['videoPlaybackID'] for clip in video_info['clips']] + + episode_id = video_info['id'] + episode_title = video_info['title'] + episode_description = video_info['description'] + episode_duration = video_info.get('duration') entries = [] - segment_els = episode_el.findall('./segments/segment') + for part_num, segment_id in enumerate(segment_ids): + segment_url = 'http://www.adultswim.com/videos/api/v0/assets?id=%s&platform=mobile' % segment_id - for part_num, segment_el in enumerate(segment_els): - segment_id = segment_el.attrib.get('id') - segment_title = '%s %s part %d' % (show_title, episode_title, part_num + 1) - thumbnail = segment_el.attrib.get('thumbnailUrl') - duration = segment_el.attrib.get('duration') + segment_title = '%s - %s' % (show_title, episode_title) + if len(segment_ids) > 1: + segment_title += ' Part %d' % (part_num + 1) - segment_url = 'http://asfix.adultswim.com/asfix-svc/episodeservices/getCvpPlaylist?networkName=AS&id=%s' % segment_id idoc = self._download_xml( segment_url, segment_title, 'Downloading segment information', 'Unable to download segment information') + segment_duration = idoc.find('.//trt').text.strip() + formats = [] file_els = idoc.findall('.//files/file') for file_el in file_els: bitrate = file_el.attrib.get('bitrate') - type = file_el.attrib.get('type') - width, height = self._video_dimensions.get(bitrate, (None, None)) + ftype = file_el.attrib.get('type') + formats.append({ - 'format_id': '%s-%s' % (bitrate, type), - 'url': file_el.text, - 'ext': self._video_extensions.get(bitrate, 'mp4'), + 'format_id': '%s_%s' % (bitrate, ftype), + 'url': file_el.text.strip(), # The bitrate may not be a number (for example: 'iphone') 'tbr': int(bitrate) if bitrate.isdigit() else None, - 'height': height, - 'width': width + 'quality': 1 if ftype == 'hd' else -1 }) self._sort_formats(formats) @@ -126,18 +151,16 @@ class AdultSwimIE(InfoExtractor): 'id': segment_id, 'title': segment_title, 'formats': formats, - 'uploader': show_title, - 'thumbnail': thumbnail, - 'duration': duration, - 'description': description + 'duration': segment_duration, + 'description': episode_description }) return { '_type': 'playlist', 'id': episode_id, - 'display_id': video_path, + 'display_id': episode_path, 'entries': entries, - 'title': '%s %s' % (show_title, episode_title), - 'description': description, - 'thumbnail': thumbnail + 'title': '%s - %s' % (show_title, episode_title), + 'description': episode_description, + 'duration': episode_duration } diff --git a/youtube_dl/extractor/allocine.py b/youtube_dl/extractor/allocine.py index 7bd797884..623aeaf34 100644 --- a/youtube_dl/extractor/allocine.py +++ b/youtube_dl/extractor/allocine.py @@ -5,10 +5,9 @@ import re import json from .common import InfoExtractor +from ..compat import compat_str from ..utils import ( - compat_str, qualities, - determine_ext, ) @@ -22,7 +21,7 @@ class AllocineIE(InfoExtractor): 'id': '19546517', 
'ext': 'mp4', 'title': 'Astérix - Le Domaine des Dieux Teaser VF', - 'description': 'md5:4a754271d9c6f16c72629a8a993ee884', + 'description': 'md5:abcd09ce503c6560512c14ebfdb720d2', 'thumbnail': 're:http://.*\.jpg', }, }, { @@ -75,9 +74,7 @@ class AllocineIE(InfoExtractor): 'format_id': format_id, 'quality': quality(format_id), 'url': v, - 'ext': determine_ext(v), }) - self._sort_formats(formats) return { diff --git a/youtube_dl/extractor/aol.py b/youtube_dl/extractor/aol.py index 47f8e4157..b51eafc45 100644 --- a/youtube_dl/extractor/aol.py +++ b/youtube_dl/extractor/aol.py @@ -3,7 +3,6 @@ from __future__ import unicode_literals import re from .common import InfoExtractor -from .fivemin import FiveMinIE class AolIE(InfoExtractor): @@ -42,31 +41,30 @@ class AolIE(InfoExtractor): def _real_extract(self, url): mobj = re.match(self._VALID_URL, url) video_id = mobj.group('id') - playlist_id = mobj.group('playlist_id') - if playlist_id and not self._downloader.params.get('noplaylist'): - self.to_screen('Downloading playlist %s - add --no-playlist to just download video %s' % (playlist_id, video_id)) + if not playlist_id or self._downloader.params.get('noplaylist'): + return self.url_result('5min:%s' % video_id) - webpage = self._download_webpage(url, playlist_id) - title = self._html_search_regex( - r'
<h1 class="video-title[^"]*">(.+?)</h1>
', webpage, 'title') - playlist_html = self._search_regex( - r"(?s)<ul\s+class='video-related[^']*'>(.*?)</ul>", webpage, - 'playlist HTML') - entries = [{ - '_type': 'url', - 'url': 'aol-video:%s' % m.group('id'), - 'ie_key': 'Aol', - } for m in re.finditer( - r"<a\s+href='.*videoid=(?P<id>[0-9]+)'\s+class='video-thumb'>", - playlist_html)] + self.to_screen('Downloading playlist %s - add --no-playlist to just download video %s' % (playlist_id, video_id)) - return { - '_type': 'playlist', - 'id': playlist_id, - 'display_id': mobj.group('playlist_display_id'), - 'title': title, - 'entries': entries, - } + webpage = self._download_webpage(url, playlist_id) + title = self._html_search_regex( + r'
<h1 class="video-title[^"]*">(.+?)</h1>
', webpage, 'title') + playlist_html = self._search_regex( + r"(?s)<ul\s+class='video-related[^']*'>(.*?)</ul>", webpage, + 'playlist HTML') + entries = [{ + '_type': 'url', + 'url': 'aol-video:%s' % m.group('id'), + 'ie_key': 'Aol', + } for m in re.finditer( + r"<a\s+href='.*videoid=(?P<id>[0-9]+)'\s+class='video-thumb'>", + playlist_html)] - return FiveMinIE._build_result(video_id) + return { + '_type': 'playlist', + 'id': playlist_id, + 'display_id': mobj.group('playlist_display_id'), + 'title': title, + 'entries': entries, + } diff --git a/youtube_dl/extractor/aparat.py b/youtube_dl/extractor/aparat.py index 748608826..15006336f 100644 --- a/youtube_dl/extractor/aparat.py +++ b/youtube_dl/extractor/aparat.py @@ -1,5 +1,4 @@ -#coding: utf-8 - +# coding: utf-8 from __future__ import unicode_literals import re @@ -26,8 +25,7 @@ class AparatIE(InfoExtractor): } def _real_extract(self, url): - m = re.match(self._VALID_URL, url) - video_id = m.group('id') + video_id = self._match_id(url) # Note: There is an easier-to-parse configuration at # http://www.aparat.com/video/video/config/videohash/%video_id @@ -40,15 +38,15 @@ class AparatIE(InfoExtractor): for i, video_url in enumerate(video_urls): req = HEADRequest(video_url) res = self._request_webpage( - req, video_id, note=u'Testing video URL %d' % i, errnote=False) + req, video_id, note='Testing video URL %d' % i, errnote=False) if res: break else: - raise ExtractorError(u'No working video URLs found') + raise ExtractorError('No working video URLs found') - title = self._search_regex(r'\s+title:\s*"([^"]+)"', webpage, u'title') + title = self._search_regex(r'\s+title:\s*"([^"]+)"', webpage, 'title') thumbnail = self._search_regex( - r'\s+image:\s*"([^"]+)"', webpage, u'thumbnail', fatal=False) + r'\s+image:\s*"([^"]+)"', webpage, 'thumbnail', fatal=False) return { 'id': video_id, diff --git a/youtube_dl/extractor/appletrailers.py b/youtube_dl/extractor/appletrailers.py index 4359b88d1..7cd0482c7 100644 --- a/youtube_dl/extractor/appletrailers.py +++ b/youtube_dl/extractor/appletrailers.py @@ -4,8 +4,8 @@ import re import json from .common import InfoExtractor +from ..compat import compat_urlparse from ..utils import ( - compat_urlparse, int_or_none, ) @@ -70,15 +70,17 @@ class AppleTrailersIE(InfoExtractor): uploader_id = mobj.group('company') playlist_url = compat_urlparse.urljoin(url, 'includes/playlists/itunes.inc') + def fix_html(s): s = re.sub(r'(?s)<script[^<]*?>.*?</script>', '', s) s = re.sub(r'<img ([^<]*?)>', r'<img \1/>', s) # The ' in the onClick attributes are not escaped, it couldn't be parsed # like: http://trailers.apple.com/trailers/wb/gravity/ + def _clean_json(m): return 'iTunes.playURL(%s);' % m.group(1).replace('\'', '&#39;') s = re.sub(self._JSON_RE, _clean_json, s) - s = '<html>' + s + u'</html>' + s = '<html>%s</html>' % s return s doc = self._download_xml(playlist_url, movie, transform_source=fix_html) @@ -86,7 +88,7 @@ class AppleTrailersIE(InfoExtractor): for li in doc.findall('./div/ul/li'): on_click = li.find('.//a').attrib['onClick'] trailer_info_json = self._search_regex(self._JSON_RE, - on_click, 'trailer info') + on_click, 'trailer info') trailer_info = json.loads(trailer_info_json) title = trailer_info['title'] video_id = movie + '-' + re.sub(r'[^a-zA-Z0-9]', '', title).lower() diff --git a/youtube_dl/extractor/ard.py b/youtube_dl/extractor/ard.py index 8de9c11ea..967bd865c 100644 --- a/youtube_dl/extractor/ard.py +++ b/youtube_dl/extractor/ard.py @@ -4,6 +4,7 @@ from __future__ import unicode_literals import re from .common import InfoExtractor +from .generic import GenericIE from ..utils import ( determine_ext, ExtractorError, @@ -12,6 +13,7
@@ from ..utils import ( parse_duration, unified_strdate, xpath_text, + parse_xml, ) @@ -54,6 +56,11 @@ class ARDMediathekIE(InfoExtractor): if '>Der gewünschte Beitrag ist nicht mehr verfügbar.<' in webpage: raise ExtractorError('Video %s is no longer available' % video_id, expected=True) + if re.search(r'[\?&]rss($|[=&])', url): + doc = parse_xml(webpage) + if doc.tag == 'rss': + return GenericIE()._extract_rss(url, video_id, doc) + title = self._html_search_regex( [r'<h1(?:\s+class="boxIndex")?>(.*?)</h1>', r'<meta name="dcterms.title" content="(.*?)"/>', @@ -185,4 +192,3 @@ class ARDIE(InfoExtractor): 'upload_date': upload_date, 'thumbnail': thumbnail, } - diff --git a/youtube_dl/extractor/arte.py b/youtube_dl/extractor/arte.py index c3d02f85e..219631b9b 100644 --- a/youtube_dl/extractor/arte.py +++ b/youtube_dl/extractor/arte.py @@ -5,16 +5,15 @@ import re from .common import InfoExtractor from ..utils import ( - ExtractorError, find_xpath_attr, unified_strdate, - determine_ext, get_element_by_id, - compat_str, get_element_by_attribute, + int_or_none, + qualities, ) -# There are different sources of video in arte.tv, the extraction process +# There are different sources of video in arte.tv, the extraction process # is different for each one. The videos usually expire in 7 days, so we can't # add tests. @@ -90,92 +89,66 @@ class ArteTVPlus7IE(InfoExtractor): if not upload_date_str: upload_date_str = player_info.get('VDA', '').split(' ')[0] + title = player_info['VTI'].strip() + subtitle = player_info.get('VSU', '').strip() + if subtitle: + title += ' - %s' % subtitle + info_dict = { 'id': player_info['VID'], - 'title': player_info['VTI'], + 'title': title, 'description': player_info.get('VDE'), 'upload_date': unified_strdate(upload_date_str), 'thumbnail': player_info.get('programImage') or player_info.get('VTU', {}).get('IUR'), } - - all_formats = player_info['VSR'].values() - # Some formats use the m3u8 protocol - all_formats = list(filter(lambda f: f.get('videoFormat') != 'M3U8', all_formats)) - def _match_lang(f): - if f.get('versionCode') is None: - return True - # Return true if that format is in the language of the url - if lang == 'fr': - l = 'F' - elif lang == 'de': - l = 'A' - else: - l = lang - regexes = [r'VO?%s' % l, r'VO?.-ST%s' % l] - return any(re.match(r, f['versionCode']) for r in regexes) - # Some formats may not be in the same language as the url - # TODO: Might want not to drop videos that does not match requested language - # but to process those formats with lower precedence - formats = filter(_match_lang, all_formats) - formats = list(formats) # in python3 filter returns an iterator - if not formats: - # Some videos are only available in the 'Originalversion' - # they aren't tagged as being in French or German - # Sometimes there are neither videos of requested lang code - # nor original version videos available - # For such cases we just take all_formats as is - formats = all_formats - if not formats: - raise ExtractorError('The formats list is empty') - - if re.match(r'[A-Z]Q', formats[0]['quality']) is not None: - def sort_key(f): - return ['HQ', 'MQ', 'EQ', 'SQ'].index(f['quality']) - else: - def sort_key(f): - versionCode = f.get('versionCode') - if versionCode is None: - versionCode = '' - return ( - # Sort first by quality - int(f.get('height', -1)), - int(f.get('bitrate', -1)), - # The original version with subtitles has lower relevance - re.match(r'VO-ST(F|A)', versionCode) is None, - # The version with sourds/mal subtitles has also lower relevance - re.match(r'VO?(F|A)-STM\1', versionCode) is None, - # Prefer http downloads over
m3u8 - 0 if f['url'].endswith('m3u8') else 1, - ) - formats = sorted(formats, key=sort_key) - def _format(format_info): - quality = '' - height = format_info.get('height') - if height is not None: - quality = compat_str(height) - bitrate = format_info.get('bitrate') - if bitrate is not None: - quality += '-%d' % bitrate - if format_info.get('versionCode') is not None: - format_id = '%s-%s' % (quality, format_info['versionCode']) - else: - format_id = quality - info = { + qfunc = qualities(['HQ', 'MQ', 'EQ', 'SQ']) + + formats = [] + for format_id, format_dict in player_info['VSR'].items(): + f = dict(format_dict) + versionCode = f.get('versionCode') + + langcode = { + 'fr': 'F', + 'de': 'A', + }.get(lang, lang) + lang_rexs = [r'VO?%s' % langcode, r'VO?.-ST%s' % langcode] + lang_pref = ( + None if versionCode is None else ( + 10 if any(re.match(r, versionCode) for r in lang_rexs) + else -10)) + source_pref = 0 + if versionCode is not None: + # The original version with subtitles has lower relevance + if re.match(r'VO-ST(F|A)', versionCode): + source_pref -= 10 + # The version with sourds/mal subtitles has also lower relevance + elif re.match(r'VO?(F|A)-STM\1', versionCode): + source_pref -= 9 + format = { 'format_id': format_id, - 'format_note': format_info.get('versionLibelle'), - 'width': format_info.get('width'), - 'height': height, + 'preference': -10 if f.get('videoFormat') == 'M3U8' else None, + 'language_preference': lang_pref, + 'format_note': '%s, %s' % (f.get('versionCode'), f.get('versionLibelle')), + 'width': int_or_none(f.get('width')), + 'height': int_or_none(f.get('height')), + 'tbr': int_or_none(f.get('bitrate')), + 'quality': qfunc(f['quality']), + 'source_preference': source_pref, } - if format_info['mediaType'] == 'rtmp': - info['url'] = format_info['streamer'] - info['play_path'] = 'mp4:' + format_info['url'] - info['ext'] = 'flv' + + if f.get('mediaType') == 'rtmp': + format['url'] = f['streamer'] + format['play_path'] = 'mp4:' + f['url'] + format['ext'] = 'flv' else: - info['url'] = format_info['url'] - info['ext'] = determine_ext(info['url']) - return info - info_dict['formats'] = [_format(f) for f in formats] + format['url'] = f['url'] + + formats.append(format) + + self._sort_formats(formats) + info_dict['formats'] = formats return info_dict diff --git a/youtube_dl/extractor/audiomack.py b/youtube_dl/extractor/audiomack.py new file mode 100644 index 000000000..622b20989 --- /dev/null +++ b/youtube_dl/extractor/audiomack.py @@ -0,0 +1,69 @@ +# coding: utf-8 +from __future__ import unicode_literals + +from .common import InfoExtractor +from .soundcloud import SoundcloudIE +from ..utils import ExtractorError + +import time + + +class AudiomackIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?audiomack\.com/song/(?P<id>[\w/-]+)' + IE_NAME = 'audiomack' + _TESTS = [ + # hosted on audiomack + { + 'url': 'http://www.audiomack.com/song/roosh-williams/extraordinary', + 'info_dict': + { + 'id': 'roosh-williams/extraordinary', + 'ext': 'mp3', + 'title': 'Roosh Williams - Extraordinary' + } + }, + # hosted on soundcloud via audiomack + { + 'add_ie': ['Soundcloud'], + 'url': 'http://www.audiomack.com/song/xclusiveszone/take-kare', + 'info_dict': { + 'id': '172419696', + 'ext': 'mp3', + 'description': 'md5:1fc3272ed7a635cce5be1568c2822997', + 'title': 'Young Thug ft Lil Wayne - Take Kare', + 'uploader': 'Young Thug World', + 'upload_date': '20141016', + } + }, + ] + + def _real_extract(self, url): + video_id = self._match_id(url) + + api_response = self._download_json(
"http://www.audiomack.com/api/music/url/song/%s?_=%d" % ( + video_id, time.time()), + video_id) + + if "url" not in api_response: + raise ExtractorError("Unable to deduce api url of song") + realurl = api_response["url"] + + # Audiomack wraps a lot of soundcloud tracks in their branded wrapper + # - if so, pass the work off to the soundcloud extractor + if SoundcloudIE.suitable(realurl): + return {'_type': 'url', 'url': realurl, 'ie_key': 'Soundcloud'} + + webpage = self._download_webpage(url, video_id) + artist = self._html_search_regex( + r'(.*?)', webpage, "artist") + songtitle = self._html_search_regex( + r'
<h1 class="post-title"><span class="artist">.*?</span>(.*?)</h1>
', + webpage, "title") + title = artist + " - " + songtitle + + return { + 'id': video_id, + 'title': title, + 'url': realurl, + } diff --git a/youtube_dl/extractor/auengine.py b/youtube_dl/extractor/auengine.py index 20bf12550..014a21952 100644 --- a/youtube_dl/extractor/auengine.py +++ b/youtube_dl/extractor/auengine.py @@ -3,8 +3,8 @@ from __future__ import unicode_literals import re from .common import InfoExtractor +from ..compat import compat_urllib_parse from ..utils import ( - compat_urllib_parse, determine_ext, ExtractorError, ) @@ -24,8 +24,7 @@ class AUEngineIE(InfoExtractor): } def _real_extract(self, url): - mobj = re.match(self._VALID_URL, url) - video_id = mobj.group('id') + video_id = self._match_id(url) webpage = self._download_webpage(url, video_id) title = self._html_search_regex(r'<title>(?P<title>.+?)</title>', webpage, 'title') diff --git a/youtube_dl/extractor/azubu.py b/youtube_dl/extractor/azubu.py new file mode 100644 index 000000000..0961d339f --- /dev/null +++ b/youtube_dl/extractor/azubu.py @@ -0,0 +1,93 @@ +from __future__ import unicode_literals + +import json + +from .common import InfoExtractor +from ..utils import float_or_none + + +class AzubuIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?azubu\.tv/[^/]+#!/play/(?P<id>\d+)' + _TESTS = [ + { + 'url': 'http://www.azubu.tv/GSL#!/play/15575/2014-hot6-cup-last-big-match-ro8-day-1', + 'md5': 'a88b42fcf844f29ad6035054bd9ecaf4', + 'info_dict': { + 'id': '15575', + 'ext': 'mp4', + 'title': '2014 HOT6 CUP LAST BIG MATCH Ro8 Day 1', + 'description': 'md5:d06bdea27b8cc4388a90ad35b5c66c01', + 'thumbnail': 're:^https?://.*\.jpe?g', + 'timestamp': 1417523507.334, + 'upload_date': '20141202', + 'duration': 9988.7, + 'uploader': 'GSL', + 'uploader_id': 414310, + 'view_count': int, + }, + }, + { + 'url': 'http://www.azubu.tv/FnaticTV#!/play/9344/-fnatic-at-worlds-2014:-toyz---%22i-love-rekkles,-he-has-amazing-mechanics%22-', + 'md5': 'b72a871fe1d9f70bd7673769cdb3b925', + 'info_dict': { + 'id': '9344', + 'ext': 'mp4', + 'title': 'Fnatic at Worlds 2014: Toyz - "I love Rekkles, he has amazing mechanics"', + 'description': 'md5:4a649737b5f6c8b5c5be543e88dc62af', + 'thumbnail': 're:^https?://.*\.jpe?g', + 'timestamp': 1410530893.320, + 'upload_date': '20140912', + 'duration': 172.385, + 'uploader': 'FnaticTV', + 'uploader_id': 272749, + 'view_count': int, + }, + }, + ] + + def _real_extract(self, url): + video_id = self._match_id(url) + + data = self._download_json( + 'http://www.azubu.tv/api/video/%s' % video_id, video_id)['data'] + + title = data['title'].strip() + description = data['description'] + thumbnail = data['thumbnail'] + view_count = data['view_count'] + uploader = data['user']['username'] + uploader_id = data['user']['id'] + + stream_params = json.loads(data['stream_params']) + + timestamp = float_or_none(stream_params['creationDate'], 1000) + duration = float_or_none(stream_params['length'], 1000) + + renditions = stream_params.get('renditions') or [] + video = stream_params.get('FLVFullLength') or stream_params.get('videoFullLength') + if video: + renditions.append(video) + + formats = [{ + 'url': fmt['url'], + 'width': fmt['frameWidth'], + 'height': fmt['frameHeight'], + 'vbr': float_or_none(fmt['encodingRate'], 1000), + 'filesize': fmt['size'], + 'vcodec': fmt['videoCodec'], + 'container': fmt['videoContainer'], + } for fmt in renditions if fmt['url']] + self._sort_formats(formats) + + return { + 'id': video_id, + 'title': title, + 'description': description, + 'thumbnail': thumbnail, + 'timestamp': timestamp,
'duration': duration, + 'uploader': uploader, + 'uploader_id': uploader_id, + 'view_count': view_count, + 'formats': formats, + } diff --git a/youtube_dl/extractor/bambuser.py b/youtube_dl/extractor/bambuser.py index de5d4faf3..98e1443ab 100644 --- a/youtube_dl/extractor/bambuser.py +++ b/youtube_dl/extractor/bambuser.py @@ -5,7 +5,7 @@ import json import itertools from .common import InfoExtractor -from ..utils import ( +from ..compat import ( compat_urllib_request, ) @@ -18,7 +18,7 @@ class BambuserIE(InfoExtractor): _TEST = { 'url': 'http://bambuser.com/v/4050584', # MD5 seems to be flaky, see https://travis-ci.org/rg3/youtube-dl/jobs/14051016#L388 - #u'md5': 'fba8f7693e48fd4e8641b3fd5539a641', + # 'md5': 'fba8f7693e48fd4e8641b3fd5539a641', 'info_dict': { 'id': '4050584', 'ext': 'flv', @@ -38,7 +38,7 @@ class BambuserIE(InfoExtractor): mobj = re.match(self._VALID_URL, url) video_id = mobj.group('id') info_url = ('http://player-c.api.bambuser.com/getVideo.json?' - '&api_key=%s&vid=%s' % (self._API_KEY, video_id)) + '&api_key=%s&vid=%s' % (self._API_KEY, video_id)) info_json = self._download_webpage(info_url, video_id) info = json.loads(info_json)['result'] @@ -73,10 +73,11 @@ class BambuserChannelIE(InfoExtractor): urls = [] last_id = '' for i in itertools.count(1): - req_url = ('http://bambuser.com/xhr-api/index.php?username={user}' + req_url = ( + 'http://bambuser.com/xhr-api/index.php?username={user}' '&sort=created&access_mode=0%2C1%2C2&limit={count}' '&method=broadcast&format=json&vid_older_than={last}' - ).format(user=user, count=self._STEP, last=last_id) + ).format(user=user, count=self._STEP, last=last_id) req = compat_urllib_request.Request(req_url) # Without setting this header, we wouldn't get any result req.add_header('Referer', 'http://bambuser.com/channel/%s' % user) diff --git a/youtube_dl/extractor/bandcamp.py b/youtube_dl/extractor/bandcamp.py index c13446665..9fb770cb1 100644 --- a/youtube_dl/extractor/bandcamp.py +++ b/youtube_dl/extractor/bandcamp.py @@ -4,9 +4,11 @@ import json import re from .common import InfoExtractor -from ..utils import ( +from ..compat import ( compat_str, compat_urlparse, +) +from ..utils import ( ExtractorError, ) @@ -83,12 +85,12 @@ class BandcampIE(InfoExtractor): initial_url = mp3_info['url'] re_url = r'(?P<server>http://(.*?)\.bandcamp\.com)/download/track\?enc=mp3-320&fsig=(?P<fsig>.*?)&id=(?P<id>.*?)&ts=(?P<ts>.*)$' m_url = re.match(re_url, initial_url) - #We build the url we will use to get the final track url + # We build the url we will use to get the final track url # This url is build in Bandcamp in the script download_bunde_*.js request_url = '%s/statdownload/track?enc=mp3-320&fsig=%s&id=%s&ts=%s&.rand=665028774616&.vrs=1' % (m_url.group('server'), m_url.group('fsig'), video_id, m_url.group('ts')) final_url_webpage = self._download_webpage(request_url, video_id, 'Requesting download url') # If we could correctly generate the .rand field the url would be - #in the "download_url" key + # in the "download_url" key final_url = re.search(r'"retry_url":"(.*?)"', final_url_webpage).group(1) return { @@ -110,20 +112,25 @@ class BandcampAlbumIE(InfoExtractor): 'url': 'http://blazo.bandcamp.com/album/jazz-format-mixtape-vol-1', 'playlist': [ { - 'file': '1353101989.mp3', 'md5': '39bc1eded3476e927c724321ddf116cf', 'info_dict': { + 'id': '1353101989', + 'ext': 'mp3', 'title': 'Intro', } }, { - 'file': '38097443.mp3', 'md5': '1a2c32e2691474643e912cc6cd4bffaa', 'info_dict': { + 'id': '38097443', + 'ext': 'mp3', 'title': 'Kero One - Keep It Alive (Blazo remix)', } },
], + 'info_dict': { + 'title': 'Jazz Format Mixtape vol.1', + }, 'params': { 'playlistend': 2 }, diff --git a/youtube_dl/extractor/bbccouk.py b/youtube_dl/extractor/bbccouk.py index 75e608f99..01c02d360 100644 --- a/youtube_dl/extractor/bbccouk.py +++ b/youtube_dl/extractor/bbccouk.py @@ -1,9 +1,10 @@ from __future__ import unicode_literals -import re +import xml.etree.ElementTree from .subtitles import SubtitlesInfoExtractor from ..utils import ExtractorError +from ..compat import compat_HTTPError class BBCCoUkIE(SubtitlesInfoExtractor): @@ -55,7 +56,22 @@ class BBCCoUkIE(SubtitlesInfoExtractor): 'skip_download': True, }, 'skip': 'Currently BBC iPlayer TV programmes are available to play in the UK only', - } + }, + { + 'url': 'http://www.bbc.co.uk/iplayer/episode/p026c7jt/tomorrows-worlds-the-unearthly-history-of-science-fiction-2-invasion', + 'info_dict': { + 'id': 'b03k3pb7', + 'ext': 'flv', + 'title': "Tomorrow's Worlds: The Unearthly History of Science Fiction", + 'description': '2. Invasion', + 'duration': 3600, + }, + 'params': { + # rtmp download + 'skip_download': True, + }, + 'skip': 'Currently BBC iPlayer TV programmes are available to play in the UK only', + }, ] def _extract_asx_playlist(self, connection, programme_id): @@ -102,6 +118,10 @@ class BBCCoUkIE(SubtitlesInfoExtractor): return playlist.findall('./{http://bbc.co.uk/2008/emp/playlist}item') def _extract_medias(self, media_selection): + error = media_selection.find('./{http://bbc.co.uk/2008/mp/mediaselection}error') + if error is not None: + raise ExtractorError( + '%s returned error: %s' % (self.IE_NAME, error.get('id')), expected=True) return media_selection.findall('./{http://bbc.co.uk/2008/mp/mediaselection}media') def _extract_connections(self, media): @@ -158,54 +178,73 @@ class BBCCoUkIE(SubtitlesInfoExtractor): subtitles[lang] = srt return subtitles - def _real_extract(self, url): - mobj = re.match(self._VALID_URL, url) - group_id = mobj.group('id') - - webpage = self._download_webpage(url, group_id, 'Downloading video page') - if re.search(r'id="emp-error" class="notinuk">', webpage): - raise ExtractorError('Currently BBC iPlayer TV programmes are available to play in the UK only', - expected=True) - - playlist = self._download_xml('http://www.bbc.co.uk/iplayer/playlist/%s' % group_id, group_id, - 'Downloading playlist XML') - - no_items = playlist.find('./{http://bbc.co.uk/2008/emp/playlist}noItems') - if no_items is not None: - reason = no_items.get('reason') - if reason == 'preAvailability': - msg = 'Episode %s is not yet available' % group_id - elif reason == 'postAvailability': - msg = 'Episode %s is no longer available' % group_id + def _download_media_selector(self, programme_id): + try: + media_selection = self._download_xml( + 'http://open.live.bbc.co.uk/mediaselector/5/select/version/2.0/mediaset/pc/vpid/%s' % programme_id, + programme_id, 'Downloading media selection XML') + except ExtractorError as ee: + if isinstance(ee.cause, compat_HTTPError) and ee.cause.code == 403: + media_selection = xml.etree.ElementTree.fromstring(ee.cause.read().encode('utf-8')) else: - msg = 'Episode %s is not available: %s' % (group_id, reason) - raise ExtractorError(msg, expected=True) + raise formats = [] subtitles = None - for item in self._extract_items(playlist): - kind = item.get('kind') - if kind != 'programme' and kind != 'radioProgramme': - continue - title = playlist.find('./{http://bbc.co.uk/2008/emp/playlist}title').text - description = playlist.find('./{http://bbc.co.uk/2008/emp/playlist}summary').text + 
for media in self._extract_medias(media_selection): + kind = media.get('kind') + if kind == 'audio': + formats.extend(self._extract_audio(media, programme_id)) + elif kind == 'video': + formats.extend(self._extract_video(media, programme_id)) + elif kind == 'captions': + subtitles = self._extract_captions(media, programme_id) - programme_id = item.get('identifier') - duration = int(item.get('duration')) + return formats, subtitles - media_selection = self._download_xml( - 'http://open.live.bbc.co.uk/mediaselector/5/select/version/2.0/mediaset/pc/vpid/%s' % programme_id, - programme_id, 'Downloading media selection XML') + def _real_extract(self, url): + group_id = self._match_id(url) + + webpage = self._download_webpage(url, group_id, 'Downloading video page') - for media in self._extract_medias(media_selection): - kind = media.get('kind') - if kind == 'audio': - formats.extend(self._extract_audio(media, programme_id)) - elif kind == 'video': - formats.extend(self._extract_video(media, programme_id)) - elif kind == 'captions': - subtitles = self._extract_captions(media, programme_id) + programme_id = self._search_regex( + r'"vpid"\s*:\s*"([\da-z]{8})"', webpage, 'vpid', fatal=False) + if programme_id: + player = self._download_json( + 'http://www.bbc.co.uk/iplayer/episode/%s.json' % group_id, + group_id)['jsConf']['player'] + title = player['title'] + description = player['subtitle'] + duration = player['duration'] + formats, subtitles = self._download_media_selector(programme_id) + else: + playlist = self._download_xml( + 'http://www.bbc.co.uk/iplayer/playlist/%s' % group_id, + group_id, 'Downloading playlist XML') + + no_items = playlist.find('./{http://bbc.co.uk/2008/emp/playlist}noItems') + if no_items is not None: + reason = no_items.get('reason') + if reason == 'preAvailability': + msg = 'Episode %s is not yet available' % group_id + elif reason == 'postAvailability': + msg = 'Episode %s is no longer available' % group_id + elif reason == 'noMedia': + msg = 'Episode %s is not currently available' % group_id + else: + msg = 'Episode %s is not available: %s' % (group_id, reason) + raise ExtractorError(msg, expected=True) + + for item in self._extract_items(playlist): + kind = item.get('kind') + if kind != 'programme' and kind != 'radioProgramme': + continue + title = playlist.find('./{http://bbc.co.uk/2008/emp/playlist}title').text + description = playlist.find('./{http://bbc.co.uk/2008/emp/playlist}summary').text + programme_id = item.get('identifier') + duration = int(item.get('duration')) + formats, subtitles = self._download_media_selector(programme_id) if self._downloader.params.get('listsubtitles', False): self._list_available_subtitles(programme_id, subtitles) @@ -220,4 +259,4 @@ class BBCCoUkIE(SubtitlesInfoExtractor): 'duration': duration, 'formats': formats, 'subtitles': subtitles, - } \ No newline at end of file + } diff --git a/youtube_dl/extractor/beeg.py b/youtube_dl/extractor/beeg.py index 314e37f8b..4e79fea8f 100644 --- a/youtube_dl/extractor/beeg.py +++ b/youtube_dl/extractor/beeg.py @@ -40,7 +40,7 @@ class BeegIE(InfoExtractor): title = self._html_search_regex( r'<title>([^<]+)\s*-\s*beeg\.?</title>', webpage, 'title') - + description = self._html_search_regex( r'<meta name="description" content="([^"]*)"', webpage, 'description', fatal=False) [...] diff --git a/youtube_dl/extractor/behindkink.py b/youtube_dl/extractor/behindkink.py [...] _VALID_URL = r'https?://(?:www\.)?behindkink\.com/(?P<year>[0-9]{4})/(?P<month>[0-9]{2})/(?P<day>[0-9]{2})/(?P<id>[^/#?_]+)' _TEST = { - 'url': 'http://www.behindkink.com/2014/08/14/ab1576-performers-voice-finally-heard-the-bill-is-killed/', - 'md5': '41ad01222b8442089a55528fec43ec01', + 'url': 'http://www.behindkink.com/2014/12/05/what-are-you-passionate-about-marley-blaze/', + 'md5':
'507b57d8fdcd75a41a9a7bdb7989c762', 'info_dict': { - 'id': '36370', + 'id': '37127', 'ext': 'mp4', - 'title': 'AB1576 - PERFORMERS VOICE FINALLY HEARD - THE BILL IS KILLED!', - 'description': 'The adult industry voice was finally heard as Assembly Bill 1576 remained\xa0 in suspense today at the Senate Appropriations Hearing. AB1576 was, among other industry damaging issues, a condom mandate...', - 'upload_date': '20140814', - 'thumbnail': 'http://www.behindkink.com/wp-content/uploads/2014/08/36370_AB1576_Win.jpg', + 'title': 'What are you passionate about – Marley Blaze', + 'description': 'md5:aee8e9611b4ff70186f752975d9b94b4', + 'upload_date': '20141205', + 'thumbnail': 'http://www.behindkink.com/wp-content/uploads/2014/12/blaze-1.jpg', 'age_limit': 18, } } @@ -26,26 +26,19 @@ class BehindKinkIE(InfoExtractor): def _real_extract(self, url): mobj = re.match(self._VALID_URL, url) display_id = mobj.group('id') - year = mobj.group('year') - month = mobj.group('month') - day = mobj.group('day') - upload_date = year + month + day webpage = self._download_webpage(url, display_id) video_url = self._search_regex( - r"'file':\s*'([^']+)'", - webpage, 'URL base') - - video_id = url_basename(video_url) - video_id = video_id.split('_')[0] + r'<source src="([^"]+)"', webpage, 'video URL') [...] diff --git a/youtube_dl/extractor/bet.py b/youtube_dl/extractor/bet.py new file mode 100644 [...] + _VALID_URL = r'https?://(?:www\.)?bet\.com/(?:[^/]+/)+(?P<id>.+?)\.html' _TESTS = [ + { + 'url': 'http://www.bet.com/news/politics/2014/12/08/in-bet-exclusive-obama-talks-race-and-racism.html', + 'info_dict': { + 'id': '417cd61c-c793-4e8e-b006-e445ecc45add', + 'display_id': 'in-bet-exclusive-obama-talks-race-and-racism', + 'ext': 'flv', + 'title': 'BET News Presents: A Conversation With President Obama', + 'description': 'md5:5a88d8ae912c1b33e090290af7ec33c6', + 'duration': 1534, + 'timestamp': 1418075340, + 'upload_date': '20141208', + 'uploader': 'admin', + 'thumbnail': 're:(?i)^https?://.*\.jpg$', + }, + 'params': { + # rtmp download + 'skip_download': True, + }, + }, + { + 'url': 'http://www.bet.com/video/news/national/2014/justice-for-ferguson-a-community-reacts.html', + 'info_dict': { + 'id': '4160e53b-ad41-43b1-980f-8d85f63121f4', + 'display_id': 'justice-for-ferguson-a-community-reacts', + 'ext': 'flv', + 'title': 'Justice for Ferguson: A Community Reacts', + 'description': 'A BET News special.', + 'duration': 1696, + 'timestamp': 1416942360, + 'upload_date': '20141125', + 'uploader': 'admin', + 'thumbnail': 're:(?i)^https?://.*\.jpg$', + }, + 'params': { + # rtmp download + 'skip_download': True, + }, + } + ] + + def _real_extract(self, url): + display_id = self._match_id(url) + + webpage = self._download_webpage(url, display_id) + + media_url = compat_urllib_parse.unquote(self._search_regex( + [r'mediaURL\s*:\s*"([^"]+)"', r"var\s+mrssMediaUrl\s*=\s*'([^']+)'"], + webpage, 'media URL')) + + mrss = self._download_xml(media_url, display_id) + + item = mrss.find('./channel/item') + + NS_MAP = { + 'dc': 'http://purl.org/dc/elements/1.1/', + 'media': 'http://search.yahoo.com/mrss/', + 'ka': 'http://kickapps.com/karss', + } + + title = xpath_text(item, './title', 'title') + description = xpath_text( + item, './description', 'description', fatal=False) + + video_id = xpath_text(item, './guid', 'video id', fatal=False) + + timestamp = parse_iso8601(xpath_text( + item, xpath_with_ns('./dc:date', NS_MAP), + 'upload date', fatal=False)) + uploader = xpath_text( + item, xpath_with_ns('./dc:creator', NS_MAP), + 'uploader', fatal=False) + + media_content = item.find( + xpath_with_ns('./media:content', NS_MAP)) + duration = int_or_none(media_content.get('duration')) + smil_url = media_content.get('url') + + thumbnail =
media_content.find( + xpath_with_ns('./media:thumbnail', NS_MAP)).get('url') + + formats = self._extract_smil_formats(smil_url, display_id) + + return { + 'id': video_id, + 'display_id': display_id, + 'title': title, + 'description': description, + 'thumbnail': thumbnail, + 'timestamp': timestamp, + 'uploader': uploader, + 'duration': duration, + 'formats': formats, + } diff --git a/youtube_dl/extractor/bild.py b/youtube_dl/extractor/bild.py new file mode 100644 index 000000000..77b562d99 --- /dev/null +++ b/youtube_dl/extractor/bild.py @@ -0,0 +1,39 @@ +# coding: utf-8 +from __future__ import unicode_literals + +from .common import InfoExtractor +from ..utils import int_or_none + + +class BildIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?bild\.de/(?:[^/]+/)+(?P<display_id>[^/]+)-(?P<id>\d+)(?:,auto=true)?\.bild\.html' + IE_DESC = 'Bild.de' + _TEST = { + 'url': 'http://www.bild.de/video/clip/apple-ipad-air/das-koennen-die-neuen-ipads-38184146.bild.html', + 'md5': 'dd495cbd99f2413502a1713a1156ac8a', + 'info_dict': { + 'id': '38184146', + 'ext': 'mp4', + 'title': 'BILD hat sie getestet', + 'thumbnail': 'http://bilder.bild.de/fotos/stand-das-koennen-die-neuen-ipads-38184138/Bild/1.bild.jpg', + 'duration': 196, + 'description': 'Mit dem iPad Air 2 und dem iPad Mini 3 hat Apple zwei neue Tablet-Modelle präsentiert. BILD-Reporter Sven Stein durfte die Geräte bereits testen. ', + } + } + + def _real_extract(self, url): + video_id = self._match_id(url) + + xml_url = url.split(".bild.html")[0] + ",view=xml.bild.xml" + doc = self._download_xml(xml_url, video_id) + + duration = int_or_none(doc.attrib.get('duration'), scale=1000) + + return { + 'id': video_id, + 'title': doc.attrib['ueberschrift'], + 'description': doc.attrib.get('text'), + 'url': doc.attrib['src'], + 'thumbnail': doc.attrib.get('img'), + 'duration': duration, + } diff --git a/youtube_dl/extractor/bilibili.py b/youtube_dl/extractor/bilibili.py index 0d5889f5d..241b904a9 100644 --- a/youtube_dl/extractor/bilibili.py +++ b/youtube_dl/extractor/bilibili.py @@ -4,8 +4,8 @@ from __future__ import unicode_literals import re from .common import InfoExtractor +from ..compat import compat_parse_qs from ..utils import ( - compat_parse_qs, ExtractorError, int_or_none, unified_strdate, @@ -29,10 +29,9 @@ class BiliBiliIE(InfoExtractor): } def _real_extract(self, url): - mobj = re.match(self._VALID_URL, url) - video_id = mobj.group('id') - + video_id = self._match_id(url) webpage = self._download_webpage(url, video_id) + video_code = self._search_regex( + r'(?s)
<div itemprop="video".*?>(.*?)</div>
', webpage, 'video code') diff --git a/youtube_dl/extractor/bliptv.py b/youtube_dl/extractor/bliptv.py index 57d17bea3..14b814120 100644 --- a/youtube_dl/extractor/bliptv.py +++ b/youtube_dl/extractor/bliptv.py @@ -4,13 +4,17 @@ import re from .common import InfoExtractor from .subtitles import SubtitlesInfoExtractor -from ..utils import ( + +from ..compat import ( + compat_str, compat_urllib_request, - unescapeHTML, - parse_iso8601, compat_urlparse, +) +from ..utils import ( clean_html, - compat_str, + int_or_none, + parse_iso8601, + unescapeHTML, ) @@ -64,20 +68,55 @@ class BlipTVIE(SubtitlesInfoExtractor): 'uploader': 'redvsblue', 'uploader_id': '792887', } - } + }, + { + 'url': 'http://blip.tv/play/gbk766dkj4Yn', + 'md5': 'fe0a33f022d49399a241e84a8ea8b8e3', + 'info_dict': { + 'id': '1749452', + 'ext': 'mp4', + 'upload_date': '20090208', + 'description': 'Witness the first appearance of the Nostalgia Critic character, as Doug reviews the movie Transformers.', + 'title': 'Nostalgia Critic: Transformers', + 'timestamp': 1234068723, + 'uploader': 'NostalgiaCritic', + 'uploader_id': '246467', + } + }, + { + # https://github.com/rg3/youtube-dl/pull/4404 + 'note': 'Audio only', + 'url': 'http://blip.tv/hilarios-productions/weekly-manga-recap-kingdom-7119982', + 'md5': '76c0a56f24e769ceaab21fbb6416a351', + 'info_dict': { + 'id': '7103299', + 'ext': 'flv', + 'title': 'Weekly Manga Recap: Kingdom', + 'description': 'And then Shin breaks the enemy line, and he\'s all like HWAH! And then he slices a guy and it\'s all like FWASHING! And... it\'s really hard to describe the best parts of this series without breaking down into sound effects, okay?', + 'timestamp': 1417660321, + 'upload_date': '20141204', + 'uploader': 'The Rollo T', + 'uploader_id': '407429', + 'duration': 7251, + 'vcodec': 'none', + } + }, ] def _real_extract(self, url): mobj = re.match(self._VALID_URL, url) lookup_id = mobj.group('lookup_id') - # See https://github.com/rg3/youtube-dl/issues/857 + # See https://github.com/rg3/youtube-dl/issues/857 and + # https://github.com/rg3/youtube-dl/issues/4197 if lookup_id: - info_page = self._download_webpage( - 'http://blip.tv/play/%s.x?p=1' % lookup_id, lookup_id, 'Resolving lookup id') - video_id = self._search_regex(r'data-episode-id="([0-9]+)', info_page, 'video_id') - else: - video_id = mobj.group('id') + urlh = self._request_webpage( + 'http://blip.tv/play/%s' % lookup_id, lookup_id, 'Resolving lookup id') + url = compat_urlparse.urlparse(urlh.geturl()) + qs = compat_urlparse.parse_qs(url.query) + mobj = re.match(self._VALID_URL, qs['file'][0]) + + video_id = mobj.group('id') rss = self._download_xml('http://blip.tv/rss/flash/%s' % video_id, video_id, 'Downloading video RSS') @@ -113,7 +152,7 @@ class BlipTVIE(SubtitlesInfoExtractor): msg = self._download_webpage( url + '?showplayer=20140425131715&referrer=http://blip.tv&mask=7&skin=flashvars&view=url', video_id, 'Resolving URL for %s' % role) - real_url = compat_urlparse.parse_qs(msg)['message'][0] + real_url = compat_urlparse.parse_qs(msg.strip())['message'][0] media_type = media_content.get('type') if media_type == 'text/srt' or url.endswith('.srt'): @@ -128,11 +167,11 @@ class BlipTVIE(SubtitlesInfoExtractor): 'url': real_url, 'format_id': role, 'format_note': media_type, - 'vcodec': media_content.get(blip('vcodec')), + 'vcodec': media_content.get(blip('vcodec')) or 'none', 'acodec': media_content.get(blip('acodec')), 'filesize': media_content.get('filesize'), - 'width': int(media_content.get('width')), - 'height':
int(media_content.get('height')), + 'width': int_or_none(media_content.get('width')), + 'height': int_or_none(media_content.get('height')), }) self._sort_formats(formats) @@ -165,9 +204,17 @@ class BlipTVIE(SubtitlesInfoExtractor): class BlipTVUserIE(InfoExtractor): - _VALID_URL = r'(?:(?:(?:https?://)?(?:\w+\.)?blip\.tv/)|bliptvuser:)(?!api\.swf)([^/]+)/*$' + _VALID_URL = r'(?:(?:https?://(?:\w+\.)?blip\.tv/)|bliptvuser:)(?!api\.swf)([^/]+)/*$' _PAGE_SIZE = 12 IE_NAME = 'blip.tv:user' + _TEST = { + 'url': 'http://blip.tv/actone', + 'info_dict': { + 'id': 'actone', + 'title': 'Act One: The Series', + }, + 'playlist_count': 5, + } def _real_extract(self, url): mobj = re.match(self._VALID_URL, url) @@ -178,6 +225,7 @@ class BlipTVUserIE(InfoExtractor): page = self._download_webpage(url, username, 'Downloading user page') mobj = re.search(r'data-users-id="([^"]+)"', page) page_base = page_base % mobj.group(1) + title = self._og_search_title(page) # Download video ids using BlipTV Ajax calls. Result size per # query is limited (currently to 12 videos) so we need to query @@ -214,4 +262,5 @@ class BlipTVUserIE(InfoExtractor): urls = ['http://blip.tv/%s' % video_id for video_id in video_ids] url_entries = [self.url_result(vurl, 'BlipTV') for vurl in urls] - return [self.playlist_result(url_entries, playlist_title=username)] + return self.playlist_result( + url_entries, playlist_title=title, playlist_id=username) diff --git a/youtube_dl/extractor/bpb.py b/youtube_dl/extractor/bpb.py new file mode 100644 index 000000000..510813f76 --- /dev/null +++ b/youtube_dl/extractor/bpb.py @@ -0,0 +1,37 @@ +# coding: utf-8 +from __future__ import unicode_literals + +from .common import InfoExtractor + + +class BpbIE(InfoExtractor): + IE_DESC = 'Bundeszentrale für politische Bildung' + _VALID_URL = r'http://www\.bpb\.de/mediathek/(?P<id>[0-9]+)/' + + _TEST = { + 'url': 'http://www.bpb.de/mediathek/297/joachim-gauck-zu-1989-und-die-erinnerung-an-die-ddr', + 'md5': '0792086e8e2bfbac9cdf27835d5f2093', + 'info_dict': { + 'id': '297', + 'ext': 'mp4', + 'title': 'Joachim Gauck zu 1989 und die Erinnerung an die DDR', + 'description': 'Joachim Gauck, erster Beauftragter für die Stasi-Unterlagen, spricht auf dem Geschichtsforum über die friedliche Revolution 1989 und eine "gewisse Traurigkeit" im Umgang mit der DDR-Vergangenheit.' + } + } + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + + title = self._html_search_regex( + r'
<h2 class="white">(.*?)</h2>
', webpage, 'title') + video_url = self._html_search_regex( + r'(http://film\.bpb\.de/player/dokument_[0-9]+\.mp4)', + webpage, 'video URL') + + return { + 'id': video_id, + 'url': video_url, + 'title': title, + 'description': self._og_search_description(webpage), + } diff --git a/youtube_dl/extractor/br.py b/youtube_dl/extractor/br.py index 2e277c8c3..45ba51732 100644 --- a/youtube_dl/extractor/br.py +++ b/youtube_dl/extractor/br.py @@ -1,8 +1,6 @@ # coding: utf-8 from __future__ import unicode_literals -import re - from .common import InfoExtractor from ..utils import ( ExtractorError, diff --git a/youtube_dl/extractor/breakcom.py b/youtube_dl/extractor/breakcom.py index 2c0e5eea2..4bcc897c9 100644 --- a/youtube_dl/extractor/breakcom.py +++ b/youtube_dl/extractor/breakcom.py @@ -14,7 +14,6 @@ class BreakIE(InfoExtractor): _VALID_URL = r'http://(?:www\.)?break\.com/video/(?:[^/]+/)*.+-(?P<id>\d+)' _TESTS = [{ 'url': 'http://www.break.com/video/when-girls-act-like-guys-2468056', - 'md5': '33aa4ff477ecd124d18d7b5d23b87ce5', 'info_dict': { 'id': '2468056', 'ext': 'mp4', diff --git a/youtube_dl/extractor/brightcove.py b/youtube_dl/extractor/brightcove.py index 294670386..1eca00470 100644 --- a/youtube_dl/extractor/brightcove.py +++ b/youtube_dl/extractor/brightcove.py @@ -6,24 +6,26 @@ import json import xml.etree.ElementTree from .common import InfoExtractor -from ..utils import ( - compat_urllib_parse, - find_xpath_attr, - fix_xml_ampersands, - compat_urlparse, +from ..compat import ( + compat_parse_qs, compat_str, + compat_urllib_parse, + compat_urllib_parse_urlparse, compat_urllib_request, - compat_parse_qs, - + compat_urlparse, +) +from ..utils import ( determine_ext, ExtractorError, - unsmuggle_url, + find_xpath_attr, + fix_xml_ampersands, unescapeHTML, + unsmuggle_url, ) class BrightcoveIE(InfoExtractor): - _VALID_URL = r'https?://.*brightcove\.com/(services|viewer).*\?(?P<query>.*)' + _VALID_URL = r'https?://.*brightcove\.com/(services|viewer).*?\?(?P<query>.*)' _FEDERATED_URL_TEMPLATE = 'http://c.brightcove.com/services/viewer/htmlFederated?%s' _TESTS = [ @@ -87,6 +89,15 @@ class BrightcoveIE(InfoExtractor): 'description': 'UCI MTB World Cup 2014: Fort William, UK - Downhill Finals', }, }, + { + # playlist test + # from http://support.brightcove.com/en/video-cloud/docs/playlist-support-single-video-players + 'url': 'http://c.brightcove.com/services/viewer/htmlFederated?playerID=3550052898001&playerKey=AQ%7E%7E%2CAAABmA9XpXk%7E%2C-Kp7jNgisre1fG5OdqpAFUTcs0lP_ZoL', + 'info_dict': { + 'title': 'Sealife', + }, + 'playlist_mincount': 7, + }, ] @classmethod @@ -101,6 +112,8 @@ class BrightcoveIE(InfoExtractor): object_str = re.sub(r'(<param(?:\s+[a-zA-Z0-9_]+="[^"]*")*)>', lambda m: m.group(1) + '/>', object_str) # Fix up some stupid XML, see https://github.com/rg3/youtube-dl/issues/1608 object_str = object_str.replace('<--', '<!--') [...] output += '%d\n%s --> %s\n%s\n\n' % (i, start, end, text) return output - def _convert_subtitles_to_ass(self, subtitles): + def _convert_subtitles_to_ass(self, sub_root): output = '' def ass_bool(strvalue): assvalue = '-1' return assvalue - sub_root = xml.etree.ElementTree.fromstring(subtitles) - if not sub_root: - return output - output = '[Script Info]\n' output += 'Title: %s\n' % sub_root.attrib["title"] output += 'ScriptType: v4.00+\n' @@ -188,7 +185,7 @@ Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text return output - def _real_extract(self,url): + def _real_extract(self, url): mobj = re.match(self._VALID_URL, url) video_id = mobj.group('video_id') @@ -231,18
+228,20 @@ Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text formats = [] for fmt in re.findall(r'\?p([0-9]{3,4})=1', webpage): stream_quality, stream_format = self._FORMAT_IDS[fmt] - video_format = fmt+'p' + video_format = fmt + 'p' streamdata_req = compat_urllib_request.Request('http://www.crunchyroll.com/xml/') # urlencode doesn't work! - streamdata_req.data = 'req=RpcApiVideoEncode%5FGetStreamInfo&video%5Fencode%5Fquality='+stream_quality+'&media%5Fid='+stream_id+'&video%5Fformat='+stream_format + streamdata_req.data = 'req=RpcApiVideoEncode%5FGetStreamInfo&video%5Fencode%5Fquality=' + stream_quality + '&media%5Fid=' + stream_id + '&video%5Fformat=' + stream_format streamdata_req.add_header('Content-Type', 'application/x-www-form-urlencoded') streamdata_req.add_header('Content-Length', str(len(streamdata_req.data))) - streamdata = self._download_webpage(streamdata_req, video_id, note='Downloading media info for '+video_format) - video_url = self._search_regex(r'<host>([^<]+)', streamdata, 'video_url') - video_play_path = self._search_regex(r'<file>([^<]+)', streamdata, 'video_play_path') + streamdata = self._download_xml( + streamdata_req, video_id, + note='Downloading media info for %s' % video_format) + video_url = streamdata.find('.//host').text + video_play_path = streamdata.find('.//file').text formats.append({ 'url': video_url, - 'play_path': video_play_path, + 'play_path': video_play_path, 'ext': 'flv', 'format': video_format, 'format_id': video_format, @@ -251,8 +250,9 @@ Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text subtitles = {} sub_format = self._downloader.params.get('subtitlesformat', 'srt') for sub_id, sub_name in re.findall(r'\?ssid=([0-9]+)" title="([^"]+)', webpage): - sub_page = self._download_webpage('http://www.crunchyroll.com/xml/?req=RpcApiSubtitle_GetXml&subtitle_script_id='+sub_id,\ video_id, note='Downloading subtitles for '+sub_name) + sub_page = self._download_webpage( + 'http://www.crunchyroll.com/xml/?req=RpcApiSubtitle_GetXml&subtitle_script_id=' + sub_id, + video_id, note='Downloading subtitles for ' + sub_name) id = self._search_regex(r'id=\'([0-9]+)', sub_page, 'subtitle_id', fatal=False) iv = self._search_regex(r'<iv>([^<]+)', sub_page, 'subtitle_iv', fatal=False) data = self._search_regex(r'<data>([^<]+)', sub_page, 'subtitle_data', fatal=False) @@ -266,22 +266,60 @@ Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text lang_code = self._search_regex(r'lang_code=["\']([^"\']+)', subtitle, 'subtitle_lang_code', fatal=False) if not lang_code: continue + sub_root = xml.etree.ElementTree.fromstring(subtitle) if sub_format == 'ass': - subtitles[lang_code] = self._convert_subtitles_to_ass(subtitle) + subtitles[lang_code] = self._convert_subtitles_to_ass(sub_root) else: - subtitles[lang_code] = self._convert_subtitles_to_srt(subtitle) + subtitles[lang_code] = self._convert_subtitles_to_srt(sub_root) if self._downloader.params.get('listsubtitles', False): self._list_available_subtitles(video_id, subtitles) return return { - 'id': video_id, - 'title': video_title, + 'id': video_id, + 'title': video_title, 'description': video_description, - 'thumbnail': video_thumbnail, - 'uploader': video_uploader, + 'thumbnail': video_thumbnail, + 'uploader': video_uploader, 'upload_date': video_upload_date, - 'subtitles': subtitles, - 'formats': formats, + 'subtitles': subtitles, + 'formats': formats, + } + + +class CrunchyrollShowPlaylistIE(InfoExtractor): + IE_NAME = "crunchyroll:playlist" + _VALID_URL
= r'https?://(?:(?P<prefix>www|m)\.)?(?P<url>crunchyroll\.com/(?!(?:news|anime-news|library|forum|launchcalendar|lineup|store|comics|freetrial|login))(?P<id>[\w\-]+))/?$' + + _TESTS = [{ + 'url': 'http://www.crunchyroll.com/a-bridge-to-the-starry-skies-hoshizora-e-kakaru-hashi', + 'info_dict': { + 'id': 'a-bridge-to-the-starry-skies-hoshizora-e-kakaru-hashi', + 'title': 'A Bridge to the Starry Skies - Hoshizora e Kakaru Hashi' + }, + 'playlist_count': 13, + }] + + def _real_extract(self, url): + show_id = self._match_id(url) + + webpage = self._download_webpage(url, show_id) + title = self._html_search_regex( + r'(?s)<h1[^>]*>\s*<span itemprop="name">(.*?)</span>', + webpage, 'title') + episode_paths = re.findall( + r'(?s)
<li id="showview_videos_media_[0-9]+"[^>]+>.*?<a href="([^"]+)"', + webpage) + entries = [ + self.url_result('http://www.crunchyroll.com' + ep, 'Crunchyroll') + for ep in episode_paths] + entries.reverse() + + return { + '_type': 'playlist', + 'id': show_id, + 'title': title, + 'entries': entries, + } diff --git a/youtube_dl/extractor/d8.py b/youtube_dl/extractor/d8.py deleted file mode 100644 [...] -class D8IE(CanalplusIE): - _VALID_URL = r'https?://www\.d8\.tv/.*video/(?P<id>.*)' - _VIDEO_INFO_TEMPLATE = 'http://service.canal-plus.com/video/rest/getVideosLiees/d8/%s' - IE_NAME = 'd8.tv' - - _TEST = { - 'url': 'http://www.d8.tv/d8-docs-mags/pid6589-d8-campagne-intime.html', - 'file': '966289.flv', - 'info_dict': { - 'title': 'Campagne intime - Documentaire exceptionnel', - 'description': 'md5:d2643b799fb190846ae09c61e59a859f', - 'upload_date': '20131108', - }, - 'params': { - # rtmp - 'skip_download': True, - }, - 'skip': 'videos get deleted after a while', - } diff --git a/youtube_dl/extractor/dailymotion.py b/youtube_dl/extractor/dailymotion.py index dbcf5d6a7..cf5841a7c 100644 --- a/youtube_dl/extractor/dailymotion.py +++ b/youtube_dl/extractor/dailymotion.py @@ -1,4 +1,4 @@ -#coding: utf-8 +# coding: utf-8 from __future__ import unicode_literals import re @@ -8,16 +8,19 @@ import itertools from .common import InfoExtractor from .subtitles import SubtitlesInfoExtractor -from ..utils import ( - compat_urllib_request, +from ..compat import ( compat_str, + compat_urllib_request, +) +from ..utils import ( + ExtractorError, + int_or_none, orderedSet, str_to_int, - int_or_none, - ExtractorError, unescapeHTML, ) + class DailymotionBaseInfoExtractor(InfoExtractor): @staticmethod def _build_request(url): @@ -27,6 +30,7 @@ class DailymotionBaseInfoExtractor(InfoExtractor): request.add_header('Cookie', 'ff=off') return request + class DailymotionIE(DailymotionBaseInfoExtractor, SubtitlesInfoExtractor): """Information Extractor for Dailymotion""" @@ -94,7 +98,7 @@ class DailymotionIE(DailymotionBaseInfoExtractor, SubtitlesInfoExtractor): # It may just embed a vevo video: m_vevo = re.search( - r'<link rel="video_src" href="[^"]*?vevo.com[^"]*?video=(?P<id>[\w]*)', + r'<link rel="video_src" href="[^"]*?vevo\.com[^"]*?video=(?P<id>[\w]*)', webpage) if m_vevo is not None: vevo_id = m_vevo.group('id') @@ -112,7 +116,7 @@ class DailymotionIE(DailymotionBaseInfoExtractor, SubtitlesInfoExtractor): embed_page = self._download_webpage(embed_url, video_id, 'Downloading embed page') info = self._search_regex(r'var info = ({.*?}),$', embed_page, - 'video info', flags=re.MULTILINE) + 'video info', flags=re.MULTILINE) info = json.loads(info) if info.get('error') is not None: msg = 'Couldn\'t get video, Dailymotion says: %s' % info['error']['title'] @@ -206,7 +210,7 @@ class DailymotionPlaylistIE(DailymotionBaseInfoExtractor): if re.search(self._MORE_PAGES_INDICATOR, webpage) is None: break return [self.url_result('http://www.dailymotion.com/video/%s' % video_id, 'Dailymotion') - for video_id in orderedSet(video_ids)] + for video_id in orderedSet(video_ids)] def _real_extract(self, url): mobj = re.match(self._VALID_URL, url) diff --git a/youtube_dl/extractor/daum.py b/youtube_dl/extractor/daum.py index 45d66e2e6..c6b813f58 100644 --- a/youtube_dl/extractor/daum.py +++ b/youtube_dl/extractor/daum.py @@ -5,7 +5,7 @@ from __future__ import unicode_literals import re from .common import InfoExtractor -from ..utils import ( +from ..compat import ( compat_urllib_parse, ) diff --git a/youtube_dl/extractor/defense.py b/youtube_dl/extractor/defense.py index c5529f8d4..5e50c63d9 100644 --- a/youtube_dl/extractor/defense.py +++ b/youtube_dl/extractor/defense.py @@ -9,7 +9,7 @@ from .common import InfoExtractor class DefenseGouvFrIE(InfoExtractor): IE_NAME = 'defense.gouv.fr' _VALID_URL = (r'http://.*?\.defense\.gouv\.fr/layout/set/' - r'ligthboxvideo/base-de-medias/webtv/(.*)') + r'ligthboxvideo/base-de-medias/webtv/(.*)') _TEST = { 'url': 'http://www.defense.gouv.fr/layout/set/ligthboxvideo/base-de-medias/webtv/attaque-chimique-syrienne-du-21-aout-2013-1', @@ -26,13 +26,13 @@ class
DefenseGouvFrIE(InfoExtractor): video_id = self._search_regex( r"flashvars.pvg_id=\"(\d+)\";", webpage, 'ID') - + json_url = ('http://static.videos.gouv.fr/brightcovehub/export/json/' - + video_id) + + video_id) info = self._download_webpage(json_url, title, - 'Downloading JSON config') + 'Downloading JSON config') video_url = json.loads(info)['renditions'][0]['url'] - + return {'id': video_id, 'ext': 'mp4', 'url': video_url, diff --git a/youtube_dl/extractor/discovery.py b/youtube_dl/extractor/discovery.py index 554df6735..52c2d7ddf 100644 --- a/youtube_dl/extractor/discovery.py +++ b/youtube_dl/extractor/discovery.py @@ -16,9 +16,9 @@ class DiscoveryIE(InfoExtractor): 'ext': 'mp4', 'title': 'MythBusters: Mission Impossible Outtakes', 'description': ('Watch Jamie Hyneman and Adam Savage practice being' - ' each other -- to the point of confusing Jamie\'s dog -- and ' - 'don\'t miss Adam moon-walking as Jamie ... behind Jamie\'s' - ' back.'), + ' each other -- to the point of confusing Jamie\'s dog -- and ' + 'don\'t miss Adam moon-walking as Jamie ... behind Jamie\'s' + ' back.'), 'duration': 156, }, } @@ -29,7 +29,7 @@ class DiscoveryIE(InfoExtractor): webpage = self._download_webpage(url, video_id) video_list_json = self._search_regex(r'var videoListJSON = ({.*?});', - webpage, 'video list', flags=re.DOTALL) + webpage, 'video list', flags=re.DOTALL) video_list = json.loads(video_list_json) info = video_list['clips'][0] formats = [] diff --git a/youtube_dl/extractor/dotsub.py b/youtube_dl/extractor/dotsub.py index 5ae0ad5b6..638bb33cd 100644 --- a/youtube_dl/extractor/dotsub.py +++ b/youtube_dl/extractor/dotsub.py @@ -27,7 +27,7 @@ class DotsubIE(InfoExtractor): video_id = mobj.group('id') info_url = "https://dotsub.com/api/media/%s/metadata" % video_id info = self._download_json(info_url, video_id) - date = time.gmtime(info['dateCreated']/1000) # The timestamp is in miliseconds + date = time.gmtime(info['dateCreated'] / 1000) # The timestamp is in miliseconds return { 'id': video_id, diff --git a/youtube_dl/extractor/dropbox.py b/youtube_dl/extractor/dropbox.py index 5f24ac721..14b6c00b0 100644 --- a/youtube_dl/extractor/dropbox.py +++ b/youtube_dl/extractor/dropbox.py @@ -5,23 +5,24 @@ import os.path import re from .common import InfoExtractor -from ..utils import compat_urllib_parse_unquote, url_basename +from ..compat import compat_urllib_parse_unquote +from ..utils import url_basename class DropboxIE(InfoExtractor): _VALID_URL = r'https?://(?:www\.)?dropbox[.]com/sh?/(?P<id>[a-zA-Z0-9]{15})/.*' - _TESTS = [{ - 'url': 'https://www.dropbox.com/s/nelirfsxnmcfbfh/youtube-dl%20test%20video%20%27%C3%A4%22BaW_jenozKc.mp4?dl=0', - 'info_dict': { - 'id': 'nelirfsxnmcfbfh', - 'ext': 'mp4', - 'title': 'youtube-dl test video \'ä"BaW_jenozKc' - } - }, - { - 'url': 'https://www.dropbox.com/sh/662glsejgzoj9sr/AAByil3FGH9KFNZ13e08eSa1a/Pregame%20Ceremony%20Program%20PA%2020140518.m4v', - 'only_matching': True, - }, + _TESTS = [ + { + 'url': 'https://www.dropbox.com/s/nelirfsxnmcfbfh/youtube-dl%20test%20video%20%27%C3%A4%22BaW_jenozKc.mp4?dl=0', + 'info_dict': { + 'id': 'nelirfsxnmcfbfh', + 'ext': 'mp4', + 'title': 'youtube-dl test video \'ä"BaW_jenozKc' + } + }, { + 'url': 'https://www.dropbox.com/sh/662glsejgzoj9sr/AAByil3FGH9KFNZ13e08eSa1a/Pregame%20Ceremony%20Program%20PA%2020140518.m4v', + 'only_matching': True, + }, ] def _real_extract(self, url): diff --git a/youtube_dl/extractor/drtv.py b/youtube_dl/extractor/drtv.py index 9d6ce1f48..93b3c9f36 100644 --- a/youtube_dl/extractor/drtv.py +++
b/youtube_dl/extractor/drtv.py @@ -1,7 +1,5 @@ from __future__ import unicode_literals -import re - from .subtitles import SubtitlesInfoExtractor from .common import ExtractorError from ..utils import parse_iso8601 @@ -25,8 +23,7 @@ class DRTVIE(SubtitlesInfoExtractor): } def _real_extract(self, url): - mobj = re.match(self._VALID_URL, url) - video_id = mobj.group('id') + video_id = self._match_id(url) programcard = self._download_json( 'http://www.dr.dk/mu/programcard/expanded/%s' % video_id, video_id, 'Downloading video JSON') @@ -35,7 +32,7 @@ class DRTVIE(SubtitlesInfoExtractor): title = data['Title'] description = data['Description'] - timestamp = parse_iso8601(data['CreatedTime'][:-5]) + timestamp = parse_iso8601(data['CreatedTime']) thumbnail = None duration = None diff --git a/youtube_dl/extractor/ebaumsworld.py b/youtube_dl/extractor/ebaumsworld.py index 63c2549d3..b6bfd2b2d 100644 --- a/youtube_dl/extractor/ebaumsworld.py +++ b/youtube_dl/extractor/ebaumsworld.py @@ -1,7 +1,5 @@ from __future__ import unicode_literals -import re - from .common import InfoExtractor @@ -20,8 +18,7 @@ class EbaumsWorldIE(InfoExtractor): } def _real_extract(self, url): - mobj = re.match(self._VALID_URL, url) - video_id = mobj.group('id') + video_id = self._match_id(url) config = self._download_xml( 'http://www.ebaumsworld.com/video/player/%s' % video_id, video_id) video_url = config.find('file').text diff --git a/youtube_dl/extractor/ehow.py b/youtube_dl/extractor/ehow.py index f8f49a013..9cb1bf301 100644 --- a/youtube_dl/extractor/ehow.py +++ b/youtube_dl/extractor/ehow.py @@ -1,8 +1,6 @@ from __future__ import unicode_literals -import re - -from ..utils import ( +from ..compat import ( compat_urllib_parse, ) from .common import InfoExtractor @@ -24,11 +22,10 @@ class EHowIE(InfoExtractor): } def _real_extract(self, url): - mobj = re.match(self._VALID_URL, url) - video_id = mobj.group('id') + video_id = self._match_id(url) webpage = self._download_webpage(url, video_id) - video_url = self._search_regex(r'(?:file|source)=(http[^\'"&]*)', - webpage, 'video URL') + video_url = self._search_regex( + r'(?:file|source)=(http[^\'"&]*)', webpage, 'video URL') final_url = compat_urllib_parse.unquote(video_url) uploader = self._html_search_meta('uploader', webpage) title = self._og_search_title(webpage).replace(' | eHow', '') diff --git a/youtube_dl/extractor/eighttracks.py b/youtube_dl/extractor/eighttracks.py index c1b4c729e..a30a1f330 100644 --- a/youtube_dl/extractor/eighttracks.py +++ b/youtube_dl/extractor/eighttracks.py @@ -6,7 +6,7 @@ import random import re from .common import InfoExtractor -from ..utils import ( +from ..compat import ( compat_str, ) @@ -125,7 +125,7 @@ class EightTracksIE(InfoExtractor): info = { 'id': compat_str(track_data['id']), 'url': track_data['track_file_stream_url'], - 'title': track_data['performer'] + u' - ' + track_data['name'], + 'title': track_data['performer'] + ' - ' + track_data['name'], 'raw_title': track_data['name'], 'uploader_id': data['user']['login'], 'ext': 'm4a', diff --git a/youtube_dl/extractor/engadget.py b/youtube_dl/extractor/engadget.py index 92ada81d2..4ea37ebd9 100644 --- a/youtube_dl/extractor/engadget.py +++ b/youtube_dl/extractor/engadget.py @@ -3,7 +3,6 @@ from __future__ import unicode_literals import re from .common import InfoExtractor -from .fivemin import FiveMinIE from ..utils import ( url_basename, ) @@ -27,11 +26,10 @@ class EngadgetIE(InfoExtractor): } def _real_extract(self, url): - mobj = re.match(self._VALID_URL, url) - video_id = 
mobj.group('id') + video_id = self._match_id(url) if video_id is not None: - return FiveMinIE._build_result(video_id) + return self.url_result('5min:%s' % video_id) else: title = url_basename(url) webpage = self._download_webpage(url, title) @@ -39,5 +37,5 @@ return { '_type': 'playlist', 'title': title, - 'entries': [FiveMinIE._build_result(id) for id in ids] + 'entries': [self.url_result('5min:%s' % vid) for vid in ids] } diff --git a/youtube_dl/extractor/eporner.py b/youtube_dl/extractor/eporner.py index bb231ecb1..4de8d4bc5 100644 --- a/youtube_dl/extractor/eporner.py +++ b/youtube_dl/extractor/eporner.py @@ -20,7 +20,7 @@ class EpornerIE(InfoExtractor): 'display_id': 'Infamous-Tiffany-Teen-Strip-Tease-Video', 'ext': 'mp4', 'title': 'Infamous Tiffany Teen Strip Tease Video', - 'duration': 194, + 'duration': 1838, 'view_count': int, 'age_limit': 18, } @@ -57,9 +57,7 @@ formats.append(fmt) self._sort_formats(formats) - duration = parse_duration(self._search_regex( - r'class="mbtim">([0-9:]+)', webpage, 'duration', - fatal=False)) + duration = parse_duration(self._html_search_meta('duration', webpage)) view_count = str_to_int(self._search_regex( r'id="cinemaviews">\s*([0-9,]+)\s*views', webpage, 'view count', fatal=False)) diff --git a/youtube_dl/extractor/escapist.py b/youtube_dl/extractor/escapist.py index 476fc22b9..e240cb859 100644 --- a/youtube_dl/extractor/escapist.py +++ b/youtube_dl/extractor/escapist.py @@ -3,9 +3,10 @@ from __future__ import unicode_literals import re from .common import InfoExtractor -from ..utils import ( +from ..compat import ( compat_urllib_parse, - +) +from ..utils import ( ExtractorError, ) diff --git a/youtube_dl/extractor/everyonesmixtape.py b/youtube_dl/extractor/everyonesmixtape.py index d237a8281..d872d828f 100644 --- a/youtube_dl/extractor/everyonesmixtape.py +++ b/youtube_dl/extractor/everyonesmixtape.py @@ -3,8 +3,10 @@ from __future__ import unicode_literals import re from .common import InfoExtractor -from ..utils import ( +from ..compat import ( compat_urllib_request, +) +from ..utils import ( ExtractorError, ) diff --git a/youtube_dl/extractor/extremetube.py b/youtube_dl/extractor/extremetube.py index aacbf1414..36ba33128 100644 --- a/youtube_dl/extractor/extremetube.py +++ b/youtube_dl/extractor/extremetube.py @@ -3,16 +3,18 @@ from __future__ import unicode_literals import re from .common import InfoExtractor -from ..utils import ( +from ..compat import ( compat_urllib_parse_urlparse, compat_urllib_request, compat_urllib_parse, +) +from ..utils import ( str_to_int, ) class ExtremeTubeIE(InfoExtractor): - _VALID_URL = r'^(?:https?://)?(?:www\.)?(?P<url>extremetube\.com/.*?video/.+?(?P<videoid>[0-9]+))(?:[/?&]|$)' + _VALID_URL = r'https?://(?:www\.)?(?P<url>extremetube\.com/.*?video/.+?(?P<id>[0-9]+))(?:[/?&]|$)' _TESTS = [{ 'url': 'http://www.extremetube.com/video/music-video-14-british-euro-brit-european-cumshots-swallow-652431', 'md5': '1fb9228f5e3332ec8c057d6ac36f33e0', @@ -31,7 +33,7 @@ class ExtremeTubeIE(InfoExtractor): def _real_extract(self, url): mobj = re.match(self._VALID_URL, url) - video_id = mobj.group('videoid') + video_id = mobj.group('id') url = 'http://www.'
+ mobj.group('url') req = compat_urllib_request.Request(url) diff --git a/youtube_dl/extractor/facebook.py b/youtube_dl/extractor/facebook.py index 3ad993751..1ad4e77a8 100644 --- a/youtube_dl/extractor/facebook.py +++ b/youtube_dl/extractor/facebook.py @@ -5,15 +5,18 @@ import re import socket from .common import InfoExtractor -from ..utils import ( +from ..compat import ( compat_http_client, compat_str, compat_urllib_error, compat_urllib_parse, compat_urllib_request, - urlencode_postdata, +) +from ..utils import ( ExtractorError, + int_or_none, limit_length, + urlencode_postdata, ) @@ -34,7 +37,6 @@ class FacebookIE(InfoExtractor): 'info_dict': { 'id': '637842556329505', 'ext': 'mp4', - 'duration': 38, 'title': 're:Did you know Kei Nishikori is the first Asian man to ever reach a Grand Slam', } }, { @@ -58,8 +60,8 @@ login_page_req = compat_urllib_request.Request(self._LOGIN_URL) login_page_req.add_header('Cookie', 'locale=en_US') login_page = self._download_webpage(login_page_req, None, - note='Downloading login page', - errnote='Unable to download login page') + note='Downloading login page', + errnote='Unable to download login page') lsd = self._search_regex( r'<input type="hidden" name="lsd" value="([^"]*)"', login_page, 'lsd') if re.search(r'<form(.*)name="login"(.*)</form>', login_results) is not None: self._downloader.report_warning('unable to log in: bad username/password, or exceded login rate limit (~3/min). Check credentials or wait.') return @@ -94,7 +96,7 @@ class FacebookIE(InfoExtractor): check_req = compat_urllib_request.Request(self._CHECKPOINT_URL, urlencode_postdata(check_form)) check_req.add_header('Content-Type', 'application/x-www-form-urlencoded') check_response = self._download_webpage(check_req, None, - note='Confirming login') + note='Confirming login') if re.search(r'id="checkpointSubmitButton"', check_response) is not None: self._downloader.report_warning('Unable to confirm login, you have to login in your brower and authorize the login.') except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err: @@ -105,9 +107,7 @@ self._login() def _real_extract(self, url): - mobj = re.match(self._VALID_URL, url) - video_id = mobj.group('id') - + video_id = self._match_id(url) url = 'https://www.facebook.com/video/video.php?v=%s' % video_id webpage = self._download_webpage(url, video_id) @@ -147,6 +147,6 @@ 'id': video_id, 'title': video_title, 'url': video_url, - 'duration': int(video_data['video_duration']), - 'thumbnail': video_data['thumbnail_src'], + 'duration': int_or_none(video_data.get('video_duration')), + 'thumbnail': video_data.get('thumbnail_src'), } diff --git a/youtube_dl/extractor/faz.py b/youtube_dl/extractor/faz.py index c6ab6952e..3c39ca451 100644 --- a/youtube_dl/extractor/faz.py +++ b/youtube_dl/extractor/faz.py @@ -1,49 +1,48 @@ # encoding: utf-8 -import re +from __future__ import unicode_literals from .common import InfoExtractor -from ..utils import ( - determine_ext, -) class FazIE(InfoExtractor): - IE_NAME = u'faz.net' + IE_NAME = 'faz.net' _VALID_URL = r'https?://www\.faz\.net/multimedia/videos/.*?-(?P<id>\d+)\.html' _TEST = { - u'url': u'http://www.faz.net/multimedia/videos/stockholm-chemie-nobelpreis-fuer-drei-amerikanische-forscher-12610585.html', - u'file': u'12610585.mp4', - u'info_dict': { - u'title': u'Stockholm: Chemie-Nobelpreis für drei amerikanische Forscher', - u'description': u'md5:1453fbf9a0d041d985a47306192ea253', + 'url':
'http://www.faz.net/multimedia/videos/stockholm-chemie-nobelpreis-fuer-drei-amerikanische-forscher-12610585.html', + 'info_dict': { + 'id': '12610585', + 'ext': 'mp4', + 'title': 'Stockholm: Chemie-Nobelpreis für drei amerikanische Forscher', + 'description': 'md5:1453fbf9a0d041d985a47306192ea253', }, } def _real_extract(self, url): - mobj = re.match(self._VALID_URL, url) - video_id = mobj.group('id') - self.to_screen(video_id) + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) - config_xml_url = self._search_regex(r'writeFLV\(\'(.+?)\',', webpage, - u'config xml url') - config = self._download_xml(config_xml_url, video_id, - u'Downloading config xml') + config_xml_url = self._search_regex( + r'writeFLV\(\'(.+?)\',', webpage, 'config xml url') + config = self._download_xml( + config_xml_url, video_id, 'Downloading config xml') encodings = config.find('ENCODINGS') formats = [] - for code in ['LOW', 'HIGH', 'HQ']: + for pref, code in enumerate(['LOW', 'HIGH', 'HQ']): encoding = encodings.find(code) if encoding is None: continue encoding_url = encoding.find('FILENAME').text formats.append({ 'url': encoding_url, - 'ext': determine_ext(encoding_url), 'format_id': code.lower(), + 'quality': pref, }) + self._sort_formats(formats) - descr = self._html_search_regex(r'<p class="Content Copy">(.*?)</p>', webpage, u'description') + descr = self._html_search_regex( + r'<p class="Content Copy">(.*?)</p>', webpage, 'description', fatal=False) return { 'id': video_id, 'title': self._og_search_title(webpage), diff --git a/youtube_dl/extractor/fc2.py b/youtube_dl/extractor/fc2.py index c663a0f81..81ceace53 100644 --- a/youtube_dl/extractor/fc2.py +++ b/youtube_dl/extractor/fc2.py @@ -1,19 +1,20 @@ #! -*- coding: utf-8 -*- from __future__ import unicode_literals -import re import hashlib from .common import InfoExtractor -from ..utils import ( - ExtractorError, compat_urllib_request, compat_urlparse, ) +from ..compat import ( + compat_urllib_request, + compat_urlparse, +) +from ..utils import ( + ExtractorError, +) class FC2IE(InfoExtractor): - _VALID_URL = r'^http://video\.fc2\.com/((?P<lang>[^/]+)/)?content/(?P<id>[^/]+)' + _VALID_URL = r'^http://video\.fc2\.com/(?:[^/]+/)?content/(?P<id>[^/]+)' IE_NAME = 'fc2' _TEST = { 'url': 'http://video.fc2.com/en/content/20121103kUan1KHs', @@ -26,9 +27,7 @@ } def _real_extract(self, url): - mobj = re.match(self._VALID_URL, url) - video_id = mobj.group('id') - + video_id = self._match_id(url) webpage = self._download_webpage(url, video_id) self._downloader.cookiejar.clear_session_cookies() # must clear @@ -40,7 +39,7 @@ info_url = ( "http://video.fc2.com/ginfo.php?mimi={1:s}&href={2:s}&v={0:s}&fversion=WIN%2011%2C6%2C602%2C180&from=2&otag=0&upid={0:s}&tk=null&". - format(video_id, mimi, compat_urllib_request.quote(refer, safe='').replace('.','%2E'))) + format(video_id, mimi, compat_urllib_request.quote(refer, safe='').replace('.', '%2E'))) info_webpage = self._download_webpage( info_url, video_id, note='Downloading info page') diff --git a/youtube_dl/extractor/firedrive.py b/youtube_dl/extractor/firedrive.py index af439ccfe..3191116d9 100644 --- a/youtube_dl/extractor/firedrive.py +++ b/youtube_dl/extractor/firedrive.py @@ -4,11 +4,13 @@ from __future__ import unicode_literals import re from .common import InfoExtractor -from ..utils import ( - ExtractorError, compat_urllib_parse, compat_urllib_request, ) +from ..compat import ( + compat_urllib_parse, + compat_urllib_request, +) +from ..utils import ( + ExtractorError, +) class FiredriveIE(InfoExtractor): @@ -28,11 +30,8 @@ }] def _real_extract(self, url): - mobj = re.match(self._VALID_URL, url) - video_id = mobj.group('id') - + video_id = self._match_id(url) url = 'http://firedrive.com/file/%s' % video_id - webpage = self._download_webpage(url, video_id) if re.search(self._FILE_DELETED_REGEX, webpage) is not None: diff --git a/youtube_dl/extractor/firsttv.py b/youtube_dl/extractor/firsttv.py index c2e987ff7..08ceee4ed 100644 --- a/youtube_dl/extractor/firsttv.py +++ b/youtube_dl/extractor/firsttv.py @@ -44,9 +44,9 @@ class FirstTVIE(InfoExtractor): duration = self._og_search_property('video:duration', webpage, 'video duration', fatal=False) like_count = self._html_search_regex(r'title="Понравилось".*?/> \[(\d+)\]', - webpage, 'like count', fatal=False) + webpage, 'like count', fatal=False) dislike_count = self._html_search_regex(r'title="Не понравилось".*?/> \[(\d+)\]', - webpage, 'dislike count', fatal=False) + webpage, 'dislike count', fatal=False) return { 'id': video_id, @@ -57,4 +57,4 @@ 'duration': int_or_none(duration), 'like_count': int_or_none(like_count), 'dislike_count': int_or_none(dislike_count), - } \ No newline at end of file + } diff --git a/youtube_dl/extractor/fivemin.py b/youtube_dl/extractor/fivemin.py index 3a50bab5c..5b24b921c 100644 --- a/youtube_dl/extractor/fivemin.py +++ b/youtube_dl/extractor/fivemin.py @@ -1,11 +1,11 @@ from __future__ import unicode_literals
-import re - from .common import InfoExtractor -from ..utils import ( +from ..compat import ( compat_str, compat_urllib_parse, +) +from ..utils import ( ExtractorError, ) @@ -13,7 +13,7 @@ from ..utils import ( class FiveMinIE(InfoExtractor): IE_NAME = '5min' _VALID_URL = r'''(?x) - (?:https?://[^/]*?5min\.com/Scripts/PlayerSeed\.js\?(.*?&)?playList=| + (?:https?://[^/]*?5min\.com/Scripts/PlayerSeed\.js\?(?:.*?&)?playList=| 5min:) (?P<id>\d+) ''' @@ -41,16 +41,11 @@ }, ] - @classmethod - def _build_result(cls, video_id): - return cls.url_result('5min:%s' % video_id, cls.ie_key()) - def _real_extract(self, url): - mobj = re.match(self._VALID_URL, url) - video_id = mobj.group('id') + video_id = self._match_id(url) embed_url = 'https://embed.5min.com/playerseed/?playList=%s' % video_id embed_page = self._download_webpage(embed_url, video_id, - 'Downloading embed page') + 'Downloading embed page') sid = self._search_regex(r'sid=(\d+)', embed_page, 'sid') query = compat_urllib_parse.urlencode({ 'func': 'GetResults', diff --git a/youtube_dl/extractor/fktv.py b/youtube_dl/extractor/fktv.py index d7048c8c1..d09d1c13a 100644 --- a/youtube_dl/extractor/fktv.py +++ b/youtube_dl/extractor/fktv.py @@ -1,25 +1,27 @@ +from __future__ import unicode_literals + import re import random import json from .common import InfoExtractor from ..utils import ( - determine_ext, get_element_by_id, clean_html, ) class FKTVIE(InfoExtractor): - IE_NAME = u'fernsehkritik.tv' - _VALID_URL = r'(?:http://)?(?:www\.)?fernsehkritik\.tv/folge-(?P<ep>[0-9]+)(?:/.*)?' + IE_NAME = 'fernsehkritik.tv' + _VALID_URL = r'http://(?:www\.)?fernsehkritik\.tv/folge-(?P<ep>[0-9]+)(?:/.*)?' _TEST = { - u'url': u'http://fernsehkritik.tv/folge-1', - u'file': u'00011.flv', - u'info_dict': { - u'title': u'Folge 1 vom 10. April 2007', - u'description': u'md5:fb4818139c7cfe6907d4b83412a6864f', + 'url': 'http://fernsehkritik.tv/folge-1', + 'info_dict': { + 'id': '00011', + 'ext': 'flv', + 'title': 'Folge 1 vom 10. April 2007', + 'description': 'md5:fb4818139c7cfe6907d4b83412a6864f', }, } @@ -30,9 +32,9 @@ class FKTVIE(InfoExtractor): server = random.randint(2, 4) video_thumbnail = 'http://fernsehkritik.tv/images/magazin/folge%d.jpg' % episode start_webpage = self._download_webpage('http://fernsehkritik.tv/folge-%d/Start' % episode, - episode) + episode) playlist = self._search_regex(r'playlist = (\[.*?\]);', start_webpage, - u'playlist', flags=re.DOTALL) + 'playlist', flags=re.DOTALL) files = json.loads(re.sub('{[^{}]*?}', '{}', playlist)) # TODO: return a single multipart video videos = [] @@ -42,7 +44,6 @@ videos.append({ 'id': video_id, 'url': video_url, - 'ext': determine_ext(video_url), 'title': clean_html(get_element_by_id('eptitle', start_webpage)), 'description': clean_html(get_element_by_id('contentlist', start_webpage)), 'thumbnail': video_thumbnail @@ -51,14 +52,15 @@ class FKTVPosteckeIE(InfoExtractor): - IE_NAME = u'fernsehkritik.tv:postecke' - _VALID_URL = r'(?:http://)?(?:www\.)?fernsehkritik\.tv/inline-video/postecke\.php\?(.*&)?ep=(?P<ep>[0-9]+)(&|$)' + IE_NAME = 'fernsehkritik.tv:postecke' + _VALID_URL = r'http://(?:www\.)?fernsehkritik\.tv/inline-video/postecke\.php\?(.*&)?ep=(?P<ep>[0-9]+)(&|$)' _TEST = { - u'url': u'http://fernsehkritik.tv/inline-video/postecke.php?iframe=true&width=625&height=440&ep=120', - u'file': u'0120.flv', - u'md5': u'262f0adbac80317412f7e57b4808e5c4', - u'info_dict': { - u"title": u"Postecke 120" + 'url': 'http://fernsehkritik.tv/inline-video/postecke.php?iframe=true&width=625&height=440&ep=120', + 'md5': '262f0adbac80317412f7e57b4808e5c4', + 'info_dict': { + 'id': '0120', + 'ext': 'flv', + 'title': 'Postecke 120', } } @@ -71,8 +73,7 @@ class FKTVPosteckeIE(InfoExtractor): video_url = 'http://dl%d.fernsehkritik.tv/postecke/postecke%d.flv' % (server, episode) video_title = 'Postecke %d' % episode return { - 'id': video_id, - 'url': video_url, - 'ext': determine_ext(video_url), - 'title': video_title, + 'id': video_id, + 'url': video_url, + 'title': video_title, } diff --git a/youtube_dl/extractor/flickr.py b/youtube_dl/extractor/flickr.py index e09982e88..0c858b654 100644 --- a/youtube_dl/extractor/flickr.py +++ b/youtube_dl/extractor/flickr.py @@ -17,8 +17,8 @@ class FlickrIE(InfoExtractor): 'info_dict': { 'id': '5645318632', 'ext': 'mp4', - "description": "Waterfalls in the Springtime at Dark Hollow Waterfalls. These are located just off of Skyline Drive in Virginia. They are only about 6/10 of a mile hike but it is a pretty steep hill and a good climb back up.", - "uploader_id": "forestwander-nature-pictures", + "description": "Waterfalls in the Springtime at Dark Hollow Waterfalls. These are located just off of Skyline Drive in Virginia.
They are only about 6/10 of a mile hike but it is a pretty steep hill and a good climb back up.", + "uploader_id": "forestwander-nature-pictures", "title": "Dark Hollow Waterfalls" } } @@ -37,7 +37,7 @@ class FlickrIE(InfoExtractor): first_xml = self._download_webpage(first_url, video_id, 'Downloading first data webpage') node_id = self._html_search_regex(r'<Item id="id">(\d+-\d+)</Item>', - first_xml, 'node_id') + first_xml, 'node_id') second_url = 'https://secure.flickr.com/video_playlist.gne?node_id=' + node_id + '&tech=flash&mode=playlist&bitrate=700&secret=' + secret + '&rd=video.yahoo.com&noad=1' second_xml = self._download_webpage(second_url, video_id, 'Downloading second data webpage') diff --git a/youtube_dl/extractor/folketinget.py b/youtube_dl/extractor/folketinget.py new file mode 100644 index 000000000..68e2db943 --- /dev/null +++ b/youtube_dl/extractor/folketinget.py @@ -0,0 +1,75 @@ +# -*- coding: utf-8 -*- +from __future__ import unicode_literals + +from .common import InfoExtractor +from ..compat import compat_parse_qs +from ..utils import ( + int_or_none, + parse_duration, + parse_iso8601, + xpath_text, +) + + +class FolketingetIE(InfoExtractor): + IE_DESC = 'Folketinget (ft.dk; Danish parliament)' + _VALID_URL = r'https?://(?:www\.)?ft\.dk/webtv/video/[^?#]*?\.(?P<id>[0-9]+)\.aspx' + _TEST = { + 'url': 'http://www.ft.dk/webtv/video/20141/eru/td.1165642.aspx?as=1#player', + 'info_dict': { + 'id': '1165642', + 'ext': 'mp4', + 'title': 'Åbent samråd i Erhvervsudvalget', + 'description': 'Åbent samråd med erhvervs- og vækstministeren om regeringens politik på teleområdet', + 'view_count': int, + 'width': 768, + 'height': 432, + 'tbr': 928000, + 'timestamp': 1416493800, + 'upload_date': '20141120', + 'duration': 3960, + }, + 'params': { + 'skip_download': 'rtmpdump required', + } + } + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + + title = self._og_search_title(webpage) + description = self._html_search_regex( + r'(?s)<div class="video-item-agenda"[^>]*>(.*?)<', + webpage, 'description', fatal=False) + + player_params = compat_parse_qs(self._search_regex( + r' diff --git a/youtube_dl/extractor/foxgay.py b/youtube_dl/extractor/foxgay.py new file mode 100644 --- /dev/null +++ b/youtube_dl/extractor/foxgay.py +class FoxgayIE(InfoExtractor): + _VALID_URL = r'http://(?:www\.)?foxgay\.com/videos/(?:\S+-)?(?P<id>\d+)\.shtml' + _TEST = { + 'url': 'http://foxgay.com/videos/fuck-turkish-style-2582.shtml', + 'md5': '80d72beab5d04e1655a56ad37afe6841', + 'info_dict': { + 'id': '2582', + 'ext': 'mp4', + 'title': 'md5:6122f7ae0fc6b21ebdf59c5e083ce25a', + 'description': 'md5:5e51dc4405f1fd315f7927daed2ce5cf', + 'age_limit': 18, + 'thumbnail': 're:https?://.*\.jpg$', + }, + } + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + + title = self._html_search_regex( + r'<title>(?P<title>.*?)</title>', + webpage, 'title', fatal=False) + description = self._html_search_regex( + r'<div class="ico_desc"><h2>(?P<description>.*?)</h2>', + webpage, 'description', fatal=False) + + # Find the URL for the iFrame which contains the actual video. + iframe = self._download_webpage( + self._html_search_regex(r'iframe src="(?P<frame>.*?)"', webpage, 'video frame'), + video_id) + video_url = self._html_search_regex( + r"v_path = '(?P<vid>http://.*?)'", iframe, 'url') + thumb_url = self._html_search_regex( + r"t_path = '(?P<thumb>http://.*?)'", iframe, 'thumbnail', fatal=False) + + return { + 'id': video_id, + 'title': title, + 'url': video_url, + 'description': description, + 'thumbnail': thumb_url, + 'age_limit': 18, + } diff --git a/youtube_dl/extractor/foxnews.py b/youtube_dl/extractor/foxnews.py new file mode 100644 index 000000000..917f76b1e --- /dev/null +++ b/youtube_dl/extractor/foxnews.py @@ -0,0 +1,94 @@ +from __future__ import unicode_literals + +from .common import InfoExtractor +from ..utils import ( + parse_iso8601, + int_or_none, +) + + +class FoxNewsIE(InfoExtractor): + _VALID_URL = r'https?://video\.foxnews\.com/v/(?:video-embed\.html\?video_id=)?(?P<id>\d+)' + _TESTS = [ + { + 'url': 'http://video.foxnews.com/v/3937480/frozen-in-time/#sp=show-clips', + 'md5': '32aaded6ba3ef0d1c04e238d01031e5e', + 'info_dict': { + 'id': '3937480', + 'ext': 'flv', + 'title': 'Frozen in Time', + 'description': 'Doctors baffled by 16-year-old girl that is the size of a toddler', + 'duration': 265, + 'timestamp': 1304411491, + 'upload_date': '20110503', + 'thumbnail': 're:^https?://.*\.jpg$', + }, + }, + { + 'url': 'http://video.foxnews.com/v/3922535568001/rep-luis-gutierrez-on-if-obamas-immigration-plan-is-legal/#sp=show-clips', + 'md5': '5846c64a1ea05ec78175421b8323e2df', + 'info_dict': { + 'id': '3922535568001', + 'ext': 'mp4', + 'title': "Rep. Luis Gutierrez on if Obama's immigration plan is legal", + 'description': "Congressman discusses the president's executive action", + 'duration': 292, + 'timestamp': 1417662047, + 'upload_date': '20141204', + 'thumbnail': 're:^https?://.*\.jpg$', + }, + }, + { + 'url': 'http://video.foxnews.com/v/video-embed.html?video_id=3937480&d=video.foxnews.com', + 'only_matching': True, + }, + ] + + def _real_extract(self, url): + video_id = self._match_id(url) + + video = self._download_json( + 'http://video.foxnews.com/v/feed/video/%s.js?template=fox' % video_id, video_id) + + item = video['channel']['item'] + title = item['title'] + description = item['description'] + timestamp = parse_iso8601(item['dc-date']) + + media_group = item['media-group'] + duration = None + formats = [] + for media in media_group['media-content']: + attributes = media['@attributes'] + video_url = attributes['url'] + if video_url.endswith('.f4m'): + formats.extend(self._extract_f4m_formats(video_url + '?hdcore=3.4.0&plugin=aasp-3.4.0.132.124', video_id)) + elif video_url.endswith('.m3u8'): + formats.extend(self._extract_m3u8_formats(video_url, video_id, 'flv')) + elif not video_url.endswith('.smil'): + duration = int_or_none(attributes.get('duration')) + formats.append({ + 'url': video_url, + 'format_id': media['media-category']['@attributes']['label'], + 'preference': 1, + 'vbr': int_or_none(attributes.get('bitrate')), + 'filesize': int_or_none(attributes.get('fileSize')) + }) + self._sort_formats(formats) + + media_thumbnail = media_group['media-thumbnail']['@attributes'] + thumbnails = [{ + 'url': media_thumbnail['url'], + 'width': int_or_none(media_thumbnail.get('width')), + 'height': int_or_none(media_thumbnail.get('height')), + }] if media_thumbnail else [] + + return { + 'id': video_id, + 'title': title, + 'description': description,
+ 'duration': duration, + 'timestamp': timestamp, + 'formats': formats, + 'thumbnails': thumbnails, + } diff --git a/youtube_dl/extractor/franceculture.py b/youtube_dl/extractor/franceculture.py index 898e0dda7..0c2972162 100644 --- a/youtube_dl/extractor/franceculture.py +++ b/youtube_dl/extractor/franceculture.py @@ -5,7 +5,7 @@ import json import re from .common import InfoExtractor -from ..utils import ( +from ..compat import ( compat_parse_qs, compat_urlparse, ) diff --git a/youtube_dl/extractor/francetv.py b/youtube_dl/extractor/francetv.py index 0b3374d97..bbc760a49 100644 --- a/youtube_dl/extractor/francetv.py +++ b/youtube_dl/extractor/francetv.py @@ -6,13 +6,15 @@ import re import json from .common import InfoExtractor -from ..utils import ( +from ..compat import ( + compat_urllib_parse_urlparse, compat_urlparse, - ExtractorError, +) +from ..utils import ( clean_html, - parse_duration, - compat_urllib_parse_urlparse, + ExtractorError, int_or_none, + parse_duration, ) @@ -26,6 +28,19 @@ class FranceTVBaseInfoExtractor(InfoExtractor): if info.get('status') == 'NOK': raise ExtractorError( '%s returned error: %s' % (self.IE_NAME, info['message']), expected=True) + allowed_countries = info['videos'][0].get('geoblocage') + if allowed_countries: + georestricted = True + geo_info = self._download_json( + 'http://geo.francetv.fr/ws/edgescape.json', video_id, + 'Downloading geo restriction info') + country = geo_info['reponse']['geo_info']['country_code'] + if country not in allowed_countries: + raise ExtractorError( + 'The video is not available from your location', + expected=True) + else: + georestricted = False formats = [] for video in info['videos']: @@ -36,6 +51,10 @@ class FranceTVBaseInfoExtractor(InfoExtractor): continue format_id = video['format'] if video_url.endswith('.f4m'): + if georestricted: + # See https://github.com/rg3/youtube-dl/issues/3963 + # m3u8 urls work fine + continue video_url_parsed = compat_urllib_parse_urlparse(video_url) f4m_url = self._download_webpage( 'http://hdfauth.francetv.fr/esi/urltokengen2.html?url=%s' % video_url_parsed.path, @@ -46,7 +65,7 @@ class FranceTVBaseInfoExtractor(InfoExtractor): f4m_format['preference'] = 1 formats.extend(f4m_formats) elif video_url.endswith('.m3u8'): - formats.extend(self._extract_m3u8_formats(video_url, video_id)) + formats.extend(self._extract_m3u8_formats(video_url, video_id, 'mp4')) elif video_url.startswith('rtmp'): formats.append({ 'url': video_url, @@ -58,7 +77,7 @@ class FranceTVBaseInfoExtractor(InfoExtractor): formats.append({ 'url': video_url, 'format_id': format_id, - 'preference': 2, + 'preference': -1, }) self._sort_formats(formats) @@ -93,7 +112,6 @@ class FranceTvInfoIE(FranceTVBaseInfoExtractor): _TESTS = [{ 'url': 'http://www.francetvinfo.fr/replay-jt/france-3/soir-3/jt-grand-soir-3-lundi-26-aout-2013_393427.html', - 'md5': '9cecf35f99c4079c199e9817882a9a1c', 'info_dict': { 'id': '84981923', 'ext': 'flv', @@ -235,7 +253,7 @@ class GenerationQuoiIE(InfoExtractor): info_json = self._download_webpage(info_url, name) info = json.loads(info_json) return self.url_result('http://www.dailymotion.com/video/%s' % info['id'], - ie='Dailymotion') + ie='Dailymotion') class CultureboxIE(FranceTVBaseInfoExtractor): diff --git a/youtube_dl/extractor/freevideo.py b/youtube_dl/extractor/freevideo.py new file mode 100644 index 000000000..f755e3c4a --- /dev/null +++ b/youtube_dl/extractor/freevideo.py @@ -0,0 +1,38 @@ +from __future__ import unicode_literals + +from .common import InfoExtractor +from ..utils import 
ExtractorError + + class FreeVideoIE(InfoExtractor): + _VALID_URL = r'^http://www.freevideo.cz/vase-videa/(?P<id>[^.]+)\.html(?:$|[?#])' + + _TEST = { + 'url': 'http://www.freevideo.cz/vase-videa/vysukany-zadecek-22033.html', + 'info_dict': { + 'id': 'vysukany-zadecek-22033', + 'ext': 'mp4', + "title": "vysukany-zadecek-22033", + "age_limit": 18, + }, + 'skip': 'Blocked outside .cz', + } + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage, handle = self._download_webpage_handle(url, video_id) + if '//www.czechav.com/' in handle.geturl(): + raise ExtractorError( + 'Access to freevideo is blocked from your location', + expected=True) + + video_url = self._search_regex( + r'\s+url: "(http://[a-z0-9-]+.cdn.freevideo.cz/stream/.*?/video.mp4)"', + webpage, 'video URL') + + return { + 'id': video_id, + 'url': video_url, + 'title': video_id, + 'age_limit': 18, + } diff --git a/youtube_dl/extractor/funnyordie.py b/youtube_dl/extractor/funnyordie.py index d966e8403..a49fc1151 100644 --- a/youtube_dl/extractor/funnyordie.py +++ b/youtube_dl/extractor/funnyordie.py @@ -8,7 +8,7 @@ from ..utils import ExtractorError class FunnyOrDieIE(InfoExtractor): - _VALID_URL = r'https?://(?:www\.)?funnyordie\.com/(?P<type>embed|videos)/(?P<id>[0-9a-f]+)(?:$|[?#/])' + _VALID_URL = r'https?://(?:www\.)?funnyordie\.com/(?P<type>embed|articles|videos)/(?P<id>[0-9a-f]+)(?:$|[?#/])' _TESTS = [{ 'url': 'http://www.funnyordie.com/videos/0732f586d7/heart-shaped-box-literal-video-version', 'md5': 'bcd81e0c4f26189ee09be362ad6e6ba9', 'info_dict': { }, }, { 'url': 'http://www.funnyordie.com/embed/e402820827', - 'md5': '29f4c5e5a61ca39dfd7e8348a75d0aad', 'info_dict': { 'id': 'e402820827', 'ext': 'mp4', 'description': 'Please use this to sell something. www.jonlajoie.com', 'thumbnail': 're:^http:.*\.jpg$', }, + }, { + 'url': 'http://www.funnyordie.com/articles/ebf5e34fc8/10-hours-of-walking-in-nyc-as-a-man', + 'only_matching': True, }] def _real_extract(self, url): @@ -37,7 +39,7 @@ video_id = mobj.group('id') webpage = self._download_webpage(url, video_id) - links = re.findall(r' diff --git a/youtube_dl/extractor/gamespot.py b/youtube_dl/extractor/gamespot.py --- a/youtube_dl/extractor/gamespot.py +++ b/youtube_dl/extractor/gamespot.py - _VALID_URL = r'(?:http://)?(?:www\.)?gamespot\.com/.*-(?P<page_id>\d+)/?' + _VALID_URL = r'(?:http://)?(?:www\.)?gamespot\.com/.*-(?P<id>\d+)/?'
_TEST = { 'url': 'http://www.gamespot.com/videos/arma-3-community-guide-sitrep-i/2300-6410818/', 'md5': 'b2a30deaa8654fcccd43713a6b6a4825', @@ -26,10 +27,10 @@ class GameSpotIE(InfoExtractor): } def _real_extract(self, url): - mobj = re.match(self._VALID_URL, url) - page_id = mobj.group('page_id') + page_id = self._match_id(url) webpage = self._download_webpage(url, page_id) - data_video_json = self._search_regex(r'data-video=["\'](.*?)["\']', webpage, 'data video') + data_video_json = self._search_regex( + r'data-video=["\'](.*?)["\']', webpage, 'data video') data_video = json.loads(unescapeHTML(data_video_json)) # Transform the manifest url to a link to the mp4 files @@ -41,7 +42,8 @@ class GameSpotIE(InfoExtractor): http_path = f4m_path[1:].split('/', 1)[1] http_template = re.sub(QUALITIES_RE, r'%s', http_path) http_template = http_template.replace('.csmil/manifest.f4m', '') - http_template = compat_urlparse.urljoin('http://video.gamespotcdn.com/', http_template) + http_template = compat_urlparse.urljoin( + 'http://video.gamespotcdn.com/', http_template) formats = [] for q in qualities: formats.append({ @@ -52,8 +54,9 @@ class GameSpotIE(InfoExtractor): return { 'id': data_video['guid'], + 'display_id': page_id, 'title': compat_urllib_parse.unquote(data_video['title']), 'formats': formats, - 'description': get_meta_content('description', webpage), + 'description': self._html_search_meta('description', webpage), 'thumbnail': self._og_search_thumbnail(webpage), } diff --git a/youtube_dl/extractor/gdcvault.py b/youtube_dl/extractor/gdcvault.py index de14ae1fb..d453ec010 100644 --- a/youtube_dl/extractor/gdcvault.py +++ b/youtube_dl/extractor/gdcvault.py @@ -3,7 +3,7 @@ from __future__ import unicode_literals import re from .common import InfoExtractor -from ..utils import ( +from ..compat import ( compat_urllib_parse, compat_urllib_request, ) diff --git a/youtube_dl/extractor/generic.py b/youtube_dl/extractor/generic.py index dfc2ef4e7..2b4d8c62f 100644 --- a/youtube_dl/extractor/generic.py +++ b/youtube_dl/extractor/generic.py @@ -7,11 +7,12 @@ import re from .common import InfoExtractor from .youtube import YoutubeIE -from ..utils import ( +from ..compat import ( compat_urllib_parse, compat_urlparse, compat_xml_parse_error, - +) +from ..utils import ( determine_ext, ExtractorError, float_or_none, @@ -28,6 +29,7 @@ from .brightcove import BrightcoveIE from .ooyala import OoyalaIE from .rutv import RUTVIE from .smotri import SmotriIE +from .condenast import CondeNastIE class GenericIE(InfoExtractor): @@ -98,6 +100,22 @@ class GenericIE(InfoExtractor): 'uploader': 'Championat', }, }, + { + # https://github.com/rg3/youtube-dl/issues/3541 + 'add_ie': ['Brightcove'], + 'url': 'http://www.kijk.nl/sbs6/leermijvrouwenkennen/videos/jqMiXKAYan2S/aflevering-1', + 'info_dict': { + 'id': '3866516442001', + 'ext': 'mp4', + 'title': 'Leer mij vrouwen kennen: Aflevering 1', + 'description': 'Leer mij vrouwen kennen: Aflevering 1', + 'uploader': 'SBS Broadcasting', + }, + 'skip': 'Restricted to Netherlands', + 'params': { + 'skip_download': True, # m3u8 download + }, + }, # Direct link to a video { 'url': 'http://media.w3.org/2010/05/sintel/trailer.mp4', @@ -324,7 +342,7 @@ class GenericIE(InfoExtractor): 'ext': 'mp4', 'age_limit': 18, 'uploader': 'www.handjobhub.com', - 'title': 'Busty Blonde Siri Tit Fuck While Wank at Handjob Hub', + 'title': 'Busty Blonde Siri Tit Fuck While Wank at HandjobHub.com', } }, # RSS feed @@ -379,6 +397,87 @@ class GenericIE(InfoExtractor): 'uploader': 
'education-portal.com', }, }, + { + 'url': 'http://thoughtworks.wistia.com/medias/uxjb0lwrcz', + 'md5': 'baf49c2baa8a7de5f3fc145a8506dcd4', + 'info_dict': { + 'id': 'uxjb0lwrcz', + 'ext': 'mp4', + 'title': 'Conversation about Hexagonal Rails Part 1 - ThoughtWorks', + 'duration': 1715.0, + 'uploader': 'thoughtworks.wistia.com', + }, + }, + # Direct download with broken HEAD + { + 'url': 'http://ai-radio.org:8000/radio.opus', + 'info_dict': { + 'id': 'radio', + 'ext': 'opus', + 'title': 'radio', + }, + 'params': { + 'skip_download': True, # infinite live stream + }, + 'expected_warnings': [ + r'501.*Not Implemented' + ], + }, + # Soundcloud embed + { + 'url': 'http://nakedsecurity.sophos.com/2014/10/29/sscc-171-are-you-sure-that-1234-is-a-bad-password-podcast/', + 'info_dict': { + 'id': '174391317', + 'ext': 'mp3', + 'description': 'md5:ff867d6b555488ad3c52572bb33d432c', + 'uploader': 'Sophos Security', + 'title': 'Chet Chat 171 - Oct 29, 2014', + 'upload_date': '20141029', + } + }, + # Livestream embed + { + 'url': 'http://www.esa.int/Our_Activities/Space_Science/Rosetta/Philae_comet_touch-down_webcast', + 'info_dict': { + 'id': '67864563', + 'ext': 'flv', + 'upload_date': '20141112', + 'title': 'Rosetta #CometLanding webcast HL 10', + } + }, + # LazyYT + { + 'url': 'http://discourse.ubuntu.com/t/unity-8-desktop-mode-windows-on-mir/1986', + 'info_dict': { + 'title': 'Unity 8 desktop-mode windows on Mir! - Ubuntu Discourse', + }, + 'playlist_mincount': 2, + }, + # Direct link with incorrect MIME type + { + 'url': 'http://ftp.nluug.nl/video/nluug/2014-11-20_nj14/zaal-2/5_Lennart_Poettering_-_Systemd.webm', + 'md5': '4ccbebe5f36706d85221f204d7eb5913', + 'info_dict': { + 'url': 'http://ftp.nluug.nl/video/nluug/2014-11-20_nj14/zaal-2/5_Lennart_Poettering_-_Systemd.webm', + 'id': '5_Lennart_Poettering_-_Systemd', + 'ext': 'webm', + 'title': '5_Lennart_Poettering_-_Systemd', + 'upload_date': '20141120', + }, + 'expected_warnings': [ + 'URL could be a direct video link, returning it as such.' + ] + }, + # Cinchcast embed + { + 'url': 'http://undergroundwellness.com/podcasts/306-5-steps-to-permanent-gut-healing/', + 'info_dict': { + 'id': '7141703', + 'ext': 'mp3', + 'upload_date': '20141126', + 'title': 'Jack Tips: 5 Steps to Permanent Gut Healing', + } + }, ] def report_following_redirect(self, new_url): @@ -471,11 +570,12 @@ class GenericIE(InfoExtractor): if default_search in ('error', 'fixup_error'): raise ExtractorError( - ('%r is not a valid URL. ' - 'Set --default-search "ytsearch" (or run youtube-dl "ytsearch:%s" ) to search YouTube' - ) % (url, url), expected=True) + '%r is not a valid URL. 
' 'Set --default-search "ytsearch" (or run youtube-dl "ytsearch:%s" ) to search YouTube' % (url, url), expected=True) else: - assert ':' in default_search + if ':' not in default_search: + default_search += ':' return self.url_result(default_search + url) url, smuggled_data = unsmuggle_url(url) @@ -490,14 +590,14 @@ self.to_screen('%s: Requesting header' % video_id) head_req = HEADRequest(url) - response = self._request_webpage( + head_response = self._request_webpage( head_req, video_id, note=False, errnote='Could not send HEAD request to %s' % url, fatal=False) - if response is not False: + if head_response is not False: # Check for redirect - new_url = response.geturl() + new_url = head_response.geturl() if url != new_url: self.report_following_redirect(new_url) if force_videoid: @@ -505,33 +605,53 @@ new_url, {'force_videoid': force_videoid}) return self.url_result(new_url) - # Check for direct link to a video - content_type = response.headers.get('Content-Type', '') - m = re.match(r'^(?P<type>audio|video|application(?=/ogg$))/(?P<format_id>.+)$', content_type) - if m: - upload_date = response.headers.get('Last-Modified') - if upload_date: - upload_date = unified_strdate(upload_date) - return { - 'id': video_id, - 'title': os.path.splitext(url_basename(url))[0], - 'formats': [{ - 'format_id': m.group('format_id'), - 'url': url, - 'vcodec': 'none' if m.group('type') == 'audio' else None - }], - 'upload_date': upload_date, - } + full_response = None + if head_response is False: + full_response = self._request_webpage(url, video_id) + head_response = full_response + + # Check for direct link to a video + content_type = head_response.headers.get('Content-Type', '') + m = re.match(r'^(?P<type>audio|video|application(?=/ogg$))/(?P<format_id>.+)$', content_type) + if m: + upload_date = unified_strdate( + head_response.headers.get('Last-Modified')) + return { + 'id': video_id, + 'title': os.path.splitext(url_basename(url))[0], + 'direct': True, + 'formats': [{ + 'format_id': m.group('format_id'), + 'url': url, + 'vcodec': 'none' if m.group('type') == 'audio' else None + }], + 'upload_date': upload_date, + } if not self._downloader.params.get('test', False) and not is_intentional: self._downloader.report_warning('Falling back on generic information extractor.') - try: - webpage = self._download_webpage(url, video_id) - except ValueError: - # since this is the last-resort InfoExtractor, if - # this error is thrown, it'll be thrown here - raise ExtractorError('Failed to download URL: %s' % url) + if not full_response: + full_response = self._request_webpage(url, video_id) + + # Maybe it's a direct link to a video? + # Be careful not to download the whole thing!
+ first_bytes = full_response.read(512) + if not re.match(r'^\s*<', first_bytes.decode('utf-8', 'replace')): + self._downloader.report_warning( + 'URL could be a direct video link, returning it as such.') + upload_date = unified_strdate( + head_response.headers.get('Last-Modified')) + return { + 'id': video_id, + 'title': os.path.splitext(url_basename(url))[0], + 'direct': True, + 'url': url, + 'upload_date': upload_date, + } + + webpage = self._webpage_read_content( + full_response, url, video_id, prefix=first_bytes) self.report_extraction(video_id) @@ -608,13 +728,13 @@ if mobj: player_url = unescapeHTML(mobj.group('url')) surl = smuggle_url(player_url, {'Referer': url}) - return self.url_result(surl, 'Vimeo') + return self.url_result(surl) # Look for embedded (swf embed) Vimeo player mobj = re.search( - r'<embed[^>]+?src="(https?://(?:www\.)?vimeo\.com/moogaloop\.swf.+?)"', webpage) + r'<embed[^>]+?src="((?:https?:)?//(?:www\.)?vimeo\.com/moogaloop\.swf.+?)"', webpage) if mobj: - return self.url_result(mobj.group(1), 'Vimeo') + return self.url_result(mobj.group(1)) # Look for embedded YouTube player matches = re.findall(r'''(?x) (?: <iframe[^>]+?src=| data-video-url=| <embed[^>]+?src=| - embedSWF\(?:\s* + embedSWF\(?:\s*| + new\s+SWFObject\( ) (["\']) (?P<url>(?:https?:)?//(?:www\.)?youtube(?:-nocookie)?\.com/ (?:embed|v|p)/.+?) \1''', webpage) return _playlist_from_matches( matches, lambda m: unescapeHTML(m[1])) + # Look for lazyYT YouTube embed + matches = re.findall( + r'class="lazyYT" data-youtube-id="([^"]+)"', webpage) + if matches: + return _playlist_from_matches(matches, lambda m: unescapeHTML(m)) + # Look for embedded Dailymotion player matches = re.findall( r'<iframe[^>]+?src=(["\'])(?P<url>(?:https?:)?//(?:www\.)?dailymotion\.com/embed/video/.+?)\1', webpage) @@ -651,17 +778,20 @@ # Look for embedded Wistia player match = re.search( - r'<iframe[^>]+?src=(["\'])(?P<url>(?:https?:)?//(?:fast\.)?wistia\.net/embed/iframe/.+?)\1', webpage) + r'<(?:meta[^>]+?content|iframe[^>]+?src)=(["\'])(?P<url>(?:https?:)?//(?:fast\.)?wistia\.net/embed/iframe/.+?)\1', webpage) if match: + embed_url = self._proto_relative_url( + unescapeHTML(match.group('url'))) return { '_type': 'url_transparent', - 'url': unescapeHTML(match.group('url')), + 'url': embed_url, 'ie_key': 'Wistia', 'uploader': video_uploader, 'title': video_title, 'id': video_id, } - match = re.search(r'(?:id=["\']wistia_|data-wistiaid=["\']|Wistia\.embed\(["\'])(?P<id>[^"\']+)', webpage) + + match = re.search(r'(?:id=["\']wistia_|data-wistia-?id=["\']|Wistia\.embed\(["\'])(?P<id>[^"\']+)', webpage) if match: return { '_type': 'url_transparent', @@ -675,7 +805,7 @@ # Look for embedded blip.tv player mobj = re.search(r'<meta\s[^>]*https?://api\.blip\.tv/\w+/redirect/\w+/(\d+)', webpage) if mobj: - return self.url_result('http://blip.tv/a/a-'+mobj.group(1), 'BlipTV') + return self.url_result('http://blip.tv/a/a-' + mobj.group(1), 'BlipTV') mobj = re.search(r'<(?:iframe|embed|object)\s[^>]*(https?://(?:\w+\.)?blip\.tv/(?:play/|api\.swf#)[a-zA-Z0-9_]+)', webpage) if mobj: return self.url_result(mobj.group(1), 'BlipTV') @@ -711,7 +841,7 @@ # Look for Ooyala videos mobj = (re.search(r'player.ooyala.com/[^"?]+\?[^"]*?(?:embedCode|ec)=(?P<ec>[^"&]+)', webpage) or - re.search(r'OO.Player.create\([\'"].*?[\'"],\s*[\'"](?P<ec>.{32})[\'"]', webpage)) + re.search(r'OO.Player.create\([\'"].*?[\'"],\s*[\'"](?P<ec>.{32})[\'"]', webpage)) if mobj is not None:
return OoyalaIE._build_url_result(mobj.group('ec')) @@ -805,7 +935,7 @@ class GenericIE(InfoExtractor): # Look for embeded soundcloud player mobj = re.search( - r'