Mirror of https://github.com/yt-dlp/yt-dlp.git (synced 2025-03-09 12:50:23 -05:00)

Commit 25b921b02c: Merge with 'upstream/master'
170 changed files with 11571 additions and 5069 deletions

.github/ISSUE_TEMPLATE/1_broken_site.yml (vendored, 18 changed lines)
@@ -1,5 +1,5 @@
 name: Broken site
-description: Report broken or misfunctioning site
+description: Report error in a supported site
 labels: [triage, site-bug]
 body:
   - type: checkboxes
@@ -7,7 +7,7 @@ body:
       label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
       description: Fill all fields even if you think it is irrelevant for the issue
       options:
-        - label: I understand that I will be **blocked** if I remove or skip any mandatory\* field
+        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
           required: true
   - type: checkboxes
     id: checklist
@@ -16,15 +16,15 @@ body:
       description: |
         Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
       options:
-        - label: I'm reporting a broken site
+        - label: I'm reporting that a **supported** site is broken
           required: true
-        - label: I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+        - label: I've verified that I'm running yt-dlp version **2023.03.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
         - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
           required: true
         - label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
           required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
           required: true
         - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
           required: true
@@ -50,6 +50,8 @@ body:
       options:
         - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
           required: true
+        - label: "If using API, add `'verbose': True` to `YoutubeDL` params instead"
+          required: false
         - label: Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
           required: true
   - type: textarea
@@ -62,7 +64,7 @@ body:
         [debug] Command-line config: ['-vU', 'test:youtube']
         [debug] Portable config "yt-dlp.conf": ['-i']
         [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version 2023.01.06 [9d339c4] (win32_exe)
+        [debug] yt-dlp version 2023.03.04 [9d339c4] (win32_exe)
         [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
         [debug] Checking exe version: ffmpeg -bsfs
         [debug] Checking exe version: ffprobe -bsfs
@@ -70,8 +72,8 @@
         [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
         [debug] Proxy map: {}
         [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
-        Latest version: 2023.01.06, Current version: 2023.01.06
-        yt-dlp is up to date (2023.01.06)
+        Latest version: 2023.03.04, Current version: 2023.03.04
+        yt-dlp is up to date (2023.03.04)
         <more lines>
       render: shell
     validations:
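Every template in this commit follows the same GitHub issue-forms schema visible in the hunks above: a body list of blocks, where each checklist is a checkboxes block whose options carry a Markdown label and an optional required flag. A minimal sketch of that recurring shape, with values taken from this file:

  - type: checkboxes
    id: checklist
    attributes:
      options:
        - label: I'm reporting that a **supported** site is broken
          required: true
        - label: "If using API, add `'verbose': True` to `YoutubeDL` params instead"
          required: false

The same structure repeats in each of the template diffs below, which is why the edits are nearly identical across files.
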
.github/ISSUE_TEMPLATE/2_site_support_request.yml (vendored)

@@ -7,7 +7,7 @@ body:
       label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
       description: Fill all fields even if you think it is irrelevant for the issue
       options:
-        - label: I understand that I will be **blocked** if I remove or skip any mandatory\* field
+        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
           required: true
   - type: checkboxes
     id: checklist
@@ -18,13 +18,13 @@ body:
       options:
         - label: I'm reporting a new site support request
           required: true
-        - label: I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+        - label: I've verified that I'm running yt-dlp version **2023.03.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
         - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
           required: true
         - label: I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
           required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
           required: true
         - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
           required: true
@@ -62,6 +62,8 @@ body:
       options:
         - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
           required: true
+        - label: "If using API, add `'verbose': True` to `YoutubeDL` params instead"
+          required: false
         - label: Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
           required: true
   - type: textarea
@@ -74,7 +76,7 @@ body:
         [debug] Command-line config: ['-vU', 'test:youtube']
         [debug] Portable config "yt-dlp.conf": ['-i']
         [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version 2023.01.06 [9d339c4] (win32_exe)
+        [debug] yt-dlp version 2023.03.04 [9d339c4] (win32_exe)
         [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
         [debug] Checking exe version: ffmpeg -bsfs
         [debug] Checking exe version: ffprobe -bsfs
@@ -82,8 +84,8 @@
         [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
         [debug] Proxy map: {}
         [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
-        Latest version: 2023.01.06, Current version: 2023.01.06
-        yt-dlp is up to date (2023.01.06)
+        Latest version: 2023.03.04, Current version: 2023.03.04
+        yt-dlp is up to date (2023.03.04)
         <more lines>
       render: shell
     validations:
.github/ISSUE_TEMPLATE/3_site_feature_request.yml (vendored)

@@ -7,7 +7,7 @@ body:
       label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
       description: Fill all fields even if you think it is irrelevant for the issue
       options:
-        - label: I understand that I will be **blocked** if I remove or skip any mandatory\* field
+        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
           required: true
   - type: checkboxes
     id: checklist
@@ -18,11 +18,11 @@ body:
       options:
         - label: I'm requesting a site-specific feature
           required: true
-        - label: I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+        - label: I've verified that I'm running yt-dlp version **2023.03.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
         - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
           required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
           required: true
         - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
           required: true
@@ -58,6 +58,8 @@ body:
       options:
         - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
           required: true
+        - label: "If using API, add `'verbose': True` to `YoutubeDL` params instead"
+          required: false
         - label: Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
           required: true
   - type: textarea
@@ -70,7 +72,7 @@ body:
         [debug] Command-line config: ['-vU', 'test:youtube']
         [debug] Portable config "yt-dlp.conf": ['-i']
         [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version 2023.01.06 [9d339c4] (win32_exe)
+        [debug] yt-dlp version 2023.03.04 [9d339c4] (win32_exe)
         [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
         [debug] Checking exe version: ffmpeg -bsfs
         [debug] Checking exe version: ffprobe -bsfs
@@ -78,8 +80,8 @@
         [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
         [debug] Proxy map: {}
         [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
-        Latest version: 2023.01.06, Current version: 2023.01.06
-        yt-dlp is up to date (2023.01.06)
+        Latest version: 2023.03.04, Current version: 2023.03.04
+        yt-dlp is up to date (2023.03.04)
         <more lines>
       render: shell
     validations:
.github/ISSUE_TEMPLATE/4_bug_report.yml (vendored, 14 changed lines)
@@ -7,7 +7,7 @@ body:
       label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
       description: Fill all fields even if you think it is irrelevant for the issue
       options:
-        - label: I understand that I will be **blocked** if I remove or skip any mandatory\* field
+        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
           required: true
   - type: checkboxes
     id: checklist
@@ -18,13 +18,13 @@ body:
       options:
         - label: I'm reporting a bug unrelated to a specific site
           required: true
-        - label: I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+        - label: I've verified that I'm running yt-dlp version **2023.03.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
         - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
           required: true
         - label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
           required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
           required: true
         - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
           required: true
@@ -43,6 +43,8 @@ body:
       options:
         - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
           required: true
+        - label: "If using API, add `'verbose': True` to `YoutubeDL` params instead"
+          required: false
         - label: Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
           required: true
   - type: textarea
@@ -55,7 +57,7 @@ body:
         [debug] Command-line config: ['-vU', 'test:youtube']
         [debug] Portable config "yt-dlp.conf": ['-i']
         [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version 2023.01.06 [9d339c4] (win32_exe)
+        [debug] yt-dlp version 2023.03.04 [9d339c4] (win32_exe)
         [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
         [debug] Checking exe version: ffmpeg -bsfs
         [debug] Checking exe version: ffprobe -bsfs
@@ -63,8 +65,8 @@
         [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
         [debug] Proxy map: {}
         [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
-        Latest version: 2023.01.06, Current version: 2023.01.06
-        yt-dlp is up to date (2023.01.06)
+        Latest version: 2023.03.04, Current version: 2023.03.04
+        yt-dlp is up to date (2023.03.04)
         <more lines>
       render: shell
     validations:
.github/ISSUE_TEMPLATE/5_feature_request.yml (vendored, 14 changed lines)
@@ -7,7 +7,7 @@ body:
       label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
       description: Fill all fields even if you think it is irrelevant for the issue
       options:
-        - label: I understand that I will be **blocked** if I remove or skip any mandatory\* field
+        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
           required: true
   - type: checkboxes
     id: checklist
@@ -20,9 +20,9 @@ body:
           required: true
         - label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
           required: true
-        - label: I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+        - label: I've verified that I'm running yt-dlp version **2023.03.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
           required: true
         - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
           required: true
@@ -40,6 +40,8 @@ body:
       label: Provide verbose output that clearly demonstrates the problem
       options:
         - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
+        - label: "If using API, add `'verbose': True` to `YoutubeDL` params instead"
+          required: false
         - label: Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
   - type: textarea
     id: log
@@ -51,7 +53,7 @@ body:
         [debug] Command-line config: ['-vU', 'test:youtube']
         [debug] Portable config "yt-dlp.conf": ['-i']
         [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version 2023.01.06 [9d339c4] (win32_exe)
+        [debug] yt-dlp version 2023.03.04 [9d339c4] (win32_exe)
         [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
         [debug] Checking exe version: ffmpeg -bsfs
         [debug] Checking exe version: ffprobe -bsfs
@@ -59,7 +61,7 @@
         [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
         [debug] Proxy map: {}
         [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
-        Latest version: 2023.01.06, Current version: 2023.01.06
-        yt-dlp is up to date (2023.01.06)
+        Latest version: 2023.03.04, Current version: 2023.03.04
+        yt-dlp is up to date (2023.03.04)
         <more lines>
       render: shell
.github/ISSUE_TEMPLATE/6_question.yml (vendored, 14 changed lines)
@@ -7,7 +7,7 @@ body:
       label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
       description: Fill all fields even if you think it is irrelevant for the issue
       options:
-        - label: I understand that I will be **blocked** if I remove or skip any mandatory\* field
+        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
           required: true
   - type: markdown
     attributes:
@@ -26,9 +26,9 @@ body:
           required: true
         - label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
           required: true
-        - label: I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+        - label: I've verified that I'm running yt-dlp version **2023.03.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
           required: true
         - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
           required: true
@@ -46,6 +46,8 @@ body:
       label: Provide verbose output that clearly demonstrates the problem
       options:
         - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
+        - label: "If using API, add `'verbose': True` to `YoutubeDL` params instead"
+          required: false
         - label: Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
   - type: textarea
     id: log
@@ -57,7 +59,7 @@ body:
         [debug] Command-line config: ['-vU', 'test:youtube']
         [debug] Portable config "yt-dlp.conf": ['-i']
         [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version 2023.01.06 [9d339c4] (win32_exe)
+        [debug] yt-dlp version 2023.03.04 [9d339c4] (win32_exe)
         [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
         [debug] Checking exe version: ffmpeg -bsfs
         [debug] Checking exe version: ffprobe -bsfs
@@ -65,7 +67,7 @@
         [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
         [debug] Proxy map: {}
         [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
-        Latest version: 2023.01.06, Current version: 2023.01.06
-        yt-dlp is up to date (2023.01.06)
+        Latest version: 2023.03.04, Current version: 2023.03.04
+        yt-dlp is up to date (2023.03.04)
         <more lines>
       render: shell
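The remaining template hunks touch .github/ISSUE_TEMPLATE_tmpl/, the sources from which the rendered templates above are generated (the `make issuetemplates` step, still visible in the old workflow further down, performs the substitution). The sources carry printf-style placeholders instead of concrete values; the correspondence, taken directly from the hunks above and below:

  # source (.github/ISSUE_TEMPLATE_tmpl/*.yml)
  - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
  # rendered (.github/ISSUE_TEMPLATE/*.yml)
  - label: I've verified that I'm running yt-dlp version **2023.03.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
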
.github/ISSUE_TEMPLATE_tmpl/1_broken_site.yml (vendored)

@@ -1,5 +1,5 @@
 name: Broken site
-description: Report broken or misfunctioning site
+description: Report error in a supported site
 labels: [triage, site-bug]
 body:
   %(no_skip)s
@@ -10,7 +10,7 @@ body:
       description: |
         Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
       options:
-        - label: I'm reporting a broken site
+        - label: I'm reporting that a **supported** site is broken
           required: true
         - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
@@ -18,7 +18,7 @@ body:
           required: true
         - label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
           required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
           required: true
         - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
           required: true
.github/ISSUE_TEMPLATE_tmpl/2_site_support_request.yml (vendored)

@@ -18,7 +18,7 @@ body:
           required: true
         - label: I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
           required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
           required: true
         - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
           required: true
.github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.yml (vendored)

@@ -16,7 +16,7 @@
           required: true
         - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
           required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
           required: true
         - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
           required: true
.github/ISSUE_TEMPLATE_tmpl/4_bug_report.yml (vendored, 2 changed lines)
@@ -18,7 +18,7 @@ body:
           required: true
         - label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
           required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
           required: true
         - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
           required: true
.github/ISSUE_TEMPLATE_tmpl/5_feature_request.yml (vendored)

@@ -16,7 +16,7 @@
           required: true
         - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
           required: true
         - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
           required: true
.github/ISSUE_TEMPLATE_tmpl/6_question.yml (vendored, 2 changed lines)
@@ -22,7 +22,7 @@ body:
           required: true
         - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
-        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
           required: true
         - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
           required: true
.github/PULL_REQUEST_TEMPLATE.md (vendored, 2 changed lines)
@@ -30,7 +30,7 @@ ### Before submitting a *pull request* make sure you have:
 - [ ] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
 - [ ] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) and [ran relevant tests](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions)
 
-### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
+### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check all of the following options that apply:
 - [ ] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
 - [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
 
.github/workflows/build.yml (vendored, 567 changed lines)
@@ -1,393 +1,356 @@
-name: Build
-on: workflow_dispatch
+name: Build Artifacts
+on:
+  workflow_call:
+    inputs:
+      version:
+        required: true
+        type: string
+      channel:
+        required: false
+        default: stable
+        type: string
+      unix:
+        default: true
+        type: boolean
+      linux_arm:
+        default: true
+        type: boolean
+      macos:
+        default: true
+        type: boolean
+      macos_legacy:
+        default: true
+        type: boolean
+      windows:
+        default: true
+        type: boolean
+      windows32:
+        default: true
+        type: boolean
+      meta_files:
+        default: true
+        type: boolean
+    secrets:
+      GPG_SIGNING_KEY:
+        required: false
+
+  workflow_dispatch:
+    inputs:
+      version:
+        description: Version tag (YYYY.MM.DD[.REV])
+        required: true
+        type: string
+      channel:
+        description: Update channel (stable/nightly)
+        required: true
+        default: stable
+        type: string
+      unix:
+        description: yt-dlp, yt-dlp.tar.gz, yt-dlp_linux, yt-dlp_linux.zip
+        default: true
+        type: boolean
+      linux_arm:
+        description: yt-dlp_linux_aarch64, yt-dlp_linux_armv7l
+        default: true
+        type: boolean
+      macos:
+        description: yt-dlp_macos, yt-dlp_macos.zip
+        default: true
+        type: boolean
+      macos_legacy:
+        description: yt-dlp_macos_legacy
+        default: true
+        type: boolean
+      windows:
+        description: yt-dlp.exe, yt-dlp_min.exe, yt-dlp_win.zip
+        default: true
+        type: boolean
+      windows32:
+        description: yt-dlp_x86.exe
+        default: true
+        type: boolean
+      meta_files:
+        description: SHA2-256SUMS, SHA2-512SUMS, _update_spec
+        default: true
+        type: boolean
 
 permissions:
   contents: read
 
 jobs:
-  prepare:
-    permissions:
-      contents: write # for push_release
+  unix:
+    if: inputs.unix
     runs-on: ubuntu-latest
-    outputs:
-      version_suffix: ${{ steps.version_suffix.outputs.version_suffix }}
-      ytdlp_version: ${{ steps.bump_version.outputs.ytdlp_version }}
-      head_sha: ${{ steps.push_release.outputs.head_sha }}
     steps:
       - uses: actions/checkout@v3
-        with:
-          fetch-depth: 0
-      - uses: actions/setup-python@v4
-        with:
-          python-version: '3.10'
-
-      - name: Set version suffix
-        id: version_suffix
-        env:
-          PUSH_VERSION_COMMIT: ${{ secrets.PUSH_VERSION_COMMIT }}
-        if: "env.PUSH_VERSION_COMMIT == ''"
-        run: echo "version_suffix=$(date -u +"%H%M%S")" >> "$GITHUB_OUTPUT"
-      - name: Bump version
-        id: bump_version
-        run: |
-          python devscripts/update-version.py ${{ steps.version_suffix.outputs.version_suffix }}
-          make issuetemplates
-
-      - name: Push to release
-        id: push_release
-        run: |
-          git config --global user.name github-actions
-          git config --global user.email github-actions@example.com
-          git add -u
-          git commit -m "[version] update" -m "Created by: ${{ github.event.sender.login }}" -m ":ci skip all :ci run dl"
-          git push origin --force ${{ github.event.ref }}:release
-          echo "head_sha=$(git rev-parse HEAD)" >> "$GITHUB_OUTPUT"
-      - name: Update master
-        env:
-          PUSH_VERSION_COMMIT: ${{ secrets.PUSH_VERSION_COMMIT }}
-        if: "env.PUSH_VERSION_COMMIT != ''"
-        run: git push origin ${{ github.event.ref }}
-
-
-  build_unix:
-    needs: prepare
-    runs-on: ubuntu-latest
-
-    steps:
-      - uses: actions/checkout@v3
-      - uses: actions/setup-python@v4
-        with:
-          python-version: '3.10'
-      - uses: conda-incubator/setup-miniconda@v2
-        with:
+      - uses: actions/setup-python@v4
+        with:
+          python-version: "3.10"
+      - uses: conda-incubator/setup-miniconda@v2
+        with:
           miniforge-variant: Mambaforge
           use-mamba: true
           channels: conda-forge
           auto-update-conda: true
-          activate-environment: ''
+          activate-environment: ""
           auto-activate-base: false
       - name: Install Requirements
         run: |
           sudo apt-get -y install zip pandoc man sed
-          python -m pip install -U pip setuptools wheel twine
+          python -m pip install -U pip setuptools wheel
           python -m pip install -U Pyinstaller -r requirements.txt
           reqs=$(mktemp)
-          echo -e 'python=3.10.*\npyinstaller' >$reqs
-          sed 's/^brotli.*/brotli-python/' <requirements.txt >>$reqs
+          cat > $reqs << EOF
+          python=3.10.*
+          pyinstaller
+          cffi
+          brotli-python
+          EOF
+          sed '/^brotli.*/d' requirements.txt >> $reqs
           mamba create -n build --file $reqs
 
       - name: Prepare
         run: |
-          python devscripts/update-version.py ${{ needs.prepare.outputs.version_suffix }}
+          python devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
           python devscripts/make_lazy_extractors.py
       - name: Build Unix platform-independent binary
         run: |
           make all tar
       - name: Build Unix standalone binary
         shell: bash -l {0}
         run: |
           unset LD_LIBRARY_PATH # Harmful; set by setup-python
           conda activate build
           python pyinst.py --onedir
           (cd ./dist/yt-dlp_linux && zip -r ../yt-dlp_linux.zip .)
           python pyinst.py
+          mv ./dist/yt-dlp_linux ./yt-dlp_linux
+          mv ./dist/yt-dlp_linux.zip ./yt-dlp_linux.zip
 
       - name: Upload artifacts
         uses: actions/upload-artifact@v3
         with:
           path: |
             yt-dlp
             yt-dlp.tar.gz
-            dist/yt-dlp_linux
-            dist/yt-dlp_linux.zip
+            yt-dlp_linux
+            yt-dlp_linux.zip
 
-      - name: Build and publish on PyPi
-        env:
-          TWINE_USERNAME: __token__
-          TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
-        if: "env.TWINE_PASSWORD != ''"
-        run: |
-          rm -rf dist/*
-          python devscripts/set-variant.py pip -M "You installed yt-dlp with pip or using the wheel from PyPi; Use that to update"
-          python setup.py sdist bdist_wheel
-          twine upload dist/*
-
-      - name: Install SSH private key for Homebrew
-        env:
-          BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
-        if: "env.BREW_TOKEN != ''"
-        uses: yt-dlp/ssh-agent@v0.5.3
-        with:
-          ssh-private-key: ${{ env.BREW_TOKEN }}
-      - name: Update Homebrew Formulae
-        env:
-          BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
-        if: "env.BREW_TOKEN != ''"
-        run: |
-          git clone git@github.com:yt-dlp/homebrew-taps taps/
-          python devscripts/update-formulae.py taps/Formula/yt-dlp.rb "${{ needs.prepare.outputs.ytdlp_version }}"
-          git -C taps/ config user.name github-actions
-          git -C taps/ config user.email github-actions@example.com
-          git -C taps/ commit -am 'yt-dlp: ${{ needs.prepare.outputs.ytdlp_version }}'
-          git -C taps/ push
-
-
-  build_linux_arm:
+  linux_arm:
+    if: inputs.linux_arm
     permissions:
-      packages: write # for Creating cache
+      contents: read
+      packages: write # for creating cache
     runs-on: ubuntu-latest
-    needs: prepare
     strategy:
       matrix:
         architecture:
           - armv7
           - aarch64
 
     steps:
       - uses: actions/checkout@v3
         with:
           path: ./repo
       - name: Virtualized Install, Prepare & Build
         uses: yt-dlp/run-on-arch-action@v2
         with:
-          githubToken: ${{ github.token }} # To cache image
-          arch: ${{ matrix.architecture }}
-          distro: ubuntu18.04 # Standalone executable should be built on minimum supported OS
-          dockerRunArgs: --volume "${PWD}/repo:/repo"
-          install: | # Installing Python 3.10 from the Deadsnakes repo raises errors
-            apt update
-            apt -y install zlib1g-dev python3.8 python3.8-dev python3.8-distutils python3-pip
-            python3.8 -m pip install -U pip setuptools wheel
-            # Cannot access requirements.txt from the repo directory at this stage
-            python3.8 -m pip install -U Pyinstaller mutagen pycryptodomex websockets brotli certifi
+          # Ref: https://github.com/uraimo/run-on-arch-action/issues/55
+          env: |
+            GITHUB_WORKFLOW: build
+          githubToken: ${{ github.token }} # To cache image
+          arch: ${{ matrix.architecture }}
+          distro: ubuntu18.04 # Standalone executable should be built on minimum supported OS
+          dockerRunArgs: --volume "${PWD}/repo:/repo"
+          install: | # Installing Python 3.10 from the Deadsnakes repo raises errors
+            apt update
+            apt -y install zlib1g-dev python3.8 python3.8-dev python3.8-distutils python3-pip
+            python3.8 -m pip install -U pip setuptools wheel
+            # Cannot access requirements.txt from the repo directory at this stage
+            python3.8 -m pip install -U Pyinstaller mutagen pycryptodomex websockets brotli certifi
 
           run: |
             cd repo
             python3.8 -m pip install -U Pyinstaller -r requirements.txt # Cached version may be out of date
-            python3.8 devscripts/update-version.py ${{ needs.prepare.outputs.version_suffix }}
+            python3.8 devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
             python3.8 devscripts/make_lazy_extractors.py
             python3.8 pyinst.py
 
       - name: Upload artifacts
         uses: actions/upload-artifact@v3
         with:
           path: | # run-on-arch-action designates armv7l as armv7
             repo/dist/yt-dlp_linux_${{ (matrix.architecture == 'armv7' && 'armv7l') || matrix.architecture }}
 
|
macos:
|
||||||
build_macos:
|
if: inputs.macos
|
||||||
runs-on: macos-11
|
runs-on: macos-11
|
||||||
needs: prepare
|
|
||||||
|
|
||||||
steps:
|
steps:
|
||||||
- uses: actions/checkout@v3
|
- uses: actions/checkout@v3
|
||||||
# NB: In order to create a universal2 application, the version of python3 in /usr/bin has to be used
|
# NB: In order to create a universal2 application, the version of python3 in /usr/bin has to be used
|
||||||
- name: Install Requirements
|
- name: Install Requirements
|
||||||
run: |
|
run: |
|
||||||
brew install coreutils
|
brew install coreutils
|
||||||
/usr/bin/python3 -m pip install -U --user pip Pyinstaller -r requirements.txt
|
/usr/bin/python3 -m pip install -U --user pip Pyinstaller==5.8 -r requirements.txt
|
||||||
|
|
||||||
- name: Prepare
|
- name: Prepare
|
||||||
run: |
|
run: |
|
||||||
/usr/bin/python3 devscripts/update-version.py ${{ needs.prepare.outputs.version_suffix }}
|
/usr/bin/python3 devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
|
||||||
/usr/bin/python3 devscripts/make_lazy_extractors.py
|
/usr/bin/python3 devscripts/make_lazy_extractors.py
|
||||||
- name: Build
|
- name: Build
|
||||||
run: |
|
run: |
|
||||||
/usr/bin/python3 pyinst.py --target-architecture universal2 --onedir
|
/usr/bin/python3 pyinst.py --target-architecture universal2 --onedir
|
||||||
(cd ./dist/yt-dlp_macos && zip -r ../yt-dlp_macos.zip .)
|
(cd ./dist/yt-dlp_macos && zip -r ../yt-dlp_macos.zip .)
|
||||||
/usr/bin/python3 pyinst.py --target-architecture universal2
|
/usr/bin/python3 pyinst.py --target-architecture universal2
|
||||||
|
|
||||||
- name: Upload artifacts
|
- name: Upload artifacts
|
||||||
uses: actions/upload-artifact@v3
|
uses: actions/upload-artifact@v3
|
||||||
with:
|
with:
|
||||||
path: |
|
path: |
|
||||||
dist/yt-dlp_macos
|
dist/yt-dlp_macos
|
||||||
dist/yt-dlp_macos.zip
|
dist/yt-dlp_macos.zip
|
||||||
|

-  build_macos_legacy:
+  macos_legacy:
+    if: inputs.macos_legacy
     runs-on: macos-latest
-    needs: prepare

     steps:
       - uses: actions/checkout@v3
       - name: Install Python
         # We need the official Python, because the GA ones only support newer macOS versions
         env:
           PYTHON_VERSION: 3.10.5
           MACOSX_DEPLOYMENT_TARGET: 10.9  # Used up by the Python build tools
         run: |
           # Hack to get the latest patch version. Uncomment if needed
           #brew install python@3.10
           #export PYTHON_VERSION=$( $(brew --prefix)/opt/python@3.10/bin/python3 --version | cut -d ' ' -f 2 )
           curl https://www.python.org/ftp/python/${PYTHON_VERSION}/python-${PYTHON_VERSION}-macos11.pkg -o "python.pkg"
           sudo installer -pkg python.pkg -target /
           python3 --version
       - name: Install Requirements
         run: |
           brew install coreutils
           python3 -m pip install -U --user pip Pyinstaller -r requirements.txt

       - name: Prepare
         run: |
-          python3 devscripts/update-version.py ${{ needs.prepare.outputs.version_suffix }}
+          python3 devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
           python3 devscripts/make_lazy_extractors.py
       - name: Build
         run: |
           python3 pyinst.py
           mv dist/yt-dlp_macos dist/yt-dlp_macos_legacy

       - name: Upload artifacts
         uses: actions/upload-artifact@v3
         with:
           path: |
             dist/yt-dlp_macos_legacy

-  build_windows:
+  windows:
+    if: inputs.windows
     runs-on: windows-latest
-    needs: prepare

     steps:
       - uses: actions/checkout@v3
       - uses: actions/setup-python@v4
         with:  # 3.8 is used for Win7 support
-          python-version: '3.8'
+          python-version: "3.8"
       - name: Install Requirements
         run: |  # Custom pyinstaller built with https://github.com/yt-dlp/pyinstaller-builds
           python -m pip install -U pip setuptools wheel py2exe
-          pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-5.3-py3-none-any.whl" -r requirements.txt
+          pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-5.8.0-py3-none-any.whl" -r requirements.txt

       - name: Prepare
         run: |
-          python devscripts/update-version.py ${{ needs.prepare.outputs.version_suffix }}
+          python devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
           python devscripts/make_lazy_extractors.py
       - name: Build
         run: |
           python setup.py py2exe
           Move-Item ./dist/yt-dlp.exe ./dist/yt-dlp_min.exe
           python pyinst.py
           python pyinst.py --onedir
           Compress-Archive -Path ./dist/yt-dlp/* -DestinationPath ./dist/yt-dlp_win.zip

       - name: Upload artifacts
         uses: actions/upload-artifact@v3
         with:
           path: |
             dist/yt-dlp.exe
             dist/yt-dlp_min.exe
             dist/yt-dlp_win.zip

-  build_windows32:
+  windows32:
+    if: inputs.windows32
     runs-on: windows-latest
-    needs: prepare

     steps:
       - uses: actions/checkout@v3
       - uses: actions/setup-python@v4
         with:  # 3.7 is used for Vista support. See https://github.com/yt-dlp/yt-dlp/issues/390
-          python-version: '3.7'
+          python-version: "3.7"
-          architecture: 'x86'
+          architecture: "x86"
       - name: Install Requirements
         run: |
           python -m pip install -U pip setuptools wheel
-          pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-5.3-py3-none-any.whl" -r requirements.txt
+          pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-5.8.0-py3-none-any.whl" -r requirements.txt

       - name: Prepare
         run: |
-          python devscripts/update-version.py ${{ needs.prepare.outputs.version_suffix }}
+          python devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
           python devscripts/make_lazy_extractors.py
       - name: Build
         run: |
           python pyinst.py

       - name: Upload artifacts
         uses: actions/upload-artifact@v3
         with:
           path: |
             dist/yt-dlp_x86.exe

-  publish_release:
-    permissions:
-      contents: write  # for action-gh-release
-    runs-on: ubuntu-latest
-    needs: [prepare, build_unix, build_linux_arm, build_windows, build_windows32, build_macos, build_macos_legacy]
-
-    steps:
-      - uses: actions/checkout@v3
-      - uses: actions/download-artifact@v3
-
-      - name: Get Changelog
-        run: |
-          changelog=$(grep -oPz '(?s)(?<=### ${{ needs.prepare.outputs.ytdlp_version }}\n{2}).+?(?=\n{2,3}###)' Changelog.md) || true
-          echo "changelog<<EOF" >> $GITHUB_ENV
-          echo "$changelog" >> $GITHUB_ENV
-          echo "EOF" >> $GITHUB_ENV
-      - name: Make Update spec
-        run: |
-          echo "# This file is used for regulating self-update" >> _update_spec
-          echo "lock 2022.07.18 .+ Python 3.6" >> _update_spec
-      - name: Make SHA2-SUMS files
-        run: |
-          sha256sum artifact/yt-dlp | awk '{print $1 "  yt-dlp"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp.tar.gz | awk '{print $1 "  yt-dlp.tar.gz"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp.exe | awk '{print $1 "  yt-dlp.exe"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp_win.zip | awk '{print $1 "  yt-dlp_win.zip"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp_min.exe | awk '{print $1 "  yt-dlp_min.exe"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp_x86.exe | awk '{print $1 "  yt-dlp_x86.exe"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp_macos | awk '{print $1 "  yt-dlp_macos"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp_macos.zip | awk '{print $1 "  yt-dlp_macos.zip"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp_macos_legacy | awk '{print $1 "  yt-dlp_macos_legacy"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp_linux_armv7l | awk '{print $1 "  yt-dlp_linux_armv7l"}' >> SHA2-256SUMS
-          sha256sum artifact/yt-dlp_linux_aarch64 | awk '{print $1 "  yt-dlp_linux_aarch64"}' >> SHA2-256SUMS
-          sha256sum artifact/dist/yt-dlp_linux | awk '{print $1 "  yt-dlp_linux"}' >> SHA2-256SUMS
-          sha256sum artifact/dist/yt-dlp_linux.zip | awk '{print $1 "  yt-dlp_linux.zip"}' >> SHA2-256SUMS
-          sha512sum artifact/yt-dlp | awk '{print $1 "  yt-dlp"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp.tar.gz | awk '{print $1 "  yt-dlp.tar.gz"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp.exe | awk '{print $1 "  yt-dlp.exe"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp_win.zip | awk '{print $1 "  yt-dlp_win.zip"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp_min.exe | awk '{print $1 "  yt-dlp_min.exe"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp_x86.exe | awk '{print $1 "  yt-dlp_x86.exe"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp_macos | awk '{print $1 "  yt-dlp_macos"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp_macos.zip | awk '{print $1 "  yt-dlp_macos.zip"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp_macos_legacy | awk '{print $1 "  yt-dlp_macos_legacy"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp_linux_armv7l | awk '{print $1 "  yt-dlp_linux_armv7l"}' >> SHA2-512SUMS
-          sha512sum artifact/yt-dlp_linux_aarch64 | awk '{print $1 "  yt-dlp_linux_aarch64"}' >> SHA2-512SUMS
-          sha512sum artifact/dist/yt-dlp_linux | awk '{print $1 "  yt-dlp_linux"}' >> SHA2-512SUMS
-          sha512sum artifact/dist/yt-dlp_linux.zip | awk '{print $1 "  yt-dlp_linux.zip"}' >> SHA2-512SUMS
-
-      - name: Publish Release
-        uses: yt-dlp/action-gh-release@v1
-        with:
-          tag_name: ${{ needs.prepare.outputs.ytdlp_version }}
-          name: yt-dlp ${{ needs.prepare.outputs.ytdlp_version }}
-          target_commitish: ${{ needs.prepare.outputs.head_sha }}
-          body: |
-            #### [A description of the various files]((https://github.com/yt-dlp/yt-dlp#release-files)) are in the README
-
-            ---
-            <details open><summary><h3>Changelog</summary>
-            <p>
-
-            ${{ env.changelog }}
-
-            </p>
-            </details>
-          files: |
-            SHA2-256SUMS
-            SHA2-512SUMS
-            artifact/yt-dlp
-            artifact/yt-dlp.tar.gz
-            artifact/yt-dlp.exe
-            artifact/yt-dlp_win.zip
-            artifact/yt-dlp_min.exe
-            artifact/yt-dlp_x86.exe
-            artifact/yt-dlp_macos
-            artifact/yt-dlp_macos.zip
-            artifact/yt-dlp_macos_legacy
-            artifact/yt-dlp_linux_armv7l
-            artifact/yt-dlp_linux_aarch64
-            artifact/dist/yt-dlp_linux
-            artifact/dist/yt-dlp_linux.zip
-            _update_spec
+  meta_files:
+    if: inputs.meta_files && always()
+    needs:
+      - unix
+      - linux_arm
+      - macos
+      - macos_legacy
+      - windows
+      - windows32
+    runs-on: ubuntu-latest
+
+    steps:
+      - uses: actions/download-artifact@v3
+
+      - name: Make SHA2-SUMS files
+        run: |
+          cd ./artifact/
+          sha256sum * > ../SHA2-256SUMS
+          sha512sum * > ../SHA2-512SUMS
+
+      - name: Make Update spec
+        run: |
+          cat >> _update_spec << EOF
+          # This file is used for regulating self-update
+          lock 2022.08.18.36 .+ Python 3.6
+          EOF
+
+      - name: Sign checksum files
+        env:
+          GPG_SIGNING_KEY: ${{ secrets.GPG_SIGNING_KEY }}
+        if: env.GPG_SIGNING_KEY != ''
+        run: |
+          gpg --batch --import <<< "${{ secrets.GPG_SIGNING_KEY }}"
+          for signfile in ./SHA*SUMS; do
+            gpg --batch --detach-sign "$signfile"
+          done
+
+      - name: Upload artifacts
+        uses: actions/upload-artifact@v3
+        with:
+          path: |
+            SHA*SUMS*
+            _update_spec
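Two notes on the reworked `meta_files` job above. The `lock 2022.08.18.36 .+ Python 3.6` line in `_update_spec` is presumably a self-update guard: updaters whose version/variant string matches the pattern (here, builds still running Python 3.6) are not offered releases newer than the listed version. And since the job now publishes GNU-style sum files covering every artifact, a downstream consumer can verify a download with stock coreutils — a minimal sketch, assuming the downloaded release files and SHA2-256SUMS sit in the current directory:

    # check only the files that are actually present
    sha256sum --ignore-missing -c SHA2-256SUMS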
81 .github/workflows/publish.yml vendored Normal file
@@ -0,0 +1,81 @@
name: Publish
on:
  workflow_call:
    inputs:
      nightly:
        default: false
        required: false
        type: boolean
      version:
        required: true
        type: string
      target_commitish:
        required: true
        type: string
    secrets:
      ARCHIVE_REPO_TOKEN:
        required: false

permissions:
  contents: write

jobs:
  publish:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - uses: actions/download-artifact@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"

      - name: Generate release notes
        run: |
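          # Editor's note: the heredoc below is unquoted, so the
          # $(python ./devscripts/make_changelog.py -vv) substitution runs here,
          # embedding the generated changelog directly into the notes file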
          cat >> ./RELEASE_NOTES << EOF
          #### A description of the various files are in the [README](https://github.com/yt-dlp/yt-dlp#release-files)
          ---
          <details><summary><h3>Changelog</h3></summary>
          $(python ./devscripts/make_changelog.py -vv)
          </details>
          EOF
          echo "**This is an automated nightly pre-release build**" >> ./PRERELEASE_NOTES
          cat ./RELEASE_NOTES >> ./PRERELEASE_NOTES
          echo "Generated from: https://github.com/${{ github.repository }}/commit/${{ inputs.target_commitish }}" >> ./ARCHIVE_NOTES
          cat ./RELEASE_NOTES >> ./ARCHIVE_NOTES

      - name: Archive nightly release
        env:
          GH_TOKEN: ${{ secrets.ARCHIVE_REPO_TOKEN }}
          GH_REPO: ${{ vars.ARCHIVE_REPO }}
        if: |
          inputs.nightly && env.GH_TOKEN != '' && env.GH_REPO != ''
        run: |
          gh release create \
            --notes-file ARCHIVE_NOTES \
            --title "yt-dlp nightly ${{ inputs.version }}" \
            ${{ inputs.version }} \
            artifact/*

      - name: Prune old nightly release
        if: inputs.nightly && !vars.ARCHIVE_REPO
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          gh release delete --yes --cleanup-tag "nightly" || true
          git tag --delete "nightly" || true
          sleep 5  # Enough time to cover deletion race condition

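      # Editor's note on how the ${{ inputs.nightly && ... || ... }} ternaries below expand (illustrative):
      #   nightly=true  -> --notes-file PRERELEASE_NOTES ... --prerelease "nightly"
      #   nightly=false -> --notes-file RELEASE_NOTES    ... <version tag>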
      - name: Publish release${{ inputs.nightly && ' (nightly)' || '' }}
        env:
          GH_TOKEN: ${{ github.token }}
        if: (inputs.nightly && !vars.ARCHIVE_REPO) || !inputs.nightly
        run: |
          gh release create \
            --notes-file ${{ inputs.nightly && 'PRE' || '' }}RELEASE_NOTES \
            --target ${{ inputs.target_commitish }} \
            --title "yt-dlp ${{ inputs.nightly && 'nightly ' || '' }}${{ inputs.version }}" \
            ${{ inputs.nightly && '--prerelease "nightly"' || inputs.version }} \
            artifact/*
51 .github/workflows/release-nightly.yml vendored Normal file
@@ -0,0 +1,51 @@
name: Release (nightly)
on:
  push:
    branches:
      - master
    paths:
      - "yt_dlp/**.py"
      - "!yt_dlp/version.py"
concurrency:
  group: release-nightly
  cancel-in-progress: true
permissions:
  contents: read

jobs:
  prepare:
    if: vars.BUILD_NIGHTLY != ''
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.get_version.outputs.version }}

    steps:
      - uses: actions/checkout@v3
      - name: Get version
        id: get_version
        run: |
          python devscripts/update-version.py "$(date -u +"%H%M%S")" | grep -Po "version=\d+(\.\d+){3}" >> "$GITHUB_OUTPUT"
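          # Editor's note (illustrative): a push at 12:34:56 UTC would yield a four-part
          # version such as 2023.03.04.123456; grep keeps only the "version=..." line
          # for $GITHUB_OUTPUT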

  build:
    needs: prepare
    uses: ./.github/workflows/build.yml
    with:
      version: ${{ needs.prepare.outputs.version }}
      channel: nightly
    permissions:
      contents: read
      packages: write  # For package cache
    secrets:
      GPG_SIGNING_KEY: ${{ secrets.GPG_SIGNING_KEY }}

  publish:
    needs: [prepare, build]
    uses: ./.github/workflows/publish.yml
    secrets:
      ARCHIVE_REPO_TOKEN: ${{ secrets.ARCHIVE_REPO_TOKEN }}
    permissions:
      contents: write
    with:
      nightly: true
      version: ${{ needs.prepare.outputs.version }}
      target_commitish: ${{ github.sha }}
129 .github/workflows/release.yml vendored Normal file
@@ -0,0 +1,129 @@
name: Release
on: workflow_dispatch
permissions:
  contents: read

jobs:
  prepare:
    permissions:
      contents: write
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.update_version.outputs.version }}
      head_sha: ${{ steps.push_release.outputs.head_sha }}

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"

      - name: Update version
        id: update_version
        run: |
          python devscripts/update-version.py ${{ vars.PUSH_VERSION_COMMIT == '' && '"$(date -u +"%H%M%S")"' || '' }} | \
            grep -Po "version=\d+\.\d+\.\d+(\.\d+)?" >> "$GITHUB_OUTPUT"

      - name: Update documentation
        run: |
          make doc
          sed '/### /Q' Changelog.md >> ./CHANGELOG
          echo '### ${{ steps.update_version.outputs.version }}' >> ./CHANGELOG
          python ./devscripts/make_changelog.py -vv -c >> ./CHANGELOG
          echo >> ./CHANGELOG
          grep -Poz '(?s)### \d+\.\d+\.\d+.+' 'Changelog.md' | head -n -1 >> ./CHANGELOG
          cat ./CHANGELOG > Changelog.md
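          # Editor's note on the splice above: sed '/### /Q' keeps the header up to the
          # first "### " version heading, make_changelog.py emits the new version's section,
          # and grep/head re-append the earlier "### x.y.z" sections, keeping Changelog.md
          # newest-first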

      - name: Push to release
        id: push_release
        run: |
          git config --global user.name github-actions
          git config --global user.email github-actions@example.com
          git add -u
          git commit -m "Release ${{ steps.update_version.outputs.version }}" \
            -m "Created by: ${{ github.event.sender.login }}" -m ":ci skip all :ci run dl"
          git push origin --force ${{ github.event.ref }}:release
          echo "head_sha=$(git rev-parse HEAD)" >> "$GITHUB_OUTPUT"

      - name: Update master
        if: vars.PUSH_VERSION_COMMIT != ''
        run: git push origin ${{ github.event.ref }}

  publish_pypi_homebrew:
    needs: prepare
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"

      - name: Install Requirements
        run: |
          sudo apt-get -y install pandoc man
          python -m pip install -U pip setuptools wheel twine
          python -m pip install -U -r requirements.txt

      - name: Prepare
        run: |
          python devscripts/update-version.py ${{ needs.prepare.outputs.version }}
          python devscripts/make_lazy_extractors.py

      - name: Build and publish on PyPI
        env:
          TWINE_USERNAME: __token__
          TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
        if: env.TWINE_PASSWORD != ''
        run: |
          rm -rf dist/*
          make pypi-files
          python devscripts/set-variant.py pip -M "You installed yt-dlp with pip or using the wheel from PyPi; Use that to update"
          python setup.py sdist bdist_wheel
          twine upload dist/*

      - name: Checkout Homebrew repository
        env:
          BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
          PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
        if: env.BREW_TOKEN != '' && env.PYPI_TOKEN != ''
        uses: actions/checkout@v3
        with:
          repository: yt-dlp/homebrew-taps
          path: taps
          ssh-key: ${{ secrets.BREW_TOKEN }}

      - name: Update Homebrew Formulae
        env:
          BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
          PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
        if: env.BREW_TOKEN != '' && env.PYPI_TOKEN != ''
        run: |
          python devscripts/update-formulae.py taps/Formula/yt-dlp.rb "${{ needs.prepare.outputs.version }}"
          git -C taps/ config user.name github-actions
          git -C taps/ config user.email github-actions@example.com
          git -C taps/ commit -am 'yt-dlp: ${{ needs.prepare.outputs.version }}'
          git -C taps/ push

  build:
    needs: prepare
    uses: ./.github/workflows/build.yml
    with:
      version: ${{ needs.prepare.outputs.version }}
    permissions:
      contents: read
      packages: write  # For package cache
    secrets:
      GPG_SIGNING_KEY: ${{ secrets.GPG_SIGNING_KEY }}

  publish:
    needs: [prepare, build]
    uses: ./.github/workflows/publish.yml
    permissions:
      contents: write
    with:
      version: ${{ needs.prepare.outputs.version }}
      target_commitish: ${{ needs.prepare.outputs.head_sha }}
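A release is cut by dispatching this workflow on master (as the note added to Changelog.md below also says). A minimal sketch with the GitHub CLI — the workflow can equally be dispatched from the Actions web UI, so this invocation is illustrative:

    gh workflow run release.yml --ref master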
@@ -127,7 +127,7 @@ ### Are you willing to share account details if needed?

 ### Is the website primarily used for piracy?

-We follow [youtube-dl's policy](https://github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free) to not support services that is primarily used for infringing copyright. Additionally, it has been decided to not to support porn sites that specialize in deep fake. We also cannot support any service that serves only [DRM protected content](https://en.wikipedia.org/wiki/Digital_rights_management).
+We follow [youtube-dl's policy](https://github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free) to not support services that is primarily used for infringing copyright. Additionally, it has been decided to not to support porn sites that specialize in fakes. We also cannot support any service that serves only [DRM protected content](https://en.wikipedia.org/wiki/Digital_rights_management).
30 CONTRIBUTORS
@@ -4,6 +4,7 @@ coletdjnz/colethedj (collaborator)
 Ashish0804 (collaborator)
 nao20010128nao/Lesmiscore (collaborator)
 bashonly (collaborator)
+Grub4K (collaborator)
 h-h-h-h
 pauldubois98
 nixxo
@@ -319,7 +320,6 @@ columndeeply
 DoubleCouponDay
 Fabi019
 GautamMKGarg
-Grub4K
 itachi-19
 jeroenj
 josanabr
@@ -381,3 +381,31 @@ gschizas
 JC-Chung
 mzhou
 OndrejBakan
+ab4cbef
+aionescu
+amra
+ByteDream
+carusocr
+chexxor
+felixonmars
+FrankZ85
+FriedrichRehren
+gregsadetsky
+LeoniePhiline
+LowSuggestion912
+Matumo
+OIRNOIR
+OMEGARAZER
+oxamun
+pmitchell86
+qbnu
+qulaz
+rebane2001
+road-master
+rohieb
+sdht0
+seproDev
+Hill-98
+LXYan2333
+mushbite
+venkata-krishnas
203 Changelog.md
@@ -1,19 +1,202 @@
 # Changelog

 <!--
-# Instuctions for creating release
-
-* Run `make doc`
-* Update Changelog.md and CONTRIBUTORS
-* Change "Based on ytdl" version in Readme.md if needed
-* Commit as `Release <version>` and push to master
-* Dispatch the workflow https://github.com/yt-dlp/yt-dlp/actions/workflows/build.yml on master
+# To create a release, dispatch the https://github.com/yt-dlp/yt-dlp/actions/workflows/release.yml workflow on master
 -->

+### 2023.03.04
+
+#### Extractor changes
+- bilibili
+    - [Fix for downloading wrong subtitles](https://github.com/yt-dlp/yt-dlp/commit/8a83baaf218ab89e6e7faa76b7c7be3a2ec19e3a) ([#6358](https://github.com/yt-dlp/yt-dlp/issues/6358)) by [LXYan2333](https://github.com/LXYan2333)
+- ESPNcricinfo
+    - [Handle new URL pattern](https://github.com/yt-dlp/yt-dlp/commit/640c934823fc2d1ec77ec932566078014058635f) ([#6321](https://github.com/yt-dlp/yt-dlp/issues/6321)) by [venkata-krishnas](https://github.com/venkata-krishnas)
+- lefigaro
+    - [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/eb8fd6d044e8926532772b72be0645c6b8ecb3aa) ([#6309](https://github.com/yt-dlp/yt-dlp/issues/6309)) by [elyse0](https://github.com/elyse0)
+- lumni
+    - [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/1f8489cccbdc6e96027ef527b88717458f0900e8) ([#6302](https://github.com/yt-dlp/yt-dlp/issues/6302)) by [carusocr](https://github.com/carusocr)
+- Prankcast
+    - [Fix tags](https://github.com/yt-dlp/yt-dlp/commit/ed4cc4ea793314c50ae3f82e98248c1de1c25694) ([#6316](https://github.com/yt-dlp/yt-dlp/issues/6316)) by [columndeeply](https://github.com/columndeeply)
+- rutube
+    - [Extract chapters from description](https://github.com/yt-dlp/yt-dlp/commit/22ccd5420b3eb0782776071f12cccd1fedaa1fd0) ([#6345](https://github.com/yt-dlp/yt-dlp/issues/6345)) by [mushbite](https://github.com/mushbite)
+- SportDeutschland
+    - [Rewrite extractor](https://github.com/yt-dlp/yt-dlp/commit/45db357289b4e1eec09093c8bc5446520378f426) by [pukkandan](https://github.com/pukkandan)
+- telecaribe
+    - [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/b40471282286bd2b09c485bf79afd271d229272c) ([#6311](https://github.com/yt-dlp/yt-dlp/issues/6311)) by [elyse0](https://github.com/elyse0)
+- tubetugraz
+    - [Support `--twofactor` (#6424)](https://github.com/yt-dlp/yt-dlp/commit/f44cb4e77bb9be8be291d02ab6f79dc0b4c0d4a1) ([#6427](https://github.com/yt-dlp/yt-dlp/issues/6427)) by [Ferdi265](https://github.com/Ferdi265)
+- tunein
+    - [Fix extractors](https://github.com/yt-dlp/yt-dlp/commit/46580ced56c90b559885aded6aa8f46f20a9cdce) ([#6310](https://github.com/yt-dlp/yt-dlp/issues/6310)) by [elyse0](https://github.com/elyse0)
+- twitch
+    - [Update for GraphQL API changes](https://github.com/yt-dlp/yt-dlp/commit/4a6272c6d1bff89969b67cd22b26ebe6d7e72279) ([#6318](https://github.com/yt-dlp/yt-dlp/issues/6318)) by [elyse0](https://github.com/elyse0)
+- twitter
+    - [Fix retweet extraction](https://github.com/yt-dlp/yt-dlp/commit/cf605226521e99c89fc8dff26a319025810e63a0) ([#6422](https://github.com/yt-dlp/yt-dlp/issues/6422)) by [selfisekai](https://github.com/selfisekai)
+- xvideos
+    - quickies: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/283a0b5bc511f3b350eead4488158f50c20ec526) ([#6414](https://github.com/yt-dlp/yt-dlp/issues/6414)) by [Yakabuff](https://github.com/Yakabuff)
+
+#### Misc. changes
+- build
+    - [Fix publishing to PyPI and homebrew](https://github.com/yt-dlp/yt-dlp/commit/55676fe498345a389a2539d8baaba958d6d61c3e) by [bashonly](https://github.com/bashonly)
+    - [Only archive if `vars.ARCHIVE_REPO` is set](https://github.com/yt-dlp/yt-dlp/commit/08ff6d59f97b5f5f0128f6bf6fbef56fd836cc52) by [Grub4K](https://github.com/Grub4K)
+- cleanup
+    - Miscellaneous: [392389b](https://github.com/yt-dlp/yt-dlp/commit/392389b7df7b818f794b231f14dc396d4875fbad) by [pukkandan](https://github.com/pukkandan)
+- devscripts
+    - `make_changelog`: [Stop at `Release ...` commit](https://github.com/yt-dlp/yt-dlp/commit/7accdd9845fe7ce9d0aa5a9d16faaa489c1294eb) by [pukkandan](https://github.com/pukkandan)
+
+### 2023.03.03
+
+#### Important changes
+- **A new release type has been added!**
+    * [`nightly`](https://github.com/yt-dlp/yt-dlp/releases/tag/nightly) builds will be made after each push, containing the latest fixes (but also possibly bugs).
+    * When using `--update`/`-U`, a release binary will only update to its current channel (either `stable` or `nightly`).
+    * The `--update-to` option has been added allowing the user more control over program upgrades (or downgrades).
+    * `--update-to` can change the release channel (`stable`, `nightly`) and also upgrade or downgrade to specific tags.
+    * **Usage**: `--update-to CHANNEL`, `--update-to TAG`, `--update-to CHANNEL@TAG`
+- **YouTube throttling fixes!**
+
+#### Core changes
+- [Add option `--break-match-filters`](https://github.com/yt-dlp/yt-dlp/commit/fe2ce85aff0aa03735fc0152bb8cb9c3d4ef0753) by [pukkandan](https://github.com/pukkandan)
+- [Fix `--break-on-existing` with `--lazy-playlist`](https://github.com/yt-dlp/yt-dlp/commit/d21056f4cf0a1623daa107f9181074f5725ac436) by [pukkandan](https://github.com/pukkandan)
+- dependencies
+    - [Simplify `Cryptodome`](https://github.com/yt-dlp/yt-dlp/commit/65f6e807804d2af5e00f2aecd72bfc43af19324a) by [pukkandan](https://github.com/pukkandan)
+- jsinterp
+    - [Handle `Date` at epoch 0](https://github.com/yt-dlp/yt-dlp/commit/9acf1ee25f7ad3920ede574a9de95b8c18626af4) by [pukkandan](https://github.com/pukkandan)
+- plugins
+    - [Don't look in `.egg` directories](https://github.com/yt-dlp/yt-dlp/commit/b059188383eee4fa336ef728dda3ff4bb7335625) by [pukkandan](https://github.com/pukkandan)
+- update
+    - [Add option `--update-to`, including to nightly](https://github.com/yt-dlp/yt-dlp/commit/77df20f14cc9ed41dfe3a1fe2d77fd27f5365a94) ([#6220](https://github.com/yt-dlp/yt-dlp/issues/6220)) by [bashonly](https://github.com/bashonly), [Grub4K](https://github.com/Grub4K), [pukkandan](https://github.com/pukkandan)
+- utils
+    - `LenientJSONDecoder`: [Parse unclosed objects](https://github.com/yt-dlp/yt-dlp/commit/cc09083636ce21e58ff74f45eac2dbda507462b0) by [pukkandan](https://github.com/pukkandan)
+    - `Popen`: [Shim undocumented `text_mode` property](https://github.com/yt-dlp/yt-dlp/commit/da8e2912b165005f76779a115a071cd6132ceedf) by [Grub4K](https://github.com/Grub4K)
+
+#### Extractor changes
+- [Fix DRM detection in m3u8](https://github.com/yt-dlp/yt-dlp/commit/43a3eaf96393b712d60cbcf5c6cb1e90ed7f42f5) by [pukkandan](https://github.com/pukkandan)
+- generic
+    - [Detect manifest links via extension](https://github.com/yt-dlp/yt-dlp/commit/b38cae49e6f4849c8ee2a774bdc3c1c647ae5f0e) by [bashonly](https://github.com/bashonly)
+    - [Handle basic-auth when checking redirects](https://github.com/yt-dlp/yt-dlp/commit/8e9fe43cd393e69fa49b3d842aa3180c1d105b8f) by [pukkandan](https://github.com/pukkandan)
+- GoogleDrive
+    - [Fix some audio](https://github.com/yt-dlp/yt-dlp/commit/4d248e29d20d983ededab0b03d4fe69dff9eb4ed) by [pukkandan](https://github.com/pukkandan)
+- iprima
+    - [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/9fddc12ab022a31754e0eaa358fc4e1dfa974587) ([#6291](https://github.com/yt-dlp/yt-dlp/issues/6291)) by [std-move](https://github.com/std-move)
+- mediastream
+    - [Improve WinSports support](https://github.com/yt-dlp/yt-dlp/commit/2d5a8c5db2bd4ff1c2e45e00cd890a10f8ffca9e) ([#6401](https://github.com/yt-dlp/yt-dlp/issues/6401)) by [bashonly](https://github.com/bashonly)
+- ntvru
+    - [Extract HLS and DASH formats](https://github.com/yt-dlp/yt-dlp/commit/77d6d136468d0c23c8e79bc937898747804f585a) ([#6403](https://github.com/yt-dlp/yt-dlp/issues/6403)) by [bashonly](https://github.com/bashonly)
+- tencent
+    - [Add more formats and info](https://github.com/yt-dlp/yt-dlp/commit/18d295c9e0f95adc179eef345b7af64d6372db78) ([#5950](https://github.com/yt-dlp/yt-dlp/issues/5950)) by [Hill-98](https://github.com/Hill-98)
+- yle_areena
+    - [Extract non-Kaltura videos](https://github.com/yt-dlp/yt-dlp/commit/40d77d89027cd0e0ce31d22aec81db3e1d433900) ([#6402](https://github.com/yt-dlp/yt-dlp/issues/6402)) by [bashonly](https://github.com/bashonly)
+- youtube
+    - [Construct dash formats with `range` query](https://github.com/yt-dlp/yt-dlp/commit/5038f6d713303e0967d002216e7a88652401c22a) by [pukkandan](https://github.com/pukkandan) (With fixes in [f34804b](https://github.com/yt-dlp/yt-dlp/commit/f34804b2f920f62a6e893a14a9e2a2144b14dd23) by [bashonly](https://github.com/bashonly), [coletdjnz](https://github.com/coletdjnz))
+    - [Detect and break on looping comments](https://github.com/yt-dlp/yt-dlp/commit/7f51861b1820c37b157a239b1fe30628d907c034) ([#6301](https://github.com/yt-dlp/yt-dlp/issues/6301)) by [coletdjnz](https://github.com/coletdjnz)
+    - [Extract channel `view_count` when `/about` tab is passed](https://github.com/yt-dlp/yt-dlp/commit/31e183557fcd1b937582f9429f29207c1261f501) by [pukkandan](https://github.com/pukkandan)
+
+#### Misc. changes
+- build
+    - [Add `cffi` as a dependency for `yt_dlp_linux`](https://github.com/yt-dlp/yt-dlp/commit/776d1c3f0c9b00399896dd2e40e78e9a43218109) by [bashonly](https://github.com/bashonly)
+    - [Automated builds and nightly releases](https://github.com/yt-dlp/yt-dlp/commit/29cb20bd563c02671b31dd840139e93dd37150a1) ([#6220](https://github.com/yt-dlp/yt-dlp/issues/6220)) by [bashonly](https://github.com/bashonly), [Grub4K](https://github.com/Grub4K) (With fixes in [bfc861a](https://github.com/yt-dlp/yt-dlp/commit/bfc861a91ee65c9b0ac169754f512e052c6827cf) by [pukkandan](https://github.com/pukkandan))
+    - [Sign SHA files and release public key](https://github.com/yt-dlp/yt-dlp/commit/12647e03d417feaa9ea6a458bea5ebd747494a53) by [Grub4K](https://github.com/Grub4K)
+- cleanup
+    - [Fix `Changelog`](https://github.com/yt-dlp/yt-dlp/commit/17ca19ab60a6a13eb8a629c51442b5248b0d8394) by [pukkandan](https://github.com/pukkandan)
+    - jsinterp: [Give functions names to help debugging](https://github.com/yt-dlp/yt-dlp/commit/b2e0343ba0fc5d8702e90f6ba2b71358e2677e0b) by [pukkandan](https://github.com/pukkandan)
+    - Miscellaneous: [4815bbf](https://github.com/yt-dlp/yt-dlp/commit/4815bbfc41cf641e4a0650289dbff968cb3bde76), [5b28cef](https://github.com/yt-dlp/yt-dlp/commit/5b28cef72db3b531680d89c121631c73ae05354f) by [pukkandan](https://github.com/pukkandan)
+- devscripts
+    - [Script to generate changelog](https://github.com/yt-dlp/yt-dlp/commit/d400e261cf029a3f20d364113b14de973be75404) ([#6220](https://github.com/yt-dlp/yt-dlp/issues/6220)) by [Grub4K](https://github.com/Grub4K) (With fixes in [9344964](https://github.com/yt-dlp/yt-dlp/commit/93449642815a6973a4b09b289982ca7e1f961b5f))
+
+### 2023.02.17
+
+* Merge youtube-dl: Upto [commit/2dd6c6e](https://github.com/ytdl-org/youtube-dl/commit/2dd6c6e)
+* Fix `--concat-playlist`
+* Imply `--no-progress` when `--print`
+* Improve default subtitle language selection by [sdht0](https://github.com/sdht0)
+* Make `title` completely non-fatal
+* Sanitize formats before sorting by [pukkandan](https://github.com/pukkandan)
+* Support module level `__bool__` and `property`
+* [dependencies] Standardize `Cryptodome` imports
+* [hls] Allow extractors to provide AES key by [Grub4K](https://github.com/Grub4K), [bashonly](https://github.com/bashonly)
+* [ExtractAudio] Handle outtmpl without ext by [carusocr](https://github.com/carusocr)
+* [extractor/common] Fix `_search_nuxt_data` by [LowSuggestion912](https://github.com/LowSuggestion912)
+* [extractor/generic] Avoid catastrophic backtracking in KVS regex by [bashonly](https://github.com/bashonly)
+* [jsinterp] Support `if` statements
+* [plugins] Fix zip search paths
+* [utils] `traverse_obj`: Various improvements by [Grub4K](https://github.com/Grub4K)
+* [utils] `traverse_obj`: Fix more bugs
+* [utils] `traverse_obj`: Fix several behavioral problems by [Grub4K](https://github.com/Grub4K)
+* [utils] Don't use Content-length with encoding by [felixonmars](https://github.com/felixonmars)
+* [utils] Fix `time_seconds` to use the provided TZ by [Grub4K](https://github.com/Grub4K), [Lesmiscore](https://github.com/Lesmiscore)
+* [utils] Fix race condition in `make_dir` by [aionescu](https://github.com/aionescu)
+* [utils] Use local kernel32 for file locking on Windows by [Grub4K](https://github.com/Grub4K)
+* [compat_utils] Improve `passthrough_module`
+* [compat_utils] Simplify `EnhancedModule`
+* [build] Update pyinstaller
+* [pyinst] Fix for pyinstaller 5.8
+* [devscripts] Provide `pyinstaller` hooks
+* [devscripts/pyinstaller] Analyze sub-modules of `Cryptodome`
+* [cleanup] Misc fixes and cleanup
+* [extractor/anchorfm] Add episode extractor by [HobbyistDev](https://github.com/HobbyistDev), [bashonly](https://github.com/bashonly)
+* [extractor/boxcast] Add extractor by [HobbyistDev](https://github.com/HobbyistDev)
+* [extractor/ebay] Add extractor by [JChris246](https://github.com/JChris246)
+* [extractor/hypergryph] Add extractor by [HobbyistDev](https://github.com/HobbyistDev), [bashonly](https://github.com/bashonly)
+* [extractor/NZOnScreen] Add extractor by [gregsadetsky](https://github.com/gregsadetsky), [pukkandan](https://github.com/pukkandan)
+* [extractor/rozhlas] Add extractor RozhlasVltavaIE by [amra](https://github.com/amra)
+* [extractor/tempo] Add IVXPlayer extractor by [HobbyistDev](https://github.com/HobbyistDev)
+* [extractor/txxx] Add extractors by [chio0hai](https://github.com/chio0hai)
+* [extractor/vocaroo] Add extractor by [SuperSonicHub1](https://github.com/SuperSonicHub1), [qbnu](https://github.com/qbnu)
+* [extractor/wrestleuniverse] Add extractors by [Grub4K](https://github.com/Grub4K), [bashonly](https://github.com/bashonly)
+* [extractor/yappy] Add extractor by [HobbyistDev](https://github.com/HobbyistDev), [dirkf](https://github.com/dirkf)
+* [extractor/youtube] **Fix `uploader_id` extraction** by [bashonly](https://github.com/bashonly)
+* [extractor/youtube] Add hyperpipe instances by [Generator](https://github.com/Generator)
+* [extractor/youtube] Handle `consent.youtube`
+* [extractor/youtube] Support `/live/` URL
+* [extractor/youtube] Update invidious and piped instances by [rohieb](https://github.com/rohieb)
+* [extractor/91porn] Fix title and comment extraction by [pmitchell86](https://github.com/pmitchell86)
+* [extractor/AbemaTV] Cache user token whenever appropriate by [Lesmiscore](https://github.com/Lesmiscore)
+* [extractor/bfmtv] Support `rmc` prefix by [carusocr](https://github.com/carusocr)
+* [extractor/biliintl] Add intro and ending chapters by [HobbyistDev](https://github.com/HobbyistDev)
+* [extractor/clyp] Support `wav` by [qulaz](https://github.com/qulaz)
+* [extractor/crunchyroll] Add intro chapter by [ByteDream](https://github.com/ByteDream)
+* [extractor/crunchyroll] Better message for premium videos
+* [extractor/crunchyroll] Fix incorrect premium-only error by [Grub4K](https://github.com/Grub4K)
+* [extractor/DouyuTV] Use new API by [hatienl0i261299](https://github.com/hatienl0i261299)
+* [extractor/embedly] Embedded links may be for other extractors
+* [extractor/freesound] Workaround invalid URL in webpage by [rebane2001](https://github.com/rebane2001)
+* [extractor/GoPlay] Use new API by [jeroenj](https://github.com/jeroenj)
+* [extractor/Hidive] Fix subtitles and age-restriction by [chexxor](https://github.com/chexxor)
+* [extractor/huya] Support HD streams by [felixonmars](https://github.com/felixonmars)
+* [extractor/moviepilot] Fix extractor by [panatexxa](https://github.com/panatexxa)
+* [extractor/nbc] Fix `NBC` and `NBCStations` extractors by [bashonly](https://github.com/bashonly)
+* [extractor/nbc] Fix XML parsing by [bashonly](https://github.com/bashonly)
+* [extractor/nebula] Remove broken cookie support by [hheimbuerger](https://github.com/hheimbuerger)
+* [extractor/nfl] Add `NFLPlus` extractors by [bashonly](https://github.com/bashonly)
+* [extractor/niconico] Add support for like history by [Matumo](https://github.com/Matumo), [pukkandan](https://github.com/pukkandan)
+* [extractor/nitter] Update instance list by [OIRNOIR](https://github.com/OIRNOIR)
+* [extractor/npo] Fix extractor and add HD support by [seproDev](https://github.com/seproDev)
+* [extractor/odkmedia] Add `OnDemandChinaEpisodeIE` by [HobbyistDev](https://github.com/HobbyistDev), [pukkandan](https://github.com/pukkandan)
+* [extractor/pornez] Handle relative URLs in iframe by [JChris246](https://github.com/JChris246)
+* [extractor/radiko] Fix format sorting for Time Free by [road-master](https://github.com/road-master)
+* [extractor/rcs] Fix extractors by [nixxo](https://github.com/nixxo), [pukkandan](https://github.com/pukkandan)
+* [extractor/reddit] Support user posts by [OMEGARAZER](https://github.com/OMEGARAZER)
+* [extractor/rumble] Fix format sorting by [pukkandan](https://github.com/pukkandan)
+* [extractor/servus] Rewrite extractor by [Ashish0804](https://github.com/Ashish0804), [FrankZ85](https://github.com/FrankZ85), [StefanLobbenmeier](https://github.com/StefanLobbenmeier)
+* [extractor/slideslive] Fix slides and chapters/duration by [bashonly](https://github.com/bashonly)
+* [extractor/SportDeutschland] Fix extractor by [FriedrichRehren](https://github.com/FriedrichRehren)
+* [extractor/Stripchat] Fix extractor by [JChris246](https://github.com/JChris246), [bashonly](https://github.com/bashonly)
+* [extractor/tnaflix] Fix extractor by [bashonly](https://github.com/bashonly), [oxamun](https://github.com/oxamun)
+* [extractor/tvp] Support `stream.tvp.pl` by [selfisekai](https://github.com/selfisekai)
+* [extractor/twitter] Fix `--no-playlist` and add media `view_count` when using GraphQL by [Grub4K](https://github.com/Grub4K)
+* [extractor/twitter] Fix graphql extraction on some tweets by [selfisekai](https://github.com/selfisekai)
+* [extractor/vimeo] Fix `playerConfig` extraction by [LeoniePhiline](https://github.com/LeoniePhiline), [bashonly](https://github.com/bashonly)
+* [extractor/viu] Add `ViuOTTIndonesiaIE` extractor by [HobbyistDev](https://github.com/HobbyistDev)
+* [extractor/vk] Fix playlists for new API by [the-marenga](https://github.com/the-marenga)
+* [extractor/vlive] Replace with `VLiveWebArchiveIE` by [seproDev](https://github.com/seproDev)
+* [extractor/ximalaya] Update album `_VALID_URL` by [carusocr](https://github.com/carusocr)
+* [extractor/zdf] Use android API endpoint for UHD downloads by [seproDev](https://github.com/seproDev)
+* [extractor/drtv] Fix bug in [ab4cbef](https://github.com/yt-dlp/yt-dlp/commit/ab4cbef) by [bashonly](https://github.com/bashonly)
+
+
 ### 2023.01.06

-* Fix config locations by [Grub4k](https://github.com/Grub4k), [coletdjnz](https://github.com/coletdjnz), [pukkandan](https://github.com/pukkandan)
+* Fix config locations by [Grub4K](https://github.com/Grub4K), [coletdjnz](https://github.com/coletdjnz), [pukkandan](https://github.com/pukkandan)
 * [downloader/aria2c] Disable native progress
 * [utils] `mimetype2ext`: `weba` is not standard
 * [utils] `windows_enable_vt_mode`: Better error handling
@@ -40,7 +223,7 @@ ### 2023.01.02
 * Add `--compat-options 2021,2022`
     * This allows devs to change defaults and make other potentially breaking changes more easily. If you need everything to work exactly as-is, put Use `--compat 2022` in your config to guard against future compat changes.
 * [downloader/aria2c] Native progress for aria2c via RPC by [Lesmiscore](https://github.com/Lesmiscore), [pukkandan](https://github.com/pukkandan)
-* Merge youtube-dl: Upto [commit/195f22f](https://github.com/ytdl-org/youtube-dl/commit/195f22f6) by [Grub4k](https://github.com/Grub4k), [pukkandan](https://github.com/pukkandan)
+* Merge youtube-dl: Upto [commit/195f22f](https://github.com/ytdl-org/youtube-dl/commit/195f22f6) by [Grub4K](https://github.com/Grub4K), [pukkandan](https://github.com/pukkandan)
 * Add pre-processor stage `video`
 * Let `--parse/replace-in-metadata` run at any post-processing stage
 * Add `--enable-file-urls` by [coletdjnz](https://github.com/coletdjnz)
@@ -155,7 +338,7 @@ ### 2023.01.02
 * [extractor/udemy] Fix lectures that have no URL and detect DRM
 * [extractor/unsupported] Add more URLs
 * [extractor/urplay] Support for audio-only formats by [barsnick](https://github.com/barsnick)
-* [extractor/wistia] Improve extension detection by [Grub4k](https://github.com/Grub4k), [bashonly](https://github.com/bashonly), [pukkandan](https://github.com/pukkandan)
+* [extractor/wistia] Improve extension detection by [Grub4K](https://github.com/Grub4K), [bashonly](https://github.com/bashonly), [pukkandan](https://github.com/pukkandan)
 * [extractor/yle_areena] Support restricted videos by [docbender](https://github.com/docbender)
 * [extractor/youku] Fix extractor by [KurtBestor](https://github.com/KurtBestor)
 * [extractor/youporn] Fix metadata by [marieell](https://github.com/marieell)
@@ -8,6 +8,7 @@ # Collaborators
 ## [pukkandan](https://github.com/pukkandan)

 [](https://ko-fi.com/pukkandan)
+[](https://github.com/sponsors/pukkandan)

 * Owner of the fork

@@ -25,8 +26,9 @@ ## [shirt](https://github.com/shirt-dev)

 ## [coletdjnz](https://github.com/coletdjnz)

 [](https://github.com/sponsors/coletdjnz)

+* Improved plugin architecture
 * YouTube improvements including: age-gate bypass, private playlists, multiple-clients (to avoid throttling) and a lot of under-the-hood improvements
 * Added support for new websites YoutubeWebArchive, MainStreaming, PRX, nzherald, Mediaklikk, StarTV etc
 * Improved/fixed support for Patreon, panopto, gfycat, itv, pbs, SouthParkDE etc

@@ -54,6 +56,16 @@ ## [Lesmiscore](https://github.com/Lesmiscore) <sub><sup>(nao20010128nao)</sup><

 ## [bashonly](https://github.com/bashonly)

+* `--update-to`, automated release, nightly builds
 * `--cookies-from-browser` support for Firefox containers
 * Added support for new websites Genius, Kick, NBCStations, Triller, VideoKen etc
 * Improved/fixed support for Anvato, Brightcove, Instagram, ParamountPlus, Reddit, SlidesLive, TikTok, Twitter, Vimeo etc
+
+
+## [Grub4K](https://github.com/Grub4K)
+
+[](https://ko-fi.com/Grub4K) [](https://github.com/sponsors/Grub4K)
+
+* `--update-to`, automated release, nightly builds
+* Rework internals like `traverse_obj`, various core refactors and bugs fixes
+* Helped fix crunchyroll, Twitter, wrestleuniverse, wistia, slideslive etc
2 Makefile
@@ -74,7 +74,7 @@ offlinetest: codetest
 	$(PYTHON) -m pytest -k "not download"

 # XXX: This is hard to maintain
-CODE_FOLDERS = yt_dlp yt_dlp/downloader yt_dlp/extractor yt_dlp/postprocessor yt_dlp/compat
+CODE_FOLDERS = yt_dlp yt_dlp/downloader yt_dlp/extractor yt_dlp/postprocessor yt_dlp/compat yt_dlp/dependencies
 yt-dlp: yt_dlp/*.py yt_dlp/*/*.py
 	mkdir -p zip
 	for d in $(CODE_FOLDERS) ; do \
80
README.md
80
README.md
|
@ -76,7 +76,7 @@
|
||||||
|
|
||||||
# NEW FEATURES
|
# NEW FEATURES
|
||||||
|
|
||||||
* Merged with **youtube-dl v2021.12.17+ [commit/195f22f](https://github.com/ytdl-org/youtube-dl/commit/195f22f)** <!--([exceptions](https://github.com/yt-dlp/yt-dlp/issues/21))--> and **youtube-dlc v2020.11.11-3+ [commit/f9401f2](https://github.com/blackjack4494/yt-dlc/commit/f9401f2a91987068139c5f757b12fc711d4c0cee)**: You get all the features and patches of [youtube-dlc](https://github.com/blackjack4494/yt-dlc) in addition to the latest [youtube-dl](https://github.com/ytdl-org/youtube-dl)
|
* Merged with **youtube-dl v2021.12.17+ [commit/2dd6c6e](https://github.com/ytdl-org/youtube-dl/commit/2dd6c6e)** ([exceptions](https://github.com/yt-dlp/yt-dlp/issues/21)) and **youtube-dlc v2020.11.11-3+ [commit/f9401f2](https://github.com/blackjack4494/yt-dlc/commit/f9401f2a91987068139c5f757b12fc711d4c0cee)**: You get all the features and patches of [youtube-dlc](https://github.com/blackjack4494/yt-dlc) in addition to the latest [youtube-dl](https://github.com/ytdl-org/youtube-dl)
|
||||||
|
|
||||||
* **[SponsorBlock Integration](#sponsorblock-options)**: You can mark/remove sponsor sections in YouTube videos by utilizing the [SponsorBlock](https://sponsor.ajay.app) API
|
* **[SponsorBlock Integration](#sponsorblock-options)**: You can mark/remove sponsor sections in YouTube videos by utilizing the [SponsorBlock](https://sponsor.ajay.app) API
|
||||||
|
|
||||||
|
@ -114,13 +114,15 @@ # NEW FEATURES
|
||||||
|
|
||||||
* **Output template improvements**: Output templates can now have date-time formatting, numeric offsets, object traversal etc. See [output template](#output-template) for details. Even more advanced operations can also be done with the help of `--parse-metadata` and `--replace-in-metadata`
|
* **Output template improvements**: Output templates can now have date-time formatting, numeric offsets, object traversal etc. See [output template](#output-template) for details. Even more advanced operations can also be done with the help of `--parse-metadata` and `--replace-in-metadata`
|
||||||
|
|
||||||
* **Other new options**: Many new options have been added such as `--alias`, `--print`, `--concat-playlist`, `--wait-for-video`, `--retry-sleep`, `--sleep-requests`, `--convert-thumbnails`, `--force-download-archive`, `--force-overwrites`, `--break-on-reject` etc
|
* **Other new options**: Many new options have been added such as `--alias`, `--print`, `--concat-playlist`, `--wait-for-video`, `--retry-sleep`, `--sleep-requests`, `--convert-thumbnails`, `--force-download-archive`, `--force-overwrites`, `--break-match-filter` etc
|
||||||
|
|
||||||
* **Improvements**: Regex and other operators in `--format`/`--match-filter`, multiple `--postprocessor-args` and `--downloader-args`, faster archive checking, more [format selection options](#format-selection), merge multi-video/audio, multiple `--config-locations`, `--exec` at different stages, etc
|
* **Improvements**: Regex and other operators in `--format`/`--match-filter`, multiple `--postprocessor-args` and `--downloader-args`, faster archive checking, more [format selection options](#format-selection), merge multi-video/audio, multiple `--config-locations`, `--exec` at different stages, etc
|
||||||
|
|
||||||
* **Plugins**: Extractors and PostProcessors can be loaded from an external file. See [plugins](#plugins) for details
|
* **Plugins**: Extractors and PostProcessors can be loaded from an external file. See [plugins](#plugins) for details
|
||||||
|
|
||||||
* **Self-updater**: The releases can be updated using `yt-dlp -U`
|
* **Self updater**: The releases can be updated using `yt-dlp -U`, and downgraded using `--update-to` if required
|
||||||
|
|
||||||
|
* **Nightly builds**: [Automated nightly builds](#update-channels) can be used with `--update-to nightly`
|
||||||
|
|
||||||
See [changelog](Changelog.md) or [commits](https://github.com/yt-dlp/yt-dlp/commits) for the full list of changes
|
See [changelog](Changelog.md) or [commits](https://github.com/yt-dlp/yt-dlp/commits) for the full list of changes
|
||||||
|
|
||||||
|
@@ -130,6 +132,7 @@ ### Differences in default behavior

 Some of yt-dlp's default options are different from that of youtube-dl and youtube-dlc:

+* yt-dlp supports only [Python 3.7+](## "Windows 7"), and *may* remove support for more versions as they [become EOL](https://devguide.python.org/versions/#python-release-cycle); while [youtube-dl still supports Python 2.6+ and 3.2+](https://github.com/ytdl-org/youtube-dl/issues/30568#issue-1118238743)
 * The options `--auto-number` (`-A`), `--title` (`-t`) and `--literal` (`-l`), no longer work. See [removed options](#Removed) for details
 * `avconv` is not supported as an alternative to `ffmpeg`
 * yt-dlp stores config files in slightly different locations to youtube-dl. See [CONFIGURATION](#configuration) for a list of correct locations

@@ -180,12 +183,25 @@ # INSTALLATION

 ## UPDATE
-You can use `yt-dlp -U` to update if you are [using the release binaries](#release-files)
+You can use `yt-dlp -U` to update if you are using the [release binaries](#release-files)

 If you [installed with PIP](https://github.com/yt-dlp/yt-dlp/wiki/Installation#with-pip), simply re-run the same command that was used to install the program

 For other third-party package managers, see [the wiki](https://github.com/yt-dlp/yt-dlp/wiki/Installation#third-party-package-managers) or refer their documentation

+<a id="update-channels"/>
+
+There are currently two release channels for binaries, `stable` and `nightly`.
+`stable` is the default channel, and many of its changes have been tested by users of the nightly channel.
+The `nightly` channel has releases built after each push to the master branch, and will have the most recent fixes and additions, but also have more risk of regressions. They are available in [their own repo](https://github.com/yt-dlp/yt-dlp-nightly-builds/releases).
+
+When using `--update`/`-U`, a release binary will only update to its current channel.
+This release channel can be changed by using the `--update-to` option. `--update-to` can also be used to upgrade or downgrade to specific tags from a channel.
+
+Example usage:
+* `yt-dlp --update-to nightly` change to `nightly` channel and update to its latest release
+* `yt-dlp --update-to stable@2023.02.17` upgrade/downgrade to release to `stable` channel tag `2023.02.17`
+* `yt-dlp --update-to 2023.01.06` upgrade/downgrade to tag `2023.01.06` if it exists on the current channel
+
 <!-- MANPAGE: BEGIN EXCLUDED SECTION -->
 ## RELEASE FILES
@@ -218,11 +234,20 @@ #### Misc
 :---|:---
 [yt-dlp.tar.gz](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp.tar.gz)|Source tarball
 [SHA2-512SUMS](https://github.com/yt-dlp/yt-dlp/releases/latest/download/SHA2-512SUMS)|GNU-style SHA512 sums
+[SHA2-512SUMS.sig](https://github.com/yt-dlp/yt-dlp/releases/latest/download/SHA2-512SUMS.sig)|GPG signature file for SHA512 sums
 [SHA2-256SUMS](https://github.com/yt-dlp/yt-dlp/releases/latest/download/SHA2-256SUMS)|GNU-style SHA256 sums
+[SHA2-256SUMS.sig](https://github.com/yt-dlp/yt-dlp/releases/latest/download/SHA2-256SUMS.sig)|GPG signature file for SHA256 sums
+
+The public key that can be used to verify the GPG signatures is [available here](https://github.com/yt-dlp/yt-dlp/blob/master/public.key)
+Example usage:
+```
+curl -L https://github.com/yt-dlp/yt-dlp/raw/master/public.key | gpg --import
+gpg --verify SHA2-256SUMS.sig SHA2-256SUMS
+gpg --verify SHA2-512SUMS.sig SHA2-512SUMS
+```
 <!-- MANPAGE: END EXCLUDED SECTION -->

-**Note**: The manpages, shell completion files etc. are available inside the [source tarball](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp.tar.gz)
+**Note**: The manpages, shell completion files etc. are available in the [source tarball](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp.tar.gz)

 ## DEPENDENCIES
 Python versions 3.7+ (CPython and PyPy) are supported. Other versions and implementations may or may not work correctly.

@@ -310,11 +335,15 @@ ### Standalone Py2Exe Builds (Windows)

 ### Related scripts

-* **`devscripts/update-version.py [revision]`** - Update the version number based on current date
-* **`devscripts/set-variant.py variant [-M update_message]`** - Set the build variant of the executable
+* **`devscripts/update-version.py`** - Update the version number based on current date.
+* **`devscripts/set-variant.py`** - Set the build variant of the executable.
+* **`devscripts/make_changelog.py`** - Create a markdown changelog using short commit messages and update `CONTRIBUTORS` file.
 * **`devscripts/make_lazy_extractors.py`** - Create lazy extractors. Running this before building the binaries (any variant) will improve their startup performance. Set the environment variable `YTDLP_NO_LAZY_EXTRACTORS=1` if you wish to forcefully disable lazy extractor loading.

-You can also fork the project on GitHub and run your fork's [build workflow](.github/workflows/build.yml) to automatically build a full release
+Note: See their `--help` for more info.
+
+### Forking the project
+If you fork the project on GitHub, you can run your fork's [build workflow](.github/workflows/build.yml) to automatically build the selected version(s) as artifacts. Alternatively, you can run the [release workflow](.github/workflows/release.yml) or enable the [nightly workflow](.github/workflows/release-nightly.yml) to create full (pre-)releases.

 # USAGE AND OPTIONS

@@ -330,6 +359,11 @@ ## General Options:
     --version                       Print program version and exit
     -U, --update                    Update this program to the latest version
     --no-update                     Do not check for updates (default)
+    --update-to [CHANNEL]@[TAG]     Upgrade/downgrade to a specific version.
+                                    CHANNEL and TAG defaults to "stable" and
+                                    "latest" respectively if omitted; See
+                                    "UPDATE" for details. Supported channels:
+                                    stable, nightly
     -i, --ignore-errors             Ignore download and postprocessing errors.
                                     The download will be considered successful
                                     even if the postprocessing fails

@@ -456,9 +490,8 @@ ## Video Selection:
     --date DATE                     Download only videos uploaded on this date.
                                     The date can be "YYYYMMDD" or in the format
                                     [now|today|yesterday][-N[day|week|month|year]].
-                                    E.g. "--date today-2weeks" downloads
-                                    only videos uploaded on the same day two
-                                    weeks ago
+                                    E.g. "--date today-2weeks" downloads only
+                                    videos uploaded on the same day two weeks ago
     --datebefore DATE               Download only videos uploaded on or before
                                     this date. The date formats accepted is the
                                     same as --date
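
For illustration, a minimal sketch of the same date window through the Python API's `daterange` option, using `yt_dlp.utils.DateRange` (the URL is hypothetical; this is an assumption about equivalent usage, not part of the diff):

```python
import yt_dlp
from yt_dlp.utils import DateRange

ydl_opts = {
    # Same date syntax as the CLI: "YYYYMMDD" or [now|today|yesterday][-N[day|week|month|year]]
    'daterange': DateRange('today-2weeks', 'today-2weeks'),
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/@example/videos'])  # hypothetical URL
```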
@@ -485,7 +518,10 @@ ## Video Selection:
                                     dogs" (caseless). Use "--match-filter -" to
                                     interactively ask whether to download each
                                     video
-    --no-match-filter               Do not use generic video filter (default)
+    --no-match-filter               Do not use any --match-filter (default)
+    --break-match-filters FILTER    Same as "--match-filters" but stops the
+                                    download process when a video is rejected
+    --no-break-match-filters        Do not use any --break-match-filters (default)
     --no-playlist                   Download only the video, if the URL refers
                                     to a video and a playlist
     --yes-playlist                  Download the playlist, if the URL refers to
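
A minimal sketch of the same filtering via the Python API, using `yt_dlp.utils.match_filter_func` (the URL is hypothetical; this illustrates equivalent usage and is not part of the diff):

```python
import yt_dlp
from yt_dlp.utils import match_filter_func

ydl_opts = {
    # Equivalent to: --match-filter "!is_live & like_count>?100"
    'match_filter': match_filter_func('!is_live & like_count>?100'),
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/@example/videos'])  # hypothetical URL
```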
@@ -499,11 +535,9 @@ ## Video Selection:
     --max-downloads NUMBER          Abort after downloading NUMBER files
     --break-on-existing             Stop the download process when encountering
                                     a file that is in the archive
-    --break-on-reject               Stop the download process when encountering
-                                    a file that has been filtered out
     --break-per-input               Alters --max-downloads, --break-on-existing,
-                                    --break-on-reject, and autonumber to reset
-                                    per input URL
+                                    --break-match-filter, and autonumber to
+                                    reset per input URL
     --no-break-per-input            --break-on-existing and similar options
                                     terminates the entire download queue
     --skip-playlist-after-errors N  Number of allowed failures until the rest of

@@ -788,7 +822,7 @@ ## Workarounds:
     --prefer-insecure               Use an unencrypted connection to retrieve
                                     information about the video (Currently
                                     supported only for YouTube)
-    --add-header FIELD:VALUE        Specify a custom HTTP header and its value,
+    --add-headers FIELD:VALUE       Specify a custom HTTP header and its value,
                                     separated by a colon ":". You can use this
                                     option multiple times
     --bidi-workaround               Work around terminals that lack

@@ -1227,7 +1261,7 @@ # OUTPUT TEMPLATE

 Additionally, you can set different output templates for the various metadata files separately from the general output template by specifying the type of file followed by the template separated by a colon `:`. The different file types supported are `subtitle`, `thumbnail`, `description`, `annotation` (deprecated), `infojson`, `link`, `pl_thumbnail`, `pl_description`, `pl_infojson`, `chapter`, `pl_video`. E.g. `-o "%(title)s.%(ext)s" -o "thumbnail:%(title)s\%(title)s.%(ext)s"` will put the thumbnails in a folder with the same name as the video. If any of the templates is empty, that type of file will not be written. E.g. `--write-thumbnail -o "thumbnail:"` will write thumbnails only for playlists and not for video.

-<a id="outtmpl-postprocess-note"></a>
+<a id="outtmpl-postprocess-note"/>

 **Note**: Due to post-processing (i.e. merging etc.), the actual output filename might differ. Use `--print after_move:filepath` to get the name after all post-processing is complete.
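
A minimal sketch of per-file-type output templates through the Python API, assuming `outtmpl` may be given as a dict keyed by file type (not part of the diff):

```python
import yt_dlp

ydl_opts = {
    'writethumbnail': True,
    'outtmpl': {
        'default': '%(title)s.%(ext)s',
        # Put thumbnails in a folder with the same name as the video
        'thumbnail': '%(title)s/%(title)s.%(ext)s',
        # An empty template means that type of file will not be written
        'infojson': '',
    },
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```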
@@ -1511,7 +1545,7 @@ ## Sorting Formats
 - `source`: The preference of the source
 - `proto`: Protocol used for download (`https`/`ftps` > `http`/`ftp` > `m3u8_native`/`m3u8` > `http_dash_segments` > `websocket_frag` > `mms`/`rtsp` > `f4f`/`f4m`)
 - `vcodec`: Video Codec (`av01` > `vp9.2` > `vp9` > `h265` > `h264` > `vp8` > `h263` > `theora` > other)
-- `acodec`: Audio Codec (`flac`/`alac` > `wav`/`aiff` > `opus` > `vorbis` > `aac` > `mp4a` > `mp3` `ac4` > > `eac3` > `ac3` > `dts` > other)
+- `acodec`: Audio Codec (`flac`/`alac` > `wav`/`aiff` > `opus` > `vorbis` > `aac` > `mp4a` > `mp3` > `ac4` > `eac3` > `ac3` > `dts` > other)
 - `codec`: Equivalent to `vcodec,acodec`
 - `vext`: Video Extension (`mp4` > `mov` > `webm` > `flv` > other). If `--prefer-free-formats` is used, `webm` is preferred.
 - `aext`: Audio Extension (`m4a` > `aac` > `mp3` > `ogg` > `opus` > `webm` > other). If `--prefer-free-formats` is used, the order changes to `ogg` > `opus` > `webm` > `mp3` > `m4a` > `aac`

@@ -1741,6 +1775,8 @@ # EXTRACTOR ARGUMENTS

 Some extractors accept additional arguments which can be passed using `--extractor-args KEY:ARGS`. `ARGS` is a `;` (semicolon) separated string of `ARG=VAL1,VAL2`. E.g. `--extractor-args "youtube:player-client=android_embedded,web;include_live_dash" --extractor-args "funimation:version=uncut"`

+Note: In CLI, `ARG` can use `-` instead of `_`; e.g. `youtube:player-client` becomes `youtube:player_client`
+
 The following extractors use this feature:

 #### youtube
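
A minimal sketch of passing extractor arguments through the Python API's `extractor_args` option, where values are lists of strings (illustrative only, not part of the diff):

```python
import yt_dlp

ydl_opts = {
    # Roughly equivalent to: --extractor-args "youtube:player-client=android_embedded,web;skip=dash"
    'extractor_args': {
        'youtube': {
            'player_client': ['android_embedded', 'web'],
            'skip': ['dash'],
        },
    },
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    info = ydl.extract_info('https://www.youtube.com/watch?v=BaW_jenozKc', download=False)
```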
@@ -1751,6 +1787,7 @@ #### youtube
 * `comment_sort`: `top` or `new` (default) - choose comment sorting mode (on YouTube's side)
 * `max_comments`: Limit the amount of comments to gather. Comma-separated list of integers representing `max-comments,max-parents,max-replies,max-replies-per-thread`. Default is `all,all,all,all`
     * E.g. `all,all,1000,10` will get a maximum of 1000 replies total, with up to 10 replies per thread. `1000,all,100` will get a maximum of 1000 comments, with a maximum of 100 replies total
+* `include_duplicate_formats`: Extract formats with identical content but different URLs or protocol. This is useful if some of the formats are unavailable or throttled.
 * `include_incomplete_formats`: Extract formats that cannot be downloaded completely (live dash and post-live m3u8)
 * `innertube_host`: Innertube API host to use for all API requests; e.g. `studio.youtube.com`, `youtubei.googleapis.com`. Note that cookies exported from one subdomain will not work on others
 * `innertube_key`: Innertube API key to use for all API requests

@@ -1887,7 +1924,7 @@ # EMBEDDING YT-DLP
     ydl.download(URLS)
 ```

-Most likely, you'll want to use various options. For a list of options available, have a look at [`yt_dlp/YoutubeDL.py`](yt_dlp/YoutubeDL.py#L180).
+Most likely, you'll want to use various options. For a list of options available, have a look at [`yt_dlp/YoutubeDL.py`](yt_dlp/YoutubeDL.py#L184).

 **Tip**: If you are porting your code from youtube-dl to yt-dlp, one important point to look out for is that we do not guarantee the return value of `YoutubeDL.extract_info` to be json serializable, or even be a dictionary. It will be dictionary-like, but if you want to ensure it is a serializable dictionary, pass it through `YoutubeDL.sanitize_info` as shown in the [example below](#extracting-information)
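
A minimal sketch of the `sanitize_info` pattern the tip above describes, making the extracted result JSON-serializable (not part of the diff):

```python
import json

import yt_dlp

URL = 'https://www.youtube.com/watch?v=BaW_jenozKc'
with yt_dlp.YoutubeDL({'quiet': True}) as ydl:
    info = ydl.extract_info(URL, download=False)
    # sanitize_info converts the dictionary-like result into plain JSON types
    print(json.dumps(ydl.sanitize_info(info)))
```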
@@ -2097,6 +2134,7 @@ #### Redundant options
     --reject-title REGEX             --match-filter "title !~= (?i)REGEX"
     --min-views COUNT                --match-filter "view_count >=? COUNT"
     --max-views COUNT                --match-filter "view_count <=? COUNT"
+    --break-on-reject                Use --break-match-filter
     --user-agent UA                  --add-header "User-Agent:UA"
    --referer URL                    --add-header "Referer:URL"
     --playlist-start NUMBER          -I NUMBER:

12 devscripts/changelog_override.json Normal file

@@ -0,0 +1,12 @@
[
    {
        "action": "add",
        "when": "776d1c3f0c9b00399896dd2e40e78e9a43218109",
        "short": "[priority] **A new release type has been added!**\n * [`nightly`](https://github.com/yt-dlp/yt-dlp/releases/tag/nightly) builds will be made after each push, containing the latest fixes (but also possibly bugs).\n * When using `--update`/`-U`, a release binary will only update to its current channel (either `stable` or `nightly`).\n * The `--update-to` option has been added allowing the user more control over program upgrades (or downgrades).\n * `--update-to` can change the release channel (`stable`, `nightly`) and also upgrade or downgrade to specific tags.\n * **Usage**: `--update-to CHANNEL`, `--update-to TAG`, `--update-to CHANNEL@TAG`"
    },
    {
        "action": "add",
        "when": "776d1c3f0c9b00399896dd2e40e78e9a43218109",
        "short": "[priority] **YouTube throttling fixes!**"
    }
]
96 devscripts/changelog_override.schema.json Normal file

@@ -0,0 +1,96 @@
{
    "$schema": "http://json-schema.org/draft/2020-12/schema",
    "type": "array",
    "uniqueItems": true,
    "items": {
        "type": "object",
        "oneOf": [
            {
                "type": "object",
                "properties": {
                    "action": {
                        "enum": [
                            "add"
                        ]
                    },
                    "when": {
                        "type": "string",
                        "pattern": "^([0-9a-f]{40}|\\d{4}\\.\\d{2}\\.\\d{2})$"
                    },
                    "hash": {
                        "type": "string",
                        "pattern": "^[0-9a-f]{40}$"
                    },
                    "short": {
                        "type": "string"
                    },
                    "authors": {
                        "type": "array",
                        "items": {
                            "type": "string"
                        }
                    }
                },
                "required": [
                    "action",
                    "short"
                ]
            },
            {
                "type": "object",
                "properties": {
                    "action": {
                        "enum": [
                            "remove"
                        ]
                    },
                    "when": {
                        "type": "string",
                        "pattern": "^([0-9a-f]{40}|\\d{4}\\.\\d{2}\\.\\d{2})$"
                    },
                    "hash": {
                        "type": "string",
                        "pattern": "^[0-9a-f]{40}$"
                    }
                },
                "required": [
                    "action",
                    "hash"
                ]
            },
            {
                "type": "object",
                "properties": {
                    "action": {
                        "enum": [
                            "change"
                        ]
                    },
                    "when": {
                        "type": "string",
                        "pattern": "^([0-9a-f]{40}|\\d{4}\\.\\d{2}\\.\\d{2})$"
                    },
                    "hash": {
                        "type": "string",
                        "pattern": "^[0-9a-f]{40}$"
                    },
                    "short": {
                        "type": "string"
                    },
                    "authors": {
                        "type": "array",
                        "items": {
                            "type": "string"
                        }
                    }
                },
                "required": [
                    "action",
                    "hash",
                    "short",
                    "authors"
                ]
            }
        ]
    }
}
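
For illustration, the override file can be checked against this schema with the third-party `jsonschema` package (an assumption for demonstration; the repo itself does not depend on it):

```python
import json

import jsonschema  # pip install jsonschema

with open('devscripts/changelog_override.json') as f:
    overrides = json.load(f)
with open('devscripts/changelog_override.schema.json') as f:
    schema = json.load(f)

# Raises jsonschema.ValidationError if an override entry is malformed
jsonschema.validate(instance=overrides, schema=schema)
```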
470 devscripts/make_changelog.py Normal file

@@ -0,0 +1,470 @@
from __future__ import annotations

# Allow direct execution
import os
import sys

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import enum
import itertools
import json
import logging
import re
from collections import defaultdict
from dataclasses import dataclass
from functools import lru_cache
from pathlib import Path

from devscripts.utils import read_file, run_process, write_file

BASE_URL = 'https://github.com'
LOCATION_PATH = Path(__file__).parent
HASH_LENGTH = 7

logger = logging.getLogger(__name__)


class CommitGroup(enum.Enum):
    UPSTREAM = None
    PRIORITY = 'Important'
    CORE = 'Core'
    EXTRACTOR = 'Extractor'
    DOWNLOADER = 'Downloader'
    POSTPROCESSOR = 'Postprocessor'
    MISC = 'Misc.'

    @classmethod
    @lru_cache
    def commit_lookup(cls):
        return {
            name: group
            for group, names in {
                cls.PRIORITY: {''},
                cls.UPSTREAM: {'upstream'},
                cls.CORE: {
                    'aes',
                    'cache',
                    'compat_utils',
                    'compat',
                    'cookies',
                    'core',
                    'dependencies',
                    'jsinterp',
                    'outtmpl',
                    'plugins',
                    'update',
                    'utils',
                },
                cls.MISC: {
                    'build',
                    'cleanup',
                    'devscripts',
                    'docs',
                    'misc',
                    'test',
                },
                cls.EXTRACTOR: {'extractor', 'extractors'},
                cls.DOWNLOADER: {'downloader'},
                cls.POSTPROCESSOR: {'postprocessor'},
            }.items()
            for name in names
        }

    @classmethod
    def get(cls, value):
        result = cls.commit_lookup().get(value)
        if result:
            logger.debug(f'Mapped {value!r} => {result.name}')
        return result


@dataclass
class Commit:
    hash: str | None
    short: str
    authors: list[str]

    def __str__(self):
        result = f'{self.short!r}'

        if self.hash:
            result += f' ({self.hash[:HASH_LENGTH]})'

        if self.authors:
            authors = ', '.join(self.authors)
            result += f' by {authors}'

        return result


@dataclass
class CommitInfo:
    details: str | None
    sub_details: tuple[str, ...]
    message: str
    issues: list[str]
    commit: Commit
    fixes: list[Commit]

    def key(self):
        return ((self.details or '').lower(), self.sub_details, self.message)


class Changelog:
    MISC_RE = re.compile(r'(?:^|\b)(?:lint(?:ing)?|misc|format(?:ting)?|fixes)(?:\b|$)', re.IGNORECASE)

    def __init__(self, groups, repo):
        self._groups = groups
        self._repo = repo

    def __str__(self):
        return '\n'.join(self._format_groups(self._groups)).replace('\t', '    ')

    def _format_groups(self, groups):
        for item in CommitGroup:
            group = groups[item]
            if group:
                yield self.format_module(item.value, group)

    def format_module(self, name, group):
        result = f'\n#### {name} changes\n' if name else '\n'
        return result + '\n'.join(self._format_group(group))

    def _format_group(self, group):
        sorted_group = sorted(group, key=CommitInfo.key)
        detail_groups = itertools.groupby(sorted_group, lambda item: (item.details or '').lower())
        for _, items in detail_groups:
            items = list(items)
            details = items[0].details
            if not details:
                indent = ''
            else:
                yield f'- {details}'
                indent = '\t'

            if details == 'cleanup':
                items, cleanup_misc_items = self._filter_cleanup_misc_items(items)

            sub_detail_groups = itertools.groupby(items, lambda item: tuple(map(str.lower, item.sub_details)))
            for sub_details, entries in sub_detail_groups:
                if not sub_details:
                    for entry in entries:
                        yield f'{indent}- {self.format_single_change(entry)}'
                    continue

                entries = list(entries)
                prefix = f'{indent}- {", ".join(entries[0].sub_details)}'
                if len(entries) == 1:
                    yield f'{prefix}: {self.format_single_change(entries[0])}'
                    continue

                yield prefix
                for entry in entries:
                    yield f'{indent}\t- {self.format_single_change(entry)}'

            if details == 'cleanup' and cleanup_misc_items:
                yield from self._format_cleanup_misc_sub_group(cleanup_misc_items)

    def _filter_cleanup_misc_items(self, items):
        cleanup_misc_items = defaultdict(list)
        non_misc_items = []
        for item in items:
            if self.MISC_RE.search(item.message):
                cleanup_misc_items[tuple(item.commit.authors)].append(item)
            else:
                non_misc_items.append(item)

        return non_misc_items, cleanup_misc_items

    def _format_cleanup_misc_sub_group(self, group):
        prefix = '\t- Miscellaneous'
        if len(group) == 1:
            yield f'{prefix}: {next(self._format_cleanup_misc_items(group))}'
            return

        yield prefix
        for message in self._format_cleanup_misc_items(group):
            yield f'\t\t- {message}'

    def _format_cleanup_misc_items(self, group):
        for authors, infos in group.items():
            message = ', '.join(
                self._format_message_link(None, info.commit.hash)
                for info in sorted(infos, key=lambda item: item.commit.hash or ''))
            yield f'{message} by {self._format_authors(authors)}'

    def format_single_change(self, info):
        message = self._format_message_link(info.message, info.commit.hash)
        if info.issues:
            message = f'{message} ({self._format_issues(info.issues)})'

        if info.commit.authors:
            message = f'{message} by {self._format_authors(info.commit.authors)}'

        if info.fixes:
            fix_message = ', '.join(f'{self._format_message_link(None, fix.hash)}' for fix in info.fixes)

            authors = sorted({author for fix in info.fixes for author in fix.authors}, key=str.casefold)
            if authors != info.commit.authors:
                fix_message = f'{fix_message} by {self._format_authors(authors)}'

            message = f'{message} (With fixes in {fix_message})'

        return message

    def _format_message_link(self, message, hash):
        assert message or hash, 'Improperly defined commit message or override'
        message = message if message else hash[:HASH_LENGTH]
        return f'[{message}]({self.repo_url}/commit/{hash})' if hash else message

    def _format_issues(self, issues):
        return ', '.join(f'[#{issue}]({self.repo_url}/issues/{issue})' for issue in issues)

    @staticmethod
    def _format_authors(authors):
        return ', '.join(f'[{author}]({BASE_URL}/{author})' for author in authors)

    @property
    def repo_url(self):
        return f'{BASE_URL}/{self._repo}'


class CommitRange:
    COMMAND = 'git'
    COMMIT_SEPARATOR = '-----'

    AUTHOR_INDICATOR_RE = re.compile(r'Authored by:? ', re.IGNORECASE)
    MESSAGE_RE = re.compile(r'''
        (?:\[
            (?P<prefix>[^\]\/:,]+)
            (?:/(?P<details>[^\]:,]+))?
            (?:[:,](?P<sub_details>[^\]]+))?
        \]\ )?
        (?:(?P<sub_details_alt>`?[^:`]+`?): )?
        (?P<message>.+?)
        (?:\ \((?P<issues>\#\d+(?:,\ \#\d+)*)\))?
        ''', re.VERBOSE | re.DOTALL)
    EXTRACTOR_INDICATOR_RE = re.compile(r'(?:Fix|Add)\s+Extractors?', re.IGNORECASE)
    FIXES_RE = re.compile(r'(?i:Fix(?:es)?(?:\s+bugs?)?(?:\s+in|\s+for)?|Revert)\s+([\da-f]{40})')
    UPSTREAM_MERGE_RE = re.compile(r'Update to ytdl-commit-([\da-f]+)')

    def __init__(self, start, end, default_author=None):
        self._start, self._end = start, end
        self._commits, self._fixes = self._get_commits_and_fixes(default_author)
        self._commits_added = []

    def __iter__(self):
        return iter(itertools.chain(self._commits.values(), self._commits_added))

    def __len__(self):
        return len(self._commits) + len(self._commits_added)

    def __contains__(self, commit):
        if isinstance(commit, Commit):
            if not commit.hash:
                return False
            commit = commit.hash

        return commit in self._commits

    def _get_commits_and_fixes(self, default_author):
        result = run_process(
            self.COMMAND, 'log', f'--format=%H%n%s%n%b%n{self.COMMIT_SEPARATOR}',
            f'{self._start}..{self._end}' if self._start else self._end).stdout

        commits = {}
        fixes = defaultdict(list)
        lines = iter(result.splitlines(False))
        for i, commit_hash in enumerate(lines):
            short = next(lines)
            skip = short.startswith('Release ') or short == '[version] update'

            authors = [default_author] if default_author else []
            for line in iter(lambda: next(lines), self.COMMIT_SEPARATOR):
                match = self.AUTHOR_INDICATOR_RE.match(line)
                if match:
                    authors = sorted(map(str.strip, line[match.end():].split(',')), key=str.casefold)

            commit = Commit(commit_hash, short, authors)
            if skip and (self._start or not i):
                logger.debug(f'Skipped commit: {commit}')
                continue
            elif skip:
                logger.debug(f'Reached Release commit, breaking: {commit}')
                break

            fix_match = self.FIXES_RE.search(commit.short)
            if fix_match:
                commitish = fix_match.group(1)
                fixes[commitish].append(commit)

            commits[commit.hash] = commit

        for commitish, fix_commits in fixes.items():
            if commitish in commits:
                hashes = ', '.join(commit.hash[:HASH_LENGTH] for commit in fix_commits)
                logger.info(f'Found fix(es) for {commitish[:HASH_LENGTH]}: {hashes}')
                for fix_commit in fix_commits:
                    del commits[fix_commit.hash]
            else:
                logger.debug(f'Commit with fixes not in changes: {commitish[:HASH_LENGTH]}')

        return commits, fixes

    def apply_overrides(self, overrides):
        for override in overrides:
            when = override.get('when')
            if when and when not in self and when != self._start:
                logger.debug(f'Ignored {when!r}, not in commits {self._start!r}')
                continue

            override_hash = override.get('hash')
            if override['action'] == 'add':
                commit = Commit(override.get('hash'), override['short'], override.get('authors') or [])
                logger.info(f'ADD    {commit}')
                self._commits_added.append(commit)

            elif override['action'] == 'remove':
                if override_hash in self._commits:
                    logger.info(f'REMOVE {self._commits[override_hash]}')
                    del self._commits[override_hash]

            elif override['action'] == 'change':
                if override_hash not in self._commits:
                    continue
                commit = Commit(override_hash, override['short'], override['authors'])
                logger.info(f'CHANGE {self._commits[commit.hash]} -> {commit}')
                self._commits[commit.hash] = commit

        self._commits = {key: value for key, value in reversed(self._commits.items())}

    def groups(self):
        groups = defaultdict(list)
        for commit in self:
            upstream_re = self.UPSTREAM_MERGE_RE.match(commit.short)
            if upstream_re:
                commit.short = f'[upstream] Merge up to youtube-dl {upstream_re.group(1)}'

            match = self.MESSAGE_RE.fullmatch(commit.short)
            if not match:
                logger.error(f'Error parsing short commit message: {commit.short!r}')
                continue

            prefix, details, sub_details, sub_details_alt, message, issues = match.groups()
            group = None
            if prefix:
                if prefix == 'priority':
                    prefix, _, details = (details or '').partition('/')
                    logger.debug(f'Priority: {message!r}')
                    group = CommitGroup.PRIORITY

                if not details and prefix:
                    if prefix not in ('core', 'downloader', 'extractor', 'misc', 'postprocessor', 'upstream'):
                        logger.debug(f'Replaced details with {prefix!r}')
                        details = prefix or None

                if details == 'common':
                    details = None

                if details:
                    details = details.strip()

            else:
                group = CommitGroup.CORE

            sub_details = f'{sub_details or ""},{sub_details_alt or ""}'.replace(':', ',')
            sub_details = tuple(filter(None, map(str.strip, sub_details.split(','))))

            issues = [issue.strip()[1:] for issue in issues.split(',')] if issues else []

            if not group:
                group = CommitGroup.get(prefix.lower())
                if not group:
                    if self.EXTRACTOR_INDICATOR_RE.search(commit.short):
                        group = CommitGroup.EXTRACTOR
                    else:
                        group = CommitGroup.POSTPROCESSOR
                    logger.warning(f'Failed to map {commit.short!r}, selected {group.name}')

            commit_info = CommitInfo(
                details, sub_details, message.strip(),
                issues, commit, self._fixes[commit.hash])
            logger.debug(f'Resolved {commit.short!r} to {commit_info!r}')
            groups[group].append(commit_info)

        return groups


def get_new_contributors(contributors_path, commits):
    contributors = set()
    if contributors_path.exists():
        for line in read_file(contributors_path).splitlines():
            author, _, _ = line.strip().partition(' (')
            authors = author.split('/')
            contributors.update(map(str.casefold, authors))

    new_contributors = set()
    for commit in commits:
        for author in commit.authors:
            author_folded = author.casefold()
            if author_folded not in contributors:
                contributors.add(author_folded)
                new_contributors.add(author)

    return sorted(new_contributors, key=str.casefold)


if __name__ == '__main__':
    import argparse

    parser = argparse.ArgumentParser(
        description='Create a changelog markdown from a git commit range')
    parser.add_argument(
        'commitish', default='HEAD', nargs='?',
        help='The commitish to create the range from (default: %(default)s)')
    parser.add_argument(
        '-v', '--verbosity', action='count', default=0,
        help='increase verbosity (can be used twice)')
    parser.add_argument(
        '-c', '--contributors', action='store_true',
        help='update CONTRIBUTORS file (default: %(default)s)')
    parser.add_argument(
        '--contributors-path', type=Path, default=LOCATION_PATH.parent / 'CONTRIBUTORS',
        help='path to the CONTRIBUTORS file')
    parser.add_argument(
        '--no-override', action='store_true',
        help='skip override json in commit generation (default: %(default)s)')
    parser.add_argument(
        '--override-path', type=Path, default=LOCATION_PATH / 'changelog_override.json',
        help='path to the changelog_override.json file')
    parser.add_argument(
        '--default-author', default='pukkandan',
        help='the author to use without a author indicator (default: %(default)s)')
    parser.add_argument(
        '--repo', default='yt-dlp/yt-dlp',
        help='the github repository to use for the operations (default: %(default)s)')
    args = parser.parse_args()

    logging.basicConfig(
        datefmt='%Y-%m-%d %H-%M-%S', format='{asctime} | {levelname:<8} | {message}',
        level=logging.WARNING - 10 * args.verbosity, style='{', stream=sys.stderr)

    commits = CommitRange(None, args.commitish, args.default_author)

    if not args.no_override:
        if args.override_path.exists():
            overrides = json.loads(read_file(args.override_path))
            commits.apply_overrides(overrides)
        else:
            logger.warning(f'File {args.override_path.as_posix()} does not exist')

    logger.info(f'Loaded {len(commits)} commits')

    new_contributors = get_new_contributors(args.contributors_path, commits)
    if new_contributors:
        if args.contributors:
            write_file(args.contributors_path, '\n'.join(new_contributors) + '\n', mode='a')
        logger.info(f'New contributors: {", ".join(new_contributors)}')

    print(Changelog(commits.groups(), args.repo))
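
For illustration, the pieces above compose as in the `__main__` block; a minimal programmatic sketch (not part of the diff):

```python
import json

from devscripts.make_changelog import Changelog, CommitRange
from devscripts.utils import read_file

# Collect all commits reachable from HEAD, apply the JSON overrides,
# then render the grouped changelog as markdown
commits = CommitRange(None, 'HEAD', 'pukkandan')
commits.apply_overrides(json.loads(read_file('devscripts/changelog_override.json')))
print(Changelog(commits.groups(), 'yt-dlp/yt-dlp'))
```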
@@ -24,6 +24,8 @@
       options:
         - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
           required: true
+        - label: "If using API, add `'verbose': True` to `YoutubeDL` params instead"
+          required: false
         - label: Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
           required: true
   - type: textarea

@@ -58,7 +60,7 @@
       label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
       description: Fill all fields even if you think it is irrelevant for the issue
       options:
-        - label: I understand that I will be **blocked** if I remove or skip any mandatory\\* field
+        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field
           required: true
 '''.strip()

@@ -45,33 +45,43 @@ def apply_patch(text, patch):
 delim = f'\n{" " * switch_col_width}'

 PATCHES = (
-    (  # Standardize update message
+    (  # Standardize `--update` message
         r'(?m)^( -U, --update\s+).+(\n \s.+)*$',
         r'\1Update this program to the latest version',
     ),
     (  # Headings
         r'(?m)^ (\w.+\n)(    (?=\w))?',
         r'## \1'
     ),
-    (  # Do not split URLs
+    (  # Fixup `--date` formatting
+        rf'(?m)( --date DATE.+({delim}[^\[]+)*)\[.+({delim}.+)*$',
+        (rf'\1[now|today|yesterday][-N[day|week|month|year]].{delim}'
+         f'E.g. "--date today-2weeks" downloads only{delim}'
+         'videos uploaded on the same day two weeks ago'),
+    ),
+    (  # Do not split URLs
         rf'({delim[:-1]})? (?P<label>\[\S+\] )?(?P<url>https?({delim})?:({delim})?/({delim})?/(({delim})?\S+)+)\s',
         lambda mobj: ''.join((delim, mobj.group('label') or '', re.sub(r'\s+', '', mobj.group('url')), '\n'))
     ),
     (  # Do not split "words"
         rf'(?m)({delim}\S+)+$',
         lambda mobj: ''.join((delim, mobj.group(0).replace(delim, '')))
     ),
     (  # Allow overshooting last line
         rf'(?m)^(?P<prev>.+)${delim}(?P<current>.+)$(?!{delim})',
         lambda mobj: (mobj.group().replace(delim, ' ')
                       if len(mobj.group()) - len(delim) + 1 <= max_width + ALLOWED_OVERSHOOT
                       else mobj.group())
     ),
     (  # Avoid newline when a space is available b/w switch and description
         DISABLE_PATCH,  # This creates issues with prepare_manpage
         r'(?m)^(\s{4}-.{%d})(%s)' % (switch_col_width - 6, delim),
         r'\1 '
     ),
+    (  # Replace brackets with a Markdown link
+        r'SponsorBlock API \((http.+)\)',
+        r'[SponsorBlock API](\1)'
+    ),
 )

 readme = read_file(README_FILE)

@@ -7,16 +7,17 @@
 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))


+import argparse
 import contextlib
-import subprocess
 import sys
 from datetime import datetime

-from devscripts.utils import read_version, write_file
+from devscripts.utils import read_version, run_process, write_file


-def get_new_version(revision):
-    version = datetime.utcnow().strftime('%Y.%m.%d')
+def get_new_version(version, revision):
+    if not version:
+        version = datetime.utcnow().strftime('%Y.%m.%d')

     if revision:
         assert revision.isdigit(), 'Revision must be a number'

@@ -30,27 +31,41 @@ def get_new_version(revision):

 def get_git_head():
     with contextlib.suppress(Exception):
-        sp = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], stdout=subprocess.PIPE)
-        return sp.communicate()[0].decode().strip() or None
+        return run_process('git', 'rev-parse', 'HEAD').stdout.strip()


-VERSION = get_new_version((sys.argv + [''])[1])
-GIT_HEAD = get_git_head()
-
-VERSION_FILE = f'''\
+VERSION_TEMPLATE = '''\
 # Autogenerated by devscripts/update-version.py

-__version__ = {VERSION!r}
+__version__ = {version!r}

-RELEASE_GIT_HEAD = {GIT_HEAD!r}
+RELEASE_GIT_HEAD = {git_head!r}

 VARIANT = None

 UPDATE_HINT = None

+CHANNEL = {channel!r}
 '''

-write_file('yt_dlp/version.py', VERSION_FILE)
-github_output = os.getenv('GITHUB_OUTPUT')
-if github_output:
-    write_file(github_output, f'ytdlp_version={VERSION}\n', 'a')
-print(f'\nVersion = {VERSION}, Git HEAD = {GIT_HEAD}')
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description='Update the version.py file')
+    parser.add_argument(
+        '-c', '--channel', choices=['stable', 'nightly'], default='stable',
+        help='Select update channel (default: %(default)s)')
+    parser.add_argument(
+        '-o', '--output', default='yt_dlp/version.py',
+        help='The output file to write to (default: %(default)s)')
+    parser.add_argument(
+        'version', nargs='?', default=None,
+        help='A version or revision to use instead of generating one')
+    args = parser.parse_args()
+
+    git_head = get_git_head()
+    version = (
+        args.version if args.version and '.' in args.version
+        else get_new_version(None, args.version))
+    write_file(args.output, VERSION_TEMPLATE.format(
+        version=version, git_head=git_head, channel=args.channel))
+
+    print(f'version={version} ({args.channel}), head={git_head}')

@@ -1,5 +1,6 @@
 import argparse
 import functools
+import subprocess


 def read_file(fname):

@@ -12,8 +13,8 @@ def write_file(fname, content, mode='w'):
         return f.write(content)


-# Get the version without importing the package
 def read_version(fname='yt_dlp/version.py'):
+    """Get the version without importing the package"""
     exec(compile(read_file(fname), fname, 'exec'))
     return locals()['__version__']

@@ -33,3 +34,13 @@ def get_filename_args(has_infile=False, default_outfile=None):

 def compose_functions(*functions):
     return lambda x: functools.reduce(lambda y, f: f(y), functions, x)
+
+
+def run_process(*args, **kwargs):
+    kwargs.setdefault('text', True)
+    kwargs.setdefault('check', True)
+    kwargs.setdefault('capture_output', True)
+    if kwargs['text']:
+        kwargs.setdefault('encoding', 'utf-8')
+        kwargs.setdefault('errors', 'replace')
+    return subprocess.run(args, **kwargs)
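
For illustration, `run_process` is a thin wrapper over `subprocess.run` that defaults to text mode, a checked exit status and captured output; this is how `get_git_head()` in update-version.py uses it (a usage sketch, not part of the diff):

```python
from devscripts.utils import run_process

# Raises CalledProcessError on failure; stdout is already decoded text
head = run_process('git', 'rev-parse', 'HEAD').stdout.strip()
print(head)
```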
29 public.key Normal file

@@ -0,0 +1,29 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBGP78C4BEAD0rF9zjGPAt0thlt5C1ebzccAVX7Nb1v+eqQjk+WEZdTETVCg3
WAM5ngArlHdm/fZqzUgO+pAYrB60GKeg7ffUDf+S0XFKEZdeRLYeAaqqKhSibVal
DjvOBOztu3W607HLETQAqA7wTPuIt2WqmpL60NIcyr27LxqmgdN3mNvZ2iLO+bP0
nKR/C+PgE9H4ytywDa12zMx6PmZCnVOOOu6XZEFmdUxxdQ9fFDqd9LcBKY2LDOcS
Yo1saY0YWiZWHtzVoZu1kOzjnS5Fjq/yBHJLImDH7pNxHm7s/PnaurpmQFtDFruk
t+2lhDnpKUmGr/I/3IHqH/X+9nPoS4uiqQ5HpblB8BK+4WfpaiEg75LnvuOPfZIP
KYyXa/0A7QojMwgOrD88ozT+VCkKkkJ+ijXZ7gHNjmcBaUdKK7fDIEOYI63Lyc6Q
WkGQTigFffSUXWHDCO9aXNhP3ejqFWgGMtCUsrbkcJkWuWY7q5ARy/05HbSM3K4D
U9eqtnxmiV1WQ8nXuI9JgJQRvh5PTkny5LtxqzcmqvWO9TjHBbrs14BPEO9fcXxK
L/CFBbzXDSvvAgArdqqlMoncQ/yicTlfL6qzJ8EKFiqW14QMTdAn6SuuZTodXCTi
InwoT7WjjuFPKKdvfH1GP4bnqdzTnzLxCSDIEtfyfPsIX+9GI7Jkk/zZjQARAQAB
tDdTaW1vbiBTYXdpY2tpICh5dC1kbHAgc2lnbmluZyBrZXkpIDxjb250YWN0QGdy
dWI0ay54eXo+iQJOBBMBCgA4FiEErAy75oSNaoc0ZK9OV89lkztadYEFAmP78C4C
GwMFCwkIBwIGFQoJCAsCBBYCAwECHgECF4AACgkQV89lkztadYEVqQ//cW7TxhXg
7Xbh2EZQzXml0egn6j8QaV9KzGragMiShrlvTO2zXfLXqyizrFP4AspgjSn/4NrI
8mluom+Yi+qr7DXT4BjQqIM9y3AjwZPdywe912Lxcw52NNoPZCm24I9T7ySc8lmR
FQvZC0w4H/VTNj/2lgJ1dwMflpwvNRiWa5YzcFGlCUeDIPskLx9++AJE+xwU3LYm
jQQsPBqpHHiTBEJzMLl+rfd9Fg4N+QNzpFkTDW3EPerLuvJniSBBwZthqxeAtw4M
UiAXh6JvCc2hJkKCoygRfM281MeolvmsGNyQm+axlB0vyldiPP6BnaRgZlx+l6MU
cPqgHblb7RW5j9lfr6OYL7SceBIHNv0CFrt1OnkGo/tVMwcs8LH3Ae4a7UJlIceL
V54aRxSsZU7w4iX+PB79BWkEsQzwKrUuJVOeL4UDwWajp75OFaUqbS/slDDVXvK5
OIeuth3mA/adjdvgjPxhRQjA3l69rRWIJDrqBSHldmRsnX6cvXTDy8wSXZgy51lP
m4IVLHnCy9m4SaGGoAsfTZS0cC9FgjUIyTyrq9M67wOMpUxnuB0aRZgJE1DsI23E
qdvcSNVlO+39xM/KPWUEh6b83wMn88QeW+DCVGWACQq5N3YdPnAJa50617fGbY6I
gXIoRHXkDqe23PZ/jURYCv0sjVtjPoVC+bg=
=bJkn
-----END PGP PUBLIC KEY BLOCK-----
32 pyinst.py

@@ -37,7 +37,7 @@ def main():
         '--icon=devscripts/logo.ico',
         '--upx-exclude=vcruntime140.dll',
         '--noconfirm',
-        *dependency_options(),
+        '--additional-hooks-dir=yt_dlp/__pyinstaller',
         *opts,
         'yt_dlp/__main__.py',
     ]

@@ -77,30 +77,6 @@ def version_to_list(version):
     return list(map(int, version_list)) + [0] * (4 - len(version_list))


-def dependency_options():
-    # Due to the current implementation, these are auto-detected, but explicitly add them just in case
-    dependencies = [pycryptodome_module(), 'mutagen', 'brotli', 'certifi', 'websockets']
-    excluded_modules = ('youtube_dl', 'youtube_dlc', 'test', 'ytdlp_plugins', 'devscripts')
-
-    yield from (f'--hidden-import={module}' for module in dependencies)
-    yield '--collect-submodules=websockets'
-    yield from (f'--exclude-module={module}' for module in excluded_modules)
-
-
-def pycryptodome_module():
-    try:
-        import Cryptodome  # noqa: F401
-    except ImportError:
-        try:
-            import Crypto  # noqa: F401
-            print('WARNING: Using Crypto since Cryptodome is not available. '
-                  'Install with: pip install pycryptodomex', file=sys.stderr)
-            return 'Crypto'
-        except ImportError:
-            pass
-    return 'Cryptodome'
-
-
 def set_version_info(exe, version):
     if OS_NAME == 'win32':
         windows_set_version(exe, version)

@@ -109,7 +85,6 @@ def set_version_info(exe, version):
 def windows_set_version(exe, version):
     from PyInstaller.utils.win32.versioninfo import (
         FixedFileInfo,
-        SetVersion,
         StringFileInfo,
         StringStruct,
         StringTable,

@@ -118,6 +93,11 @@ def windows_set_version(exe, version):
         VSVersionInfo,
     )

+    try:
+        from PyInstaller.utils.win32.versioninfo import SetVersion
+    except ImportError:  # Pyinstaller >= 5.8
+        from PyInstaller.utils.win32.versioninfo import write_version_info_to_executable as SetVersion
+
     version_list = version_to_list(version)
     suffix = MACHINE and f'_{MACHINE}'
     SetVersion(exe, VSVersionInfo(

5
setup.py
5
setup.py
|
@@ -92,7 +92,10 @@ def build_params():
     params = {'data_files': data_files}
 
     if setuptools_available:
-        params['entry_points'] = {'console_scripts': ['yt-dlp = yt_dlp:main']}
+        params['entry_points'] = {
+            'console_scripts': ['yt-dlp = yt_dlp:main'],
+            'pyinstaller40': ['hook-dirs = yt_dlp.__pyinstaller:get_hook_dirs'],
+        }
     else:
         params['scripts'] = ['yt-dlp']
     return params
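With the `pyinstaller40` entry point registered above, a plain `pyinstaller` run against an installed copy of yt-dlp can find the hooks without any extra flags. A rough illustration of the discovery mechanism — this is not PyInstaller's actual code, and the `entry_points(group=...)` selection API requires Python 3.10+:

```python
# Rough illustration of how a build tool can discover hook directories
# advertised through the 'pyinstaller40' entry-point group
from importlib.metadata import entry_points


def discover_hook_dirs():
    hook_dirs = []
    for ep in entry_points(group='pyinstaller40'):  # Python 3.10+ selection API
        if ep.name == 'hook-dirs':
            get_dirs = ep.load()  # e.g. yt_dlp.__pyinstaller:get_hook_dirs
            hook_dirs.extend(get_dirs())
    return hook_dirs


print(discover_hook_dirs())
```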
@@ -28,14 +28,14 @@ # Supported sites
 - **abcnews:video**
 - **abcotvs**: ABC Owned Television Stations
 - **abcotvs:clips**
-- **AbemaTV**: [<abbr title="netrc machine"><em>abematv</em></abbr>]
+- **AbemaTV**: [*abematv*](## "netrc machine")
 - **AbemaTVTitle**
 - **AcademicEarth:Course**
 - **acast**
 - **acast:channel**
 - **AcFunBangumi**
 - **AcFunVideo**
-- **ADN**: [<abbr title="netrc machine"><em>animationdigitalnetwork</em></abbr>] Animation Digital Network
+- **ADN**: [*animationdigitalnetwork*](## "netrc machine") Animation Digital Network
 - **AdobeConnect**
 - **adobetv**
 - **adobetv:channel**
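The notation change above, repeated throughout the list below, swaps raw `<abbr>` HTML for a markdown link whose `(## "netrc machine")` target GitHub renders as a tooltip; the italicised name is still the machine ID to use for `--netrc` authentication. A minimal `~/.netrc` entry for the AbemaTV example, with placeholder credentials:

```
machine abematv login YOUR_USERNAME password YOUR_PASSWORD
```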
@@ -47,8 +47,8 @@ # Supported sites
 - **aenetworks:collection**
 - **aenetworks:show**
 - **AeonCo**
-- **afreecatv**: [<abbr title="netrc machine"><em>afreecatv</em></abbr>] afreecatv.com
-- **afreecatv:live**: [<abbr title="netrc machine"><em>afreecatv</em></abbr>] afreecatv.com
+- **afreecatv**: [*afreecatv*](## "netrc machine") afreecatv.com
+- **afreecatv:live**: [*afreecatv*](## "netrc machine") afreecatv.com
 - **afreecatv:user**
 - **AirMozilla**
 - **AirTV**
@@ -59,18 +59,19 @@ # Supported sites
 - **AlphaPorno**
 - **Alsace20TV**
 - **Alsace20TVEmbed**
-- **Alura**: [<abbr title="netrc machine"><em>alura</em></abbr>]
-- **AluraCourse**: [<abbr title="netrc machine"><em>aluracourse</em></abbr>]
+- **Alura**: [*alura*](## "netrc machine")
+- **AluraCourse**: [*aluracourse*](## "netrc machine")
 - **Amara**
 - **AmazonMiniTV**
-- **amazonminitv:season**: Amazon MiniTV Series, "minitv:season:" prefix
-- **amazonminitv:series**
+- **amazonminitv:season**: Amazon MiniTV Season, "minitv:season:" prefix
+- **amazonminitv:series**: Amazon MiniTV Series, "minitv:series:" prefix
 - **AmazonReviews**
 - **AmazonStore**
 - **AMCNetworks**
 - **AmericasTestKitchen**
 - **AmericasTestKitchenSeason**
 - **AmHistoryChannel**
+- **AnchorFMEpisode**
 - **anderetijden**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl
 - **Angel**
 - **AnimalPlanet**
@@ -99,7 +100,7 @@ # Supported sites
 - **ArteTVPlaylist**
 - **AsianCrush**
 - **AsianCrushPlaylist**
-- **AtresPlayer**: [<abbr title="netrc machine"><em>atresplayer</em></abbr>]
+- **AtresPlayer**: [*atresplayer*](## "netrc machine")
 - **AtScaleConfEvent**
 - **ATTTechChannel**
 - **ATVAt**
@@ -127,15 +128,15 @@ # Supported sites
 - **Bandcamp:user**
 - **Bandcamp:weekly**
 - **BannedVideo**
-- **bbc**: [<abbr title="netrc machine"><em>bbc</em></abbr>] BBC
-- **bbc.co.uk**: [<abbr title="netrc machine"><em>bbc</em></abbr>] BBC iPlayer
+- **bbc**: [*bbc*](## "netrc machine") BBC
+- **bbc.co.uk**: [*bbc*](## "netrc machine") BBC iPlayer
 - **bbc.co.uk:article**: BBC articles
 - **bbc.co.uk:iplayer:episodes**
 - **bbc.co.uk:iplayer:group**
 - **bbc.co.uk:playlist**
-- **BBVTV**: [<abbr title="netrc machine"><em>bbvtv</em></abbr>]
-- **BBVTVLive**: [<abbr title="netrc machine"><em>bbvtv</em></abbr>]
-- **BBVTVRecordings**: [<abbr title="netrc machine"><em>bbvtv</em></abbr>]
+- **BBVTV**: [*bbvtv*](## "netrc machine")
+- **BBVTVLive**: [*bbvtv*](## "netrc machine")
+- **BBVTVRecordings**: [*bbvtv*](## "netrc machine")
 - **BeatBumpPlaylist**
 - **BeatBumpVideo**
 - **Beatport**
@@ -164,8 +165,8 @@ # Supported sites
 - **BilibiliSpaceAudio**
 - **BilibiliSpacePlaylist**
 - **BilibiliSpaceVideo**
-- **BiliIntl**: [<abbr title="netrc machine"><em>biliintl</em></abbr>]
-- **biliIntl:series**: [<abbr title="netrc machine"><em>biliintl</em></abbr>]
+- **BiliIntl**: [*biliintl*](## "netrc machine")
+- **biliIntl:series**: [*biliintl*](## "netrc machine")
 - **BiliLive**
 - **BioBioChileTV**
 - **Biography**
@@ -177,6 +178,7 @@ # Supported sites
 - **BlackboardCollaborate**
 - **BleacherReport**
 - **BleacherReportCMS**
+- **blerp**
 - **blogger.com**
 - **Bloomberg**
 - **BokeCC**
@@ -184,6 +186,7 @@ # Supported sites
 - **BooyahClips**
 - **BostonGlobe**
 - **Box**
+- **BoxCastVideo**
 - **Bpb**: Bundeszentrale für politische Bildung
 - **BR**: Bayerischer Rundfunk
 - **BravoTV**
@@ -229,7 +232,7 @@ # Supported sites
 - **cbssports:embed**
 - **CCMA**
 - **CCTV**: 央视网
-- **CDA**: [<abbr title="netrc machine"><em>cdapl</em></abbr>]
+- **CDA**: [*cdapl*](## "netrc machine")
 - **Cellebrite**
 - **CeskaTelevize**
 - **CGTN**
@@ -283,8 +286,8 @@ # Supported sites
 - **CrooksAndLiars**
 - **CrowdBunker**
 - **CrowdBunkerChannel**
-- **crunchyroll**: [<abbr title="netrc machine"><em>crunchyroll</em></abbr>]
-- **crunchyroll:playlist**: [<abbr title="netrc machine"><em>crunchyroll</em></abbr>]
+- **crunchyroll**: [*crunchyroll*](## "netrc machine")
+- **crunchyroll:playlist**: [*crunchyroll*](## "netrc machine")
 - **CSpan**: C-SPAN
 - **CSpanCongress**
 - **CtsNews**: 華視新聞
@@ -292,18 +295,18 @@ # Supported sites
 - **CTVNews**
 - **cu.ntv.co.jp**: Nippon Television Network
 - **CultureUnplugged**
-- **curiositystream**: [<abbr title="netrc machine"><em>curiositystream</em></abbr>]
-- **curiositystream:collections**: [<abbr title="netrc machine"><em>curiositystream</em></abbr>]
-- **curiositystream:series**: [<abbr title="netrc machine"><em>curiositystream</em></abbr>]
+- **curiositystream**: [*curiositystream*](## "netrc machine")
+- **curiositystream:collections**: [*curiositystream*](## "netrc machine")
+- **curiositystream:series**: [*curiositystream*](## "netrc machine")
 - **CWTV**
-- **Cybrary**: [<abbr title="netrc machine"><em>cybrary</em></abbr>]
-- **CybraryCourse**: [<abbr title="netrc machine"><em>cybrary</em></abbr>]
+- **Cybrary**: [*cybrary*](## "netrc machine")
+- **CybraryCourse**: [*cybrary*](## "netrc machine")
 - **Daftsex**
 - **DagelijkseKost**: dagelijksekost.een.be
 - **DailyMail**
-- **dailymotion**: [<abbr title="netrc machine"><em>dailymotion</em></abbr>]
-- **dailymotion:playlist**: [<abbr title="netrc machine"><em>dailymotion</em></abbr>]
-- **dailymotion:user**: [<abbr title="netrc machine"><em>dailymotion</em></abbr>]
+- **dailymotion**: [*dailymotion*](## "netrc machine")
+- **dailymotion:playlist**: [*dailymotion*](## "netrc machine")
+- **dailymotion:user**: [*dailymotion*](## "netrc machine")
 - **DailyWire**
 - **DailyWirePodcast**
 - **damtomo:record**
@@ -325,7 +328,7 @@ # Supported sites
 - **DeuxMNews**
 - **DHM**: Filmarchiv - Deutsches Historisches Museum
 - **Digg**
-- **DigitalConcertHall**: [<abbr title="netrc machine"><em>digitalconcerthall</em></abbr>] DigitalConcertHall extractor
+- **DigitalConcertHall**: [*digitalconcerthall*](## "netrc machine") DigitalConcertHall extractor
 - **DigitallySpeaking**
 - **Digiteka**
 - **Discovery**
@@ -348,7 +351,7 @@ # Supported sites
 - **DRBonanza**
 - **Drooble**
 - **Dropbox**
-- **Dropout**: [<abbr title="netrc machine"><em>dropout</em></abbr>]
+- **Dropout**: [*dropout*](## "netrc machine")
 - **DropoutSeason**
 - **DrTuber**
 - **drtv**
@@ -364,14 +367,15 @@ # Supported sites
 - **dw:article**
 - **EaglePlatform**
 - **EbaumsWorld**
+- **Ebay**
 - **EchoMsk**
 - **egghead:course**: egghead.io course
 - **egghead:lesson**: egghead.io lesson
 - **ehftv**
 - **eHow**
-- **EinsUndEinsTV**: [<abbr title="netrc machine"><em>1und1tv</em></abbr>]
-- **EinsUndEinsTVLive**: [<abbr title="netrc machine"><em>1und1tv</em></abbr>]
-- **EinsUndEinsTVRecordings**: [<abbr title="netrc machine"><em>1und1tv</em></abbr>]
+- **EinsUndEinsTV**: [*1und1tv*](## "netrc machine")
+- **EinsUndEinsTVLive**: [*1und1tv*](## "netrc machine")
+- **EinsUndEinsTVRecordings**: [*1und1tv*](## "netrc machine")
 - **Einthusan**
 - **eitb.tv**
 - **EllenTube**
@@ -386,7 +390,7 @@ # Supported sites
 - **EpiconSeries**
 - **Epoch**
 - **Eporner**
-- **EroProfile**: [<abbr title="netrc machine"><em>eroprofile</em></abbr>]
+- **EroProfile**: [*eroprofile*](## "netrc machine")
 - **EroProfile:album**
 - **ertflix**: ERTFLIX videos
 - **ertflix:codename**: ERTFLIX videos by codename
@@ -401,20 +405,20 @@ # Supported sites
 - **EuropeanTour**
 - **Eurosport**
 - **EUScreen**
-- **EWETV**: [<abbr title="netrc machine"><em>ewetv</em></abbr>]
-- **EWETVLive**: [<abbr title="netrc machine"><em>ewetv</em></abbr>]
-- **EWETVRecordings**: [<abbr title="netrc machine"><em>ewetv</em></abbr>]
+- **EWETV**: [*ewetv*](## "netrc machine")
+- **EWETVLive**: [*ewetv*](## "netrc machine")
+- **EWETVRecordings**: [*ewetv*](## "netrc machine")
 - **ExpoTV**
 - **Expressen**
 - **ExtremeTube**
 - **EyedoTV**
-- **facebook**: [<abbr title="netrc machine"><em>facebook</em></abbr>]
+- **facebook**: [*facebook*](## "netrc machine")
 - **facebook:reel**
 - **FacebookPluginsVideo**
-- **fancode:live**: [<abbr title="netrc machine"><em>fancode</em></abbr>]
-- **fancode:vod**: [<abbr title="netrc machine"><em>fancode</em></abbr>]
+- **fancode:live**: [*fancode*](## "netrc machine")
+- **fancode:vod**: [*fancode*](## "netrc machine")
 - **faz.net**
-- **fc2**: [<abbr title="netrc machine"><em>fc2</em></abbr>]
+- **fc2**: [*fc2*](## "netrc machine")
 - **fc2:embed**
 - **fc2:live**
 - **Fczenit**
@@ -448,20 +452,20 @@ # Supported sites
 - **freespeech.org**
 - **freetv:series**
 - **FreeTvMovies**
-- **FrontendMasters**: [<abbr title="netrc machine"><em>frontendmasters</em></abbr>]
-- **FrontendMastersCourse**: [<abbr title="netrc machine"><em>frontendmasters</em></abbr>]
-- **FrontendMastersLesson**: [<abbr title="netrc machine"><em>frontendmasters</em></abbr>]
+- **FrontendMasters**: [*frontendmasters*](## "netrc machine")
+- **FrontendMastersCourse**: [*frontendmasters*](## "netrc machine")
+- **FrontendMastersLesson**: [*frontendmasters*](## "netrc machine")
 - **FujiTVFODPlus7**
-- **Funimation**: [<abbr title="netrc machine"><em>funimation</em></abbr>]
-- **funimation:page**: [<abbr title="netrc machine"><em>funimation</em></abbr>]
-- **funimation:show**: [<abbr title="netrc machine"><em>funimation</em></abbr>]
+- **Funimation**: [*funimation*](## "netrc machine")
+- **funimation:page**: [*funimation*](## "netrc machine")
+- **funimation:show**: [*funimation*](## "netrc machine")
 - **Funk**
 - **Fusion**
 - **Fux**
 - **FuyinTV**
 - **Gab**
 - **GabTV**
-- **Gaia**: [<abbr title="netrc machine"><em>gaia</em></abbr>]
+- **Gaia**: [*gaia*](## "netrc machine")
 - **GameInformer**
 - **GameJolt**
 - **GameJoltCommunity**
@@ -473,9 +477,9 @@ # Supported sites
 - **GameStar**
 - **Gaskrank**
 - **Gazeta**
-- **GDCVault**: [<abbr title="netrc machine"><em>gdcvault</em></abbr>]
+- **GDCVault**: [*gdcvault*](## "netrc machine")
 - **GediDigital**
-- **gem.cbc.ca**: [<abbr title="netrc machine"><em>cbcgem</em></abbr>]
+- **gem.cbc.ca**: [*cbcgem*](## "netrc machine")
 - **gem.cbc.ca:live**
 - **gem.cbc.ca:playlist**
 - **Genius**
@@ -485,11 +489,11 @@ # Supported sites
 - **Gfycat**
 - **GiantBomb**
 - **Giga**
-- **GlattvisionTV**: [<abbr title="netrc machine"><em>glattvisiontv</em></abbr>]
-- **GlattvisionTVLive**: [<abbr title="netrc machine"><em>glattvisiontv</em></abbr>]
-- **GlattvisionTVRecordings**: [<abbr title="netrc machine"><em>glattvisiontv</em></abbr>]
+- **GlattvisionTV**: [*glattvisiontv*](## "netrc machine")
+- **GlattvisionTVLive**: [*glattvisiontv*](## "netrc machine")
+- **GlattvisionTVRecordings**: [*glattvisiontv*](## "netrc machine")
 - **Glide**: Glide mobile video messages (glide.me)
-- **Globo**: [<abbr title="netrc machine"><em>globo</em></abbr>]
+- **Globo**: [*globo*](## "netrc machine")
 - **GloboArticle**
 - **glomex**: Glomex videos
 - **glomex:embed**: Glomex embedded videos
@@ -503,7 +507,7 @@ # Supported sites
 - **google:podcasts:feed**
 - **GoogleDrive**
 - **GoogleDrive:Folder**
-- **GoPlay**: [<abbr title="netrc machine"><em>goplay</em></abbr>]
+- **GoPlay**: [*goplay*](## "netrc machine")
 - **GoPro**
 - **Goshgay**
 - **GoToStage**
@@ -523,7 +527,7 @@ # Supported sites
 - **hgtv.com:show**
 - **HGTVDe**
 - **HGTVUsa**
-- **HiDive**: [<abbr title="netrc machine"><em>hidive</em></abbr>]
+- **HiDive**: [*hidive*](## "netrc machine")
 - **HistoricFilms**
 - **history:player**
 - **history:topic**: History.com Topic
@@ -540,8 +544,8 @@ # Supported sites
 - **Howcast**
 - **HowStuffWorks**
 - **hrfernsehen**
-- **HRTi**: [<abbr title="netrc machine"><em>hrti</em></abbr>]
-- **HRTiPlaylist**: [<abbr title="netrc machine"><em>hrti</em></abbr>]
+- **HRTi**: [*hrti*](## "netrc machine")
+- **HRTiPlaylist**: [*hrti*](## "netrc machine")
 - **HSEProduct**
 - **HSEShow**
 - **html5**
@@ -571,19 +575,19 @@ # Supported sites
 - **Inc**
 - **IndavideoEmbed**
 - **InfoQ**
-- **Instagram**: [<abbr title="netrc machine"><em>instagram</em></abbr>]
-- **instagram:story**: [<abbr title="netrc machine"><em>instagram</em></abbr>]
-- **instagram:tag**: [<abbr title="netrc machine"><em>instagram</em></abbr>] Instagram hashtag search URLs
-- **instagram:user**: [<abbr title="netrc machine"><em>instagram</em></abbr>] Instagram user profile
+- **Instagram**: [*instagram*](## "netrc machine")
+- **instagram:story**: [*instagram*](## "netrc machine")
+- **instagram:tag**: [*instagram*](## "netrc machine") Instagram hashtag search URLs
+- **instagram:user**: [*instagram*](## "netrc machine") Instagram user profile
 - **InstagramIOS**: IOS instagram:// URL
 - **Internazionale**
 - **InternetVideoArchive**
 - **InvestigationDiscovery**
-- **IPrima**: [<abbr title="netrc machine"><em>iprima</em></abbr>]
+- **IPrima**: [*iprima*](## "netrc machine")
 - **IPrimaCNN**
 - **iq.com**: International version of iQiyi
 - **iq.com:album**
-- **iqiyi**: [<abbr title="netrc machine"><em>iqiyi</em></abbr>] 爱奇艺
+- **iqiyi**: [*iqiyi*](## "netrc machine") 爱奇艺
 - **IslamChannel**
 - **IslamChannelSeries**
 - **IsraelNationalNews**
@@ -595,6 +599,7 @@ # Supported sites
 - **ivi**: ivi.ru
 - **ivi:compilation**: ivi.ru compilations
 - **ivideon**: Ivideon TV
+- **IVXPlayer**
 - **Iwara**
 - **iwara:playlist**
 - **iwara:user**
@@ -626,6 +631,7 @@ # Supported sites
 - **KickVOD**
 - **KinjaEmbed**
 - **KinoPoisk**
+- **Kommunetv**
 - **KompasVideo**
 - **KonserthusetPlay**
 - **Koo**
@@ -654,9 +660,11 @@ # Supported sites
 - **LcpPlay**
 - **Le**: 乐视网
 - **Lecture2Go**
-- **Lecturio**: [<abbr title="netrc machine"><em>lecturio</em></abbr>]
-- **LecturioCourse**: [<abbr title="netrc machine"><em>lecturio</em></abbr>]
-- **LecturioDeCourse**: [<abbr title="netrc machine"><em>lecturio</em></abbr>]
+- **Lecturio**: [*lecturio*](## "netrc machine")
+- **LecturioCourse**: [*lecturio*](## "netrc machine")
+- **LecturioDeCourse**: [*lecturio*](## "netrc machine")
+- **LeFigaroVideoEmbed**
+- **LeFigaroVideoSection**
 - **LEGO**
 - **Lemonde**
 - **Lenta**
@@ -672,10 +680,10 @@ # Supported sites
 - **limelight:channel_list**
 - **LineLive**
 - **LineLiveChannel**
-- **LinkedIn**: [<abbr title="netrc machine"><em>linkedin</em></abbr>]
-- **linkedin:learning**: [<abbr title="netrc machine"><em>linkedin</em></abbr>]
-- **linkedin:learning:course**: [<abbr title="netrc machine"><em>linkedin</em></abbr>]
-- **LinuxAcademy**: [<abbr title="netrc machine"><em>linuxacademy</em></abbr>]
+- **LinkedIn**: [*linkedin*](## "netrc machine")
+- **linkedin:learning**: [*linkedin*](## "netrc machine")
+- **linkedin:learning:course**: [*linkedin*](## "netrc machine")
+- **LinuxAcademy**: [*linuxacademy*](## "netrc machine")
 - **Liputan6**
 - **ListenNotes**
 - **LiTV**
@@ -690,8 +698,9 @@ # Supported sites
 - **LoveHomePorn**
 - **LRTStream**
 - **LRTVOD**
-- **lynda**: [<abbr title="netrc machine"><em>lynda</em></abbr>] lynda.com videos
-- **lynda:course**: [<abbr title="netrc machine"><em>lynda</em></abbr>] lynda.com online courses
+- **Lumni**
+- **lynda**: [*lynda*](## "netrc machine") lynda.com videos
+- **lynda:course**: [*lynda*](## "netrc machine") lynda.com online courses
 - **m6**
 - **MagentaMusik360**
 - **mailru**: Видео@Mail.Ru
@@ -761,18 +770,19 @@ # Supported sites
 - **mixcloud:user**
 - **MLB**
 - **MLBArticle**
-- **MLBTV**: [<abbr title="netrc machine"><em>mlb</em></abbr>]
+- **MLBTV**: [*mlb*](## "netrc machine")
 - **MLBVideo**
 - **MLSSoccer**
 - **Mnet**
-- **MNetTV**: [<abbr title="netrc machine"><em>mnettv</em></abbr>]
-- **MNetTVLive**: [<abbr title="netrc machine"><em>mnettv</em></abbr>]
-- **MNetTVRecordings**: [<abbr title="netrc machine"><em>mnettv</em></abbr>]
+- **MNetTV**: [*mnettv*](## "netrc machine")
+- **MNetTVLive**: [*mnettv*](## "netrc machine")
+- **MNetTVRecordings**: [*mnettv*](## "netrc machine")
 - **MochaVideo**
 - **MoeVideo**: LetitBit video services: moevideo.net, playreplay.net and videochart.net
 - **Mofosex**
 - **MofosexEmbed**
 - **Mojvideo**
+- **MonsterSirenHypergryphMusic**
 - **Morningstar**: morningstar.com
 - **Motherless**
 - **MotherlessGroup**
@@ -845,9 +855,9 @@ # Supported sites
 - **ndr:embed**
 - **ndr:embed:base**
 - **NDTV**
-- **Nebula**: [<abbr title="netrc machine"><em>watchnebula</em></abbr>]
-- **nebula:channel**: [<abbr title="netrc machine"><em>watchnebula</em></abbr>]
-- **nebula:subscriptions**: [<abbr title="netrc machine"><em>watchnebula</em></abbr>]
+- **Nebula**: [*watchnebula*](## "netrc machine")
+- **nebula:channel**: [*watchnebula*](## "netrc machine")
+- **nebula:subscriptions**: [*watchnebula*](## "netrc machine")
 - **NerdCubedFeed**
 - **netease:album**: 网易云音乐 - 专辑
 - **netease:djradio**: 网易云音乐 - 电台
@@ -856,9 +866,9 @@ # Supported sites
 - **netease:program**: 网易云音乐 - 电台节目
 - **netease:singer**: 网易云音乐 - 歌手
 - **netease:song**: 网易云音乐
-- **NetPlusTV**: [<abbr title="netrc machine"><em>netplus</em></abbr>]
-- **NetPlusTVLive**: [<abbr title="netrc machine"><em>netplus</em></abbr>]
-- **NetPlusTVRecordings**: [<abbr title="netrc machine"><em>netplus</em></abbr>]
+- **NetPlusTV**: [*netplus*](## "netrc machine")
+- **NetPlusTVLive**: [*netplus*](## "netrc machine")
+- **NetPlusTVRecordings**: [*netplus*](## "netrc machine")
 - **Netverse**
 - **NetversePlaylist**
 - **NetverseSearch**: "netsearch:" prefix
@@ -878,6 +888,8 @@ # Supported sites
 - **NFHSNetwork**
 - **nfl.com**
 - **nfl.com:article**
+- **nfl.com:plus:episode**
+- **nfl.com:plus:replay**
 - **NhkForSchoolBangumi**
 - **NhkForSchoolProgramList**
 - **NhkForSchoolSubject**: Portal page for each school subjects, like Japanese (kokugo, 国語) or math (sansuu/suugaku or 算数・数学)
@@ -889,8 +901,8 @@ # Supported sites
 - **nickelodeon:br**
 - **nickelodeonru**
 - **nicknight**
-- **niconico**: [<abbr title="netrc machine"><em>niconico</em></abbr>] ニコニコ動画
-- **niconico:history**: NicoNico user history. Requires cookies.
+- **niconico**: [*niconico*](## "netrc machine") ニコニコ動画
+- **niconico:history**: NicoNico user history or likes. Requires cookies.
 - **niconico:playlist**
 - **niconico:series**
 - **niconico:tag**: NicoNico video tag URLs
@@ -902,7 +914,7 @@ # Supported sites
 - **Nitter**
 - **njoy**: N-JOY
 - **njoy:embed**
-- **NJPWWorld**: [<abbr title="netrc machine"><em>njpwworld</em></abbr>] 新日本プロレスワールド
+- **NJPWWorld**: [*njpwworld*](## "netrc machine") 新日本プロレスワールド
 - **NobelPrize**
 - **NoicePodcast**
 - **NonkTube**
@@ -940,6 +952,7 @@ # Supported sites
 - **NYTimesArticle**
 - **NYTimesCooking**
 - **nzherald**
+- **NZOnScreen**
 - **NZZ**
 - **ocw.mit.edu**
 - **OdaTV**
@@ -949,6 +962,7 @@ # Supported sites
 - **OktoberfestTV**
 - **OlympicsReplay**
 - **on24**: ON24
+- **OnDemandChinaEpisode**
 - **OnDemandKorea**
 - **OneFootball**
 - **OnePlacePodcast**
@@ -969,11 +983,11 @@ # Supported sites
 - **orf:iptv**: iptv.ORF.at
 - **orf:radio**
 - **orf:tvthek**: ORF TVthek
-- **OsnatelTV**: [<abbr title="netrc machine"><em>osnateltv</em></abbr>]
-- **OsnatelTVLive**: [<abbr title="netrc machine"><em>osnateltv</em></abbr>]
-- **OsnatelTVRecordings**: [<abbr title="netrc machine"><em>osnateltv</em></abbr>]
+- **OsnatelTV**: [*osnateltv*](## "netrc machine")
+- **OsnatelTVLive**: [*osnateltv*](## "netrc machine")
+- **OsnatelTVRecordings**: [*osnateltv*](## "netrc machine")
 - **OutsideTV**
-- **PacktPub**: [<abbr title="netrc machine"><em>packtpub</em></abbr>]
+- **PacktPub**: [*packtpub*](## "netrc machine")
 - **PacktPubCourse**
 - **PalcoMP3:artist**
 - **PalcoMP3:song**
@@ -996,7 +1010,7 @@ # Supported sites
 - **peer.tv**
 - **PeerTube**
 - **PeerTube:Playlist**
-- **peloton**: [<abbr title="netrc machine"><em>peloton</em></abbr>]
+- **peloton**: [*peloton*](## "netrc machine")
 - **peloton:live**: Peloton Live
 - **People**
 - **PerformGroup**
@@ -1005,7 +1019,7 @@ # Supported sites
 - **PhilharmonieDeParis**: Philharmonie de Paris
 - **phoenix.de**
 - **Photobucket**
-- **Piapro**: [<abbr title="netrc machine"><em>piapro</em></abbr>]
+- **Piapro**: [*piapro*](## "netrc machine")
 - **Picarto**
 - **PicartoVod**
 - **Piksel**
@@ -1016,11 +1030,11 @@ # Supported sites
 - **pixiv:sketch:user**
 - **Pladform**
 - **PlanetMarathi**
-- **Platzi**: [<abbr title="netrc machine"><em>platzi</em></abbr>]
-- **PlatziCourse**: [<abbr title="netrc machine"><em>platzi</em></abbr>]
+- **Platzi**: [*platzi*](## "netrc machine")
+- **PlatziCourse**: [*platzi*](## "netrc machine")
 - **play.fm**
 - **player.sky.it**
-- **PlayPlusTV**: [<abbr title="netrc machine"><em>playplustv</em></abbr>]
+- **PlayPlusTV**: [*playplustv*](## "netrc machine")
 - **PlayStuff**
 - **PlaysTV**
 - **PlaySuisse**
@@ -1028,7 +1042,7 @@ # Supported sites
 - **Playvid**
 - **PlayVids**
 - **Playwire**
-- **pluralsight**: [<abbr title="netrc machine"><em>pluralsight</em></abbr>]
+- **pluralsight**: [*pluralsight*](## "netrc machine")
 - **pluralsight:course**
 - **PlutoTV**
 - **PodbayFM**
@@ -1037,8 +1051,8 @@ # Supported sites
 - **podomatic**
 - **Pokemon**
 - **PokemonWatch**
-- **PokerGo**: [<abbr title="netrc machine"><em>pokergo</em></abbr>]
-- **PokerGoCollection**: [<abbr title="netrc machine"><em>pokergo</em></abbr>]
+- **PokerGo**: [*pokergo*](## "netrc machine")
+- **PokerGoCollection**: [*pokergo*](## "netrc machine")
 - **PolsatGo**
 - **PolskieRadio**
 - **polskieradio:audition**
@@ -1055,15 +1069,18 @@ # Supported sites
 - **Pornez**
 - **PornFlip**
 - **PornHd**
-- **PornHub**: [<abbr title="netrc machine"><em>pornhub</em></abbr>] PornHub and Thumbzilla
-- **PornHubPagedVideoList**: [<abbr title="netrc machine"><em>pornhub</em></abbr>]
-- **PornHubPlaylist**: [<abbr title="netrc machine"><em>pornhub</em></abbr>]
-- **PornHubUser**: [<abbr title="netrc machine"><em>pornhub</em></abbr>]
-- **PornHubUserVideosUpload**: [<abbr title="netrc machine"><em>pornhub</em></abbr>]
+- **PornHub**: [*pornhub*](## "netrc machine") PornHub and Thumbzilla
+- **PornHubPagedVideoList**: [*pornhub*](## "netrc machine")
+- **PornHubPlaylist**: [*pornhub*](## "netrc machine")
+- **PornHubUser**: [*pornhub*](## "netrc machine")
+- **PornHubUserVideosUpload**: [*pornhub*](## "netrc machine")
 - **Pornotube**
 - **PornoVoisines**
 - **PornoXO**
+- **PornTop**
 - **PornTube**
+- **Pr0gramm**
+- **Pr0grammStatic**
 - **PrankCast**
 - **PremiershipRugby**
 - **PressTV**
@@ -1084,9 +1101,9 @@ # Supported sites
 - **qqmusic:playlist**: QQ音乐 - 歌单
 - **qqmusic:singer**: QQ音乐 - 歌手
 - **qqmusic:toplist**: QQ音乐 - 排行榜
-- **QuantumTV**: [<abbr title="netrc machine"><em>quantumtv</em></abbr>]
-- **QuantumTVLive**: [<abbr title="netrc machine"><em>quantumtv</em></abbr>]
-- **QuantumTVRecordings**: [<abbr title="netrc machine"><em>quantumtv</em></abbr>]
+- **QuantumTV**: [*quantumtv*](## "netrc machine")
+- **QuantumTVLive**: [*quantumtv*](## "netrc machine")
+- **QuantumTVRecordings**: [*quantumtv*](## "netrc machine")
 - **Qub**
 - **R7**
 - **R7Article**
@@ -1115,6 +1132,8 @@ # Supported sites
 - **RaiSudtirol**
 - **RayWenderlich**
 - **RayWenderlichCourse**
+- **RbgTum**
+- **RbgTumCourse**
 - **RBMARadio**
 - **RCS**
 - **RCSEmbeds**
@@ -1141,15 +1160,16 @@ # Supported sites
 - **RICE**
 - **RMCDecouverte**
 - **RockstarGames**
-- **Rokfin**: [<abbr title="netrc machine"><em>rokfin</em></abbr>]
+- **Rokfin**: [*rokfin*](## "netrc machine")
 - **rokfin:channel**: Rokfin Channels
 - **rokfin:search**: Rokfin Search; "rkfnsearch:" prefix
 - **rokfin:stack**: Rokfin Stacks
-- **RoosterTeeth**: [<abbr title="netrc machine"><em>roosterteeth</em></abbr>]
-- **RoosterTeethSeries**: [<abbr title="netrc machine"><em>roosterteeth</em></abbr>]
+- **RoosterTeeth**: [*roosterteeth*](## "netrc machine")
+- **RoosterTeethSeries**: [*roosterteeth*](## "netrc machine")
 - **RottenTomatoes**
 - **Rozhlas**
-- **RTBF**: [<abbr title="netrc machine"><em>rtbf</em></abbr>]
+- **RozhlasVltava**
+- **RTBF**: [*rtbf*](## "netrc machine")
 - **RTDocumentry**
 - **RTDocumentryPlaylist**
 - **rte**: Raidió Teilifís Éireann TV
@@ -1191,16 +1211,16 @@ # Supported sites
 - **Ruutu**
 - **Ruv**
 - **ruv.is:spila**
-- **safari**: [<abbr title="netrc machine"><em>safari</em></abbr>] safaribooksonline.com online video
-- **safari:api**: [<abbr title="netrc machine"><em>safari</em></abbr>]
-- **safari:course**: [<abbr title="netrc machine"><em>safari</em></abbr>] safaribooksonline.com online courses
+- **safari**: [*safari*](## "netrc machine") safaribooksonline.com online video
+- **safari:api**: [*safari*](## "netrc machine")
+- **safari:course**: [*safari*](## "netrc machine") safaribooksonline.com online courses
 - **Saitosan**
-- **SAKTV**: [<abbr title="netrc machine"><em>saktv</em></abbr>]
-- **SAKTVLive**: [<abbr title="netrc machine"><em>saktv</em></abbr>]
-- **SAKTVRecordings**: [<abbr title="netrc machine"><em>saktv</em></abbr>]
-- **SaltTV**: [<abbr title="netrc machine"><em>salttv</em></abbr>]
-- **SaltTVLive**: [<abbr title="netrc machine"><em>salttv</em></abbr>]
-- **SaltTVRecordings**: [<abbr title="netrc machine"><em>salttv</em></abbr>]
+- **SAKTV**: [*saktv*](## "netrc machine")
+- **SAKTVLive**: [*saktv*](## "netrc machine")
+- **SAKTVRecordings**: [*saktv*](## "netrc machine")
+- **SaltTV**: [*salttv*](## "netrc machine")
+- **SaltTVLive**: [*salttv*](## "netrc machine")
+- **SaltTVRecordings**: [*salttv*](## "netrc machine")
 - **SampleFocus**
 - **Sangiin**: 参議院インターネット審議中継 (archive)
 - **Sapo**: SAPO Vídeos
@@ -1216,8 +1236,8 @@ # Supported sites
 - **ScrippsNetworks**
 - **scrippsnetworks:watch**
 - **Scrolller**
-- **SCTE**: [<abbr title="netrc machine"><em>scte</em></abbr>]
-- **SCTECourse**: [<abbr title="netrc machine"><em>scte</em></abbr>]
+- **SCTE**: [*scte*](## "netrc machine")
+- **SCTECourse**: [*scte*](## "netrc machine")
 - **Seeker**
 - **SenateGov**
 - **SenateISVP**
@@ -1226,7 +1246,7 @@ # Supported sites
 - **Sexu**
 - **SeznamZpravy**
 - **SeznamZpravyArticle**
-- **Shahid**: [<abbr title="netrc machine"><em>shahid</em></abbr>]
+- **Shahid**: [*shahid*](## "netrc machine")
 - **ShahidShow**
 - **Shared**: shared.sx
 - **ShareVideosEmbed**
@@ -1256,16 +1276,16 @@ # Supported sites
 - **Smotrim**
 - **Snotr**
 - **Sohu**
-- **SonyLIV**: [<abbr title="netrc machine"><em>sonyliv</em></abbr>]
+- **SonyLIV**: [*sonyliv*](## "netrc machine")
 - **SonyLIVSeries**
-- **soundcloud**: [<abbr title="netrc machine"><em>soundcloud</em></abbr>]
-- **soundcloud:playlist**: [<abbr title="netrc machine"><em>soundcloud</em></abbr>]
-- **soundcloud:related**: [<abbr title="netrc machine"><em>soundcloud</em></abbr>]
-- **soundcloud:search**: [<abbr title="netrc machine"><em>soundcloud</em></abbr>] Soundcloud search; "scsearch:" prefix
-- **soundcloud:set**: [<abbr title="netrc machine"><em>soundcloud</em></abbr>]
-- **soundcloud:trackstation**: [<abbr title="netrc machine"><em>soundcloud</em></abbr>]
-- **soundcloud:user**: [<abbr title="netrc machine"><em>soundcloud</em></abbr>]
-- **soundcloud:user:permalink**: [<abbr title="netrc machine"><em>soundcloud</em></abbr>]
+- **soundcloud**: [*soundcloud*](## "netrc machine")
+- **soundcloud:playlist**: [*soundcloud*](## "netrc machine")
+- **soundcloud:related**: [*soundcloud*](## "netrc machine")
+- **soundcloud:search**: [*soundcloud*](## "netrc machine") Soundcloud search; "scsearch:" prefix
+- **soundcloud:set**: [*soundcloud*](## "netrc machine")
+- **soundcloud:trackstation**: [*soundcloud*](## "netrc machine")
+- **soundcloud:user**: [*soundcloud*](## "netrc machine")
+- **soundcloud:user:permalink**: [*soundcloud*](## "netrc machine")
 - **SoundcloudEmbed**
 - **soundgasm**
 - **soundgasm:profile**
@@ -1332,13 +1352,13 @@ # Supported sites
 - **Tass**
 - **TBS**
 - **TDSLifeway**
-- **Teachable**: [<abbr title="netrc machine"><em>teachable</em></abbr>]
-- **TeachableCourse**: [<abbr title="netrc machine"><em>teachable</em></abbr>]
+- **Teachable**: [*teachable*](## "netrc machine")
+- **TeachableCourse**: [*teachable*](## "netrc machine")
 - **teachertube**: teachertube.com videos
 - **teachertube:user:collection**: teachertube.com user and collection videos
 - **TeachingChannel**
 - **Teamcoco**
-- **TeamTreeHouse**: [<abbr title="netrc machine"><em>teamtreehouse</em></abbr>]
+- **TeamTreeHouse**: [*teamtreehouse*](## "netrc machine")
 - **TechTalks**
 - **techtv.mit.edu**
 - **TedEmbed**
@@ -1348,6 +1368,7 @@ # Supported sites
 - **Tele13**
 - **Tele5**
 - **TeleBruxelles**
+- **TelecaribePlay**
 - **Telecinco**: telecinco.es, cuatro.com and mediaset.es
 - **Telegraaf**
 - **telegram:embed**
@@ -1361,8 +1382,8 @@ # Supported sites
 - **TeleTask**
 - **Telewebion**
 - **Tempo**
-- **TennisTV**: [<abbr title="netrc machine"><em>tennistv</em></abbr>]
-- **TenPlay**: [<abbr title="netrc machine"><em>10play</em></abbr>]
+- **TennisTV**: [*tennistv*](## "netrc machine")
+- **TenPlay**: [*10play*](## "netrc machine")
 - **TF1**
 - **TFO**
 - **TheHoleTv**
@@ -1400,13 +1421,13 @@ # Supported sites
 - **tokfm:audition**
 - **tokfm:podcast**
 - **ToonGoggles**
-- **tou.tv**: [<abbr title="netrc machine"><em>toutv</em></abbr>]
+- **tou.tv**: [*toutv*](## "netrc machine")
 - **Toypics**: Toypics video
 - **ToypicsUser**: Toypics user profile
 - **TrailerAddict**: (**Currently broken**)
 - **TravelChannel**
-- **Triller**: [<abbr title="netrc machine"><em>triller</em></abbr>]
-- **TrillerUser**: [<abbr title="netrc machine"><em>triller</em></abbr>]
+- **Triller**: [*triller*](## "netrc machine")
+- **TrillerUser**: [*triller*](## "netrc machine")
 - **Trilulilu**
 - **Trovo**
 - **TrovoChannelClip**: All Clips of a trovo.live channel; "trovoclip:" prefix
@@ -1418,15 +1439,14 @@ # Supported sites
 - **Truth**
 - **TruTV**
 - **Tube8**
-- **TubeTuGraz**: [<abbr title="netrc machine"><em>tubetugraz</em></abbr>] tube.tugraz.at
-- **TubeTuGrazSeries**: [<abbr title="netrc machine"><em>tubetugraz</em></abbr>]
-- **TubiTv**: [<abbr title="netrc machine"><em>tubitv</em></abbr>]
+- **TubeTuGraz**: [*tubetugraz*](## "netrc machine") tube.tugraz.at
+- **TubeTuGrazSeries**: [*tubetugraz*](## "netrc machine")
+- **TubiTv**: [*tubitv*](## "netrc machine")
 - **TubiTvShow**
-- **Tumblr**: [<abbr title="netrc machine"><em>tumblr</em></abbr>]
-- **tunein:clip**
-- **tunein:program**
-- **tunein:station**
-- **tunein:topic**
+- **Tumblr**: [*tumblr*](## "netrc machine")
+- **TuneInPodcast**
+- **TuneInPodcastEpisode**
+- **TuneInStation**
 - **TunePk**
 - **Turbo**
 - **tv.dfb.de**
@@ -1472,24 +1492,25 @@ # Supported sites
 - **TwitCasting**
 - **TwitCastingLive**
 - **TwitCastingUser**
-- **twitch:clips**: [<abbr title="netrc machine"><em>twitch</em></abbr>]
-- **twitch:stream**: [<abbr title="netrc machine"><em>twitch</em></abbr>]
-- **twitch:vod**: [<abbr title="netrc machine"><em>twitch</em></abbr>]
-- **TwitchCollection**: [<abbr title="netrc machine"><em>twitch</em></abbr>]
-- **TwitchVideos**: [<abbr title="netrc machine"><em>twitch</em></abbr>]
-- **TwitchVideosClips**: [<abbr title="netrc machine"><em>twitch</em></abbr>]
-- **TwitchVideosCollections**: [<abbr title="netrc machine"><em>twitch</em></abbr>]
+- **twitch:clips**: [*twitch*](## "netrc machine")
+- **twitch:stream**: [*twitch*](## "netrc machine")
+- **twitch:vod**: [*twitch*](## "netrc machine")
+- **TwitchCollection**: [*twitch*](## "netrc machine")
+- **TwitchVideos**: [*twitch*](## "netrc machine")
+- **TwitchVideosClips**: [*twitch*](## "netrc machine")
+- **TwitchVideosCollections**: [*twitch*](## "netrc machine")
 - **twitter**
 - **twitter:amplify**
 - **twitter:broadcast**
 - **twitter:card**
 - **twitter:shortener**
 - **twitter:spaces**
-- **udemy**: [<abbr title="netrc machine"><em>udemy</em></abbr>]
-- **udemy:course**: [<abbr title="netrc machine"><em>udemy</em></abbr>]
+- **Txxx**
+- **udemy**: [*udemy*](## "netrc machine")
+- **udemy:course**: [*udemy*](## "netrc machine")
 - **UDNEmbed**: 聯合影音
-- **UFCArabia**: [<abbr title="netrc machine"><em>ufcarabia</em></abbr>]
-- **UFCTV**: [<abbr title="netrc machine"><em>ufctv</em></abbr>]
+- **UFCArabia**: [*ufcarabia*](## "netrc machine")
+- **UFCTV**: [*ufctv*](## "netrc machine")
 - **ukcolumn**
 - **UKTVPlay**
 - **umg:de**: Universal Music Deutschland
@@ -1519,7 +1540,7 @@ # Supported sites
 - **VevoPlaylist**
 - **VGTV**: VGTV, BTTV, FTV, Aftenposten and Aftonbladet
 - **vh1.com**
-- **vhx:embed**: [<abbr title="netrc machine"><em>vimeo</em></abbr>]
+- **vhx:embed**: [*vimeo*](## "netrc machine")
 - **Viafree**
 - **vice**
 - **vice:article**
@@ -1542,25 +1563,25 @@ # Supported sites
 - **videomore:season**
 - **videomore:video**
 - **VideoPress**
-- **Vidio**: [<abbr title="netrc machine"><em>vidio</em></abbr>]
-- **VidioLive**: [<abbr title="netrc machine"><em>vidio</em></abbr>]
-- **VidioPremier**: [<abbr title="netrc machine"><em>vidio</em></abbr>]
+- **Vidio**: [*vidio*](## "netrc machine")
+- **VidioLive**: [*vidio*](## "netrc machine")
+- **VidioPremier**: [*vidio*](## "netrc machine")
 - **VidLii**
 - **viewlift**
 - **viewlift:embed**
 - **Viidea**
-- **viki**: [<abbr title="netrc machine"><em>viki</em></abbr>]
-- **viki:channel**: [<abbr title="netrc machine"><em>viki</em></abbr>]
-- **vimeo**: [<abbr title="netrc machine"><em>vimeo</em></abbr>]
-- **vimeo:album**: [<abbr title="netrc machine"><em>vimeo</em></abbr>]
-- **vimeo:channel**: [<abbr title="netrc machine"><em>vimeo</em></abbr>]
-- **vimeo:group**: [<abbr title="netrc machine"><em>vimeo</em></abbr>]
-- **vimeo:likes**: [<abbr title="netrc machine"><em>vimeo</em></abbr>] Vimeo user likes
-- **vimeo:ondemand**: [<abbr title="netrc machine"><em>vimeo</em></abbr>]
-- **vimeo:pro**: [<abbr title="netrc machine"><em>vimeo</em></abbr>]
-- **vimeo:review**: [<abbr title="netrc machine"><em>vimeo</em></abbr>] Review pages on vimeo
-- **vimeo:user**: [<abbr title="netrc machine"><em>vimeo</em></abbr>]
-- **vimeo:watchlater**: [<abbr title="netrc machine"><em>vimeo</em></abbr>] Vimeo watch later list, ":vimeowatchlater" keyword (requires authentication)
+- **viki**: [*viki*](## "netrc machine")
+- **viki:channel**: [*viki*](## "netrc machine")
+- **vimeo**: [*vimeo*](## "netrc machine")
+- **vimeo:album**: [*vimeo*](## "netrc machine")
+- **vimeo:channel**: [*vimeo*](## "netrc machine")
+- **vimeo:group**: [*vimeo*](## "netrc machine")
+- **vimeo:likes**: [*vimeo*](## "netrc machine") Vimeo user likes
+- **vimeo:ondemand**: [*vimeo*](## "netrc machine")
+- **vimeo:pro**: [*vimeo*](## "netrc machine")
+- **vimeo:review**: [*vimeo*](## "netrc machine") Review pages on vimeo
+- **vimeo:user**: [*vimeo*](## "netrc machine")
+- **vimeo:watchlater**: [*vimeo*](## "netrc machine") Vimeo watch later list, ":vimeowatchlater" keyword (requires authentication)
 - **Vimm:recording**
 - **Vimm:stream**
 - **ViMP**
@@ -1570,16 +1591,15 @@ # Supported sites
 - **vine:user**
 - **Viqeo**
 - **Viu**
-- **viu:ott**: [<abbr title="netrc machine"><em>viu</em></abbr>]
+- **viu:ott**: [*viu*](## "netrc machine")
 - **viu:playlist**
+- **ViuOTTIndonesia**
 - **Vivo**: vivo.sx
-- **vk**: [<abbr title="netrc machine"><em>vk</em></abbr>] VK
-- **vk:uservideos**: [<abbr title="netrc machine"><em>vk</em></abbr>] VK - User's Videos
-- **vk:wallpost**: [<abbr title="netrc machine"><em>vk</em></abbr>]
-- **vlive**: [<abbr title="netrc machine"><em>vlive</em></abbr>]
-- **vlive:channel**: [<abbr title="netrc machine"><em>vlive</em></abbr>]
-- **vlive:post**: [<abbr title="netrc machine"><em>vlive</em></abbr>]
+- **vk**: [*vk*](## "netrc machine") VK
+- **vk:uservideos**: [*vk*](## "netrc machine") VK - User's Videos
+- **vk:wallpost**: [*vk*](## "netrc machine")
 - **vm.tiktok**
+- **Vocaroo**
 - **Vodlocker**
 - **VODPl**
 - **VODPlatform**
@@ -1596,14 +1616,14 @@ # Supported sites
 - **vqq:video**
 - **Vrak**
 - **VRT**: VRT NWS, Flanders News, Flandern Info and Sporza
-- **VrtNU**: [<abbr title="netrc machine"><em>vrtnu</em></abbr>] VrtNU.be
-- **vrv**: [<abbr title="netrc machine"><em>vrv</em></abbr>]
+- **VrtNU**: [*vrtnu*](## "netrc machine") VrtNU.be
+- **vrv**: [*vrv*](## "netrc machine")
 - **vrv:series**
 - **VShare**
 - **VTM**
-- **VTXTV**: [<abbr title="netrc machine"><em>vtxtv</em></abbr>]
-- **VTXTVLive**: [<abbr title="netrc machine"><em>vtxtv</em></abbr>]
-- **VTXTVRecordings**: [<abbr title="netrc machine"><em>vtxtv</em></abbr>]
+- **VTXTV**: [*vtxtv*](## "netrc machine")
+- **VTXTVLive**: [*vtxtv*](## "netrc machine")
+- **VTXTVRecordings**: [*vtxtv*](## "netrc machine")
 - **VuClip**
 - **Vupload**
 - **VVVVID**
@@ -1612,9 +1632,9 @@ # Supported sites
 - **Vzaar**
 - **Wakanim**
 - **Walla**
-- **WalyTV**: [<abbr title="netrc machine"><em>walytv</em></abbr>]
-- **WalyTVLive**: [<abbr title="netrc machine"><em>walytv</em></abbr>]
-- **WalyTVRecordings**: [<abbr title="netrc machine"><em>walytv</em></abbr>]
+- **WalyTV**: [*walytv*](## "netrc machine")
+- **WalyTVLive**: [*walytv*](## "netrc machine")
+- **WalyTVRecordings**: [*walytv*](## "netrc machine")
 - **wasdtv:clip**
 - **wasdtv:record**
 - **wasdtv:stream**
@@ -1628,6 +1648,7 @@ # Supported sites
 - **wdr:mobile**: (**Currently broken**)
 - **WDRElefant**
 - **WDRPage**
+- **web.archive:vlive**: web.archive.org saved vlive videos
 - **web.archive:youtube**: web.archive.org saved youtube videos, "ytarchive:" prefix
 - **Webcamerapl**
 - **Webcaster**
@@ -1653,6 +1674,8 @@ # Supported sites
 - **WorldStarHipHop**
 - **wppilot**
 - **wppilot:channels**
+- **WrestleUniversePPV**
+- **WrestleUniverseVOD**
 - **WSJ**: Wall Street Journal
 - **WSJArticle**
 - **WWE**
@@ -1675,6 +1698,7 @@ # Supported sites
 - **XTubeUser**: XTube user profile
 - **Xuite**: 隨意窩Xuite影音
 - **XVideos**
+- **xvideos:quickies**
 - **XXXYMovies**
 - **Yahoo**: Yahoo screen and movies
 - **yahoo:gyao**
@@ -1689,6 +1713,7 @@ # Supported sites
 - **YandexVideo**
 - **YandexVideoPreview**
 - **YapFiles**
+- **Yappy**
 - **YesJapan**
 - **yinyuetai:video**: 音悦Tai
 - **YleAreena**
@@ -1722,13 +1747,13 @@ # Supported sites
 - **YoutubeLivestreamEmbed**: YouTube livestream embeds
 - **YoutubeYtBe**: youtu.be
 - **Zapiks**
-- **Zattoo**: [<abbr title="netrc machine"><em>zattoo</em></abbr>]
-- **ZattooLive**: [<abbr title="netrc machine"><em>zattoo</em></abbr>]
-- **ZattooMovies**: [<abbr title="netrc machine"><em>zattoo</em></abbr>]
-- **ZattooRecordings**: [<abbr title="netrc machine"><em>zattoo</em></abbr>]
+- **Zattoo**: [*zattoo*](## "netrc machine")
+- **ZattooLive**: [*zattoo*](## "netrc machine")
+- **ZattooMovies**: [*zattoo*](## "netrc machine")
+- **ZattooRecordings**: [*zattoo*](## "netrc machine")
 - **ZDF**
 - **ZDFChannel**
-- **Zee5**: [<abbr title="netrc machine"><em>zee5</em></abbr>]
+- **Zee5**: [*zee5*](## "netrc machine")
 - **zee5:series**
 - **ZeeNews**
 - **ZenYandex**
@@ -69,6 +69,7 @@ def test_opengraph(self):
             <meta name="og:test1" content='foo > < bar'/>
             <meta name="og:test2" content="foo >//< bar"/>
             <meta property=og-test3 content='Ill-formatted opengraph'/>
+            <meta property=og:test4 content=unquoted-value/>
             '''
         self.assertEqual(ie._og_search_title(html), 'Foo')
         self.assertEqual(ie._og_search_description(html), 'Some video\'s description ')

@@ -81,6 +82,7 @@ def test_opengraph(self):
         self.assertEqual(ie._og_search_property(('test0', 'test1'), html), 'foo > < bar')
         self.assertRaises(RegexNotFoundError, ie._og_search_property, 'test0', html, None, fatal=True)
         self.assertRaises(RegexNotFoundError, ie._og_search_property, ('test0', 'test00'), html, None, fatal=True)
+        self.assertEqual(ie._og_search_property('test4', html), 'unquoted-value')

     def test_html_search_meta(self):
         ie = self.ie
@@ -26,7 +26,7 @@
     key_expansion,
     pad_block,
 )
-from yt_dlp.dependencies import Cryptodome_AES
+from yt_dlp.dependencies import Cryptodome
 from yt_dlp.utils import bytes_to_intlist, intlist_to_bytes

 # the encrypted data can be generate with 'devscripts/generate_aes_testdata.py'

@@ -48,7 +48,7 @@ def test_cbc_decrypt(self):
         data = b'\x97\x92+\xe5\x0b\xc3\x18\x91ky9m&\xb3\xb5@\xe6\x27\xc2\x96.\xc8u\x88\xab9-[\x9e|\xf1\xcd'
         decrypted = intlist_to_bytes(aes_cbc_decrypt(bytes_to_intlist(data), self.key, self.iv))
         self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
-        if Cryptodome_AES:
+        if Cryptodome.AES:
             decrypted = aes_cbc_decrypt_bytes(data, intlist_to_bytes(self.key), intlist_to_bytes(self.iv))
             self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)

@@ -78,7 +78,7 @@ def test_gcm_decrypt(self):
         decrypted = intlist_to_bytes(aes_gcm_decrypt_and_verify(
             bytes_to_intlist(data), self.key, bytes_to_intlist(authentication_tag), self.iv[:12]))
         self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
-        if Cryptodome_AES:
+        if Cryptodome.AES:
             decrypted = aes_gcm_decrypt_and_verify_bytes(
                 data, intlist_to_bytes(self.key), authentication_tag, intlist_to_bytes(self.iv[:12]))
             self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
@@ -10,6 +10,7 @@

 from test.helper import is_download_test, try_rm
 from yt_dlp import YoutubeDL
+from yt_dlp.utils import DownloadError


 def _download_restricted(url, filename, age):

@@ -25,10 +26,14 @@ def _download_restricted(url, filename, age):
     ydl.add_default_info_extractors()
     json_filename = os.path.splitext(filename)[0] + '.info.json'
     try_rm(json_filename)
-    ydl.download([url])
-    res = os.path.exists(json_filename)
-    try_rm(json_filename)
-    return res
+    try:
+        ydl.download([url])
+    except DownloadError:
+        pass
+    else:
+        return os.path.exists(json_filename)
+    finally:
+        try_rm(json_filename)


 @is_download_test

@@ -38,12 +43,12 @@ def _assert_restricted(self, url, filename, age, old_age=None):
         self.assertFalse(_download_restricted(url, filename, age))

     def test_youtube(self):
-        self._assert_restricted('07FYdnEawAQ', '07FYdnEawAQ.mp4', 10)
+        self._assert_restricted('HtVdAasjOgU', 'HtVdAasjOgU.mp4', 10)

     def test_youporn(self):
         self._assert_restricted(
-            'http://www.youporn.com/watch/505835/sex-ed-is-it-safe-to-masturbate-daily/',
-            '505835.mp4', 2, old_age=25)
+            'https://www.youporn.com/watch/16715086/sex-ed-in-detention-18-asmr/',
+            '16715086.mp4', 2, old_age=25)


 if __name__ == '__main__':
@@ -31,6 +31,9 @@ def test_compat_passthrough(self):
         # TODO: Test submodule
         # compat.asyncio.events  # Must not raise error

+        with self.assertWarns(DeprecationWarning):
+            compat.compat_pycrypto_AES  # Must not raise error
+
     def test_compat_expanduser(self):
         old_home = os.environ.get('HOME')
         test_str = R'C:\Documents and Settings\тест\Application Data'
@@ -155,6 +155,38 @@ def test_call(self):
         self.assertEqual(jsi.call_function('z'), 5)
         self.assertEqual(jsi.call_function('y'), 2)

+    def test_if(self):
+        jsi = JSInterpreter('''
+        function x() {
+            let a = 9;
+            if (0==0) {a++}
+            return a
+        }''')
+        self.assertEqual(jsi.call_function('x'), 10)
+
+        jsi = JSInterpreter('''
+        function x() {
+            if (0==0) {return 10}
+        }''')
+        self.assertEqual(jsi.call_function('x'), 10)
+
+        jsi = JSInterpreter('''
+        function x() {
+            if (0!=0) {return 1}
+            else {return 10}
+        }''')
+        self.assertEqual(jsi.call_function('x'), 10)
+
+        """  # Unsupported
+        jsi = JSInterpreter('''
+        function x() {
+            if (0!=0) {return 1}
+            else if (1==0) {return 2}
+            else {return 10}
+        }''')
+        self.assertEqual(jsi.call_function('x'), 10)
+        """
+
     def test_for_loop(self):
         jsi = JSInterpreter('''
         function x() { a=0; for (i=0; i-10; i++) {a++} return a }
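For orientation, the interpreter these tests exercise can be driven directly; a minimal sketch (the function body below is illustrative, not taken from the diff):

    from yt_dlp.jsinterp import JSInterpreter

    jsi = JSInterpreter("function f(x) { if (x == 0) { return 'zero' } else { return 'nonzero' } }")
    assert jsi.call_function('f', 0) == 'zero'      # if-branch, as exercised by test_if above
    assert jsi.call_function('f', 1) == 'nonzero'   # else-branch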
@@ -105,6 +105,7 @@
     sanitized_Request,
     shell_quote,
     smuggle_url,
+    str_or_none,
     str_to_int,
     strip_jsonp,
     strip_or_none,
@@ -1999,8 +2000,8 @@ def test_traverse_obj(self):

         # Test Ellipsis behavior
         self.assertCountEqual(traverse_obj(_TEST_DATA, ...),
-                              (item for item in _TEST_DATA.values() if item is not None),
-                              msg='`...` should give all values except `None`')
+                              (item for item in _TEST_DATA.values() if item not in (None, {})),
+                              msg='`...` should give all non discarded values')
         self.assertCountEqual(traverse_obj(_TEST_DATA, ('urls', 0, ...)), _TEST_DATA['urls'][0].values(),
                               msg='`...` selection for dicts should select all values')
         self.assertEqual(traverse_obj(_TEST_DATA, (..., ..., 'url')),

@@ -2015,6 +2016,29 @@ def test_traverse_obj(self):
                          msg='function as query key should perform a filter based on (key, value)')
         self.assertCountEqual(traverse_obj(_TEST_DATA, lambda _, x: isinstance(x[0], str)), {'str'},
                               msg='exceptions in the query function should be catched')
+        if __debug__:
+            with self.assertRaises(Exception, msg='Wrong function signature should raise in debug'):
+                traverse_obj(_TEST_DATA, lambda a: ...)
+            with self.assertRaises(Exception, msg='Wrong function signature should raise in debug'):
+                traverse_obj(_TEST_DATA, lambda a, b, c: ...)
+
+        # Test set as key (transformation/type, like `expected_type`)
+        self.assertEqual(traverse_obj(_TEST_DATA, (..., {str.upper}, )), ['STR'],
+                         msg='Function in set should be a transformation')
+        self.assertEqual(traverse_obj(_TEST_DATA, (..., {str})), ['str'],
+                         msg='Type in set should be a type filter')
+        self.assertEqual(traverse_obj(_TEST_DATA, {dict}), _TEST_DATA,
+                         msg='A single set should be wrapped into a path')
+        self.assertEqual(traverse_obj(_TEST_DATA, (..., {str.upper})), ['STR'],
+                         msg='Transformation function should not raise')
+        self.assertEqual(traverse_obj(_TEST_DATA, (..., {str_or_none})),
+                         [item for item in map(str_or_none, _TEST_DATA.values()) if item is not None],
+                         msg='Function in set should be a transformation')
+        if __debug__:
+            with self.assertRaises(Exception, msg='Sets with length != 1 should raise in debug'):
+                traverse_obj(_TEST_DATA, set())
+            with self.assertRaises(Exception, msg='Sets with length != 1 should raise in debug'):
+                traverse_obj(_TEST_DATA, {str.upper, str})
+
         # Test alternative paths
         self.assertEqual(traverse_obj(_TEST_DATA, 'fail', 'str'), 'str',

@@ -2060,15 +2084,23 @@ def test_traverse_obj(self):
                          {0: ['https://www.example.com/1', 'https://www.example.com/0']},
                          msg='tripple nesting in dict path should be treated as branches')
         self.assertEqual(traverse_obj(_TEST_DATA, {0: 'fail'}), {},
-                         msg='remove `None` values when dict key')
+                         msg='remove `None` values when top level dict key fails')
         self.assertEqual(traverse_obj(_TEST_DATA, {0: 'fail'}, default=...), {0: ...},
-                         msg='do not remove `None` values if `default`')
-        self.assertEqual(traverse_obj(_TEST_DATA, {0: 'dict'}), {0: {}},
-                         msg='do not remove empty values when dict key')
-        self.assertEqual(traverse_obj(_TEST_DATA, {0: 'dict'}, default=...), {0: {}},
-                         msg='do not remove empty values when dict key and a default')
-        self.assertEqual(traverse_obj(_TEST_DATA, {0: ('dict', ...)}), {0: []},
-                         msg='if branch in dict key not successful, return `[]`')
+                         msg='use `default` if key fails and `default`')
+        self.assertEqual(traverse_obj(_TEST_DATA, {0: 'dict'}), {},
+                         msg='remove empty values when dict key')
+        self.assertEqual(traverse_obj(_TEST_DATA, {0: 'dict'}, default=...), {0: ...},
+                         msg='use `default` when dict key and `default`')
+        self.assertEqual(traverse_obj(_TEST_DATA, {0: {0: 'fail'}}), {},
+                         msg='remove empty values when nested dict key fails')
+        self.assertEqual(traverse_obj(None, {0: 'fail'}), {},
+                         msg='default to dict if pruned')
+        self.assertEqual(traverse_obj(None, {0: 'fail'}, default=...), {0: ...},
+                         msg='default to dict if pruned and default is given')
+        self.assertEqual(traverse_obj(_TEST_DATA, {0: {0: 'fail'}}, default=...), {0: {0: ...}},
+                         msg='use nested `default` when nested dict key fails and `default`')
+        self.assertEqual(traverse_obj(_TEST_DATA, {0: ('dict', ...)}), {},
+                         msg='remove key if branch in dict key not successful')

         # Testing default parameter behavior
         _DEFAULT_DATA = {'None': None, 'int': 0, 'list': []}

@@ -2092,20 +2124,55 @@ def test_traverse_obj(self):
                          msg='if branched but not successful return `[]`, not `default`')
         self.assertEqual(traverse_obj(_DEFAULT_DATA, ('list', ...)), [],
                          msg='if branched but object is empty return `[]`, not `default`')
+        self.assertEqual(traverse_obj(None, ...), [],
+                         msg='if branched but object is `None` return `[]`, not `default`')
+        self.assertEqual(traverse_obj({0: None}, (0, ...)), [],
+                         msg='if branched but state is `None` return `[]`, not `default`')
+
+        branching_paths = [
+            ('fail', ...),
+            (..., 'fail'),
+            100 * ('fail',) + (...,),
+            (...,) + 100 * ('fail',),
+        ]
+        for branching_path in branching_paths:
+            self.assertEqual(traverse_obj({}, branching_path), [],
+                             msg='if branched but state is `None`, return `[]` (not `default`)')
+            self.assertEqual(traverse_obj({}, 'fail', branching_path), [],
+                             msg='if branching in last alternative and previous did not match, return `[]` (not `default`)')
+            self.assertEqual(traverse_obj({0: 'x'}, 0, branching_path), 'x',
+                             msg='if branching in last alternative and previous did match, return single value')
+            self.assertEqual(traverse_obj({0: 'x'}, branching_path, 0), 'x',
+                             msg='if branching in first alternative and non-branching path does match, return single value')
+            self.assertEqual(traverse_obj({}, branching_path, 'fail'), None,
+                             msg='if branching in first alternative and non-branching path does not match, return `default`')

         # Testing expected_type behavior
         _EXPECTED_TYPE_DATA = {'str': 'str', 'int': 0}
-        self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=str), 'str',
-                         msg='accept matching `expected_type` type')
-        self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=int), None,
-                         msg='reject non matching `expected_type` type')
-        self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'int', expected_type=lambda x: str(x)), '0',
-                         msg='transform type using type function')
-        self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str',
-                         expected_type=lambda _: 1 / 0), None,
-                         msg='wrap expected_type fuction in try_call')
-        self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, ..., expected_type=str), ['str'],
-                         msg='eliminate items that expected_type fails on')
+        self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=str),
+                         'str', msg='accept matching `expected_type` type')
+        self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=int),
+                         None, msg='reject non matching `expected_type` type')
+        self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'int', expected_type=lambda x: str(x)),
+                         '0', msg='transform type using type function')
+        self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=lambda _: 1 / 0),
+                         None, msg='wrap expected_type fuction in try_call')
+        self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, ..., expected_type=str),
+                         ['str'], msg='eliminate items that expected_type fails on')
+        self.assertEqual(traverse_obj(_TEST_DATA, {0: 100, 1: 1.2}, expected_type=int),
+                         {0: 100}, msg='type as expected_type should filter dict values')
+        self.assertEqual(traverse_obj(_TEST_DATA, {0: 100, 1: 1.2, 2: 'None'}, expected_type=str_or_none),
+                         {0: '100', 1: '1.2'}, msg='function as expected_type should transform dict values')
+        self.assertEqual(traverse_obj(_TEST_DATA, ({0: 1.2}, 0, {int_or_none}), expected_type=int),
+                         1, msg='expected_type should not filter non final dict values')
+        self.assertEqual(traverse_obj(_TEST_DATA, {0: {0: 100, 1: 'str'}}, expected_type=int),
+                         {0: {0: 100}}, msg='expected_type should transform deep dict values')
+        self.assertEqual(traverse_obj(_TEST_DATA, [({0: '...'}, {0: '...'})], expected_type=type(...)),
+                         [{0: ...}, {0: ...}], msg='expected_type should transform branched dict values')
+        self.assertEqual(traverse_obj({1: {3: 4}}, [(1, 2), 3], expected_type=int),
+                         [4], msg='expected_type regression for type matching in tuple branching')
+        self.assertEqual(traverse_obj(_TEST_DATA, ['data', ...], expected_type=int),
+                         [], msg='expected_type regression for type matching in dict result')

         # Test get_all behavior
         _GET_ALL_DATA = {'key': [0, 1, 2]}

@@ -2145,14 +2212,17 @@ def test_traverse_obj(self):
                                       traverse_string=True), '.',
                          msg='traverse into converted data if `traverse_string`')
         self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', ...),
-                                      traverse_string=True), list('str'),
-                         msg='`...` branching into string should result in list')
+                                      traverse_string=True), 'str',
+                         msg='`...` should result in string (same value) if `traverse_string`')
+        self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', slice(0, None, 2)),
+                                      traverse_string=True), 'sr',
+                         msg='`slice` should result in string if `traverse_string`')
+        self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', lambda i, v: i or v == "s"),
+                                      traverse_string=True), 'str',
+                         msg='function should result in string if `traverse_string`')
         self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', (0, 2)),
                                       traverse_string=True), ['s', 'r'],
-                         msg='branching into string should result in list')
-        self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', lambda _, x: x),
-                                      traverse_string=True), list('str'),
-                         msg='function branching into string should result in list')
+                         msg='branching should result in list if `traverse_string`')

         # Test is_user_input behavior
         _IS_USER_INPUT_DATA = {'range8': list(range(8))}

@@ -2189,6 +2259,8 @@ def test_traverse_obj(self):
                          msg='failing str key on a `re.Match` should return `default`')
         self.assertEqual(traverse_obj(mobj, 8), None,
                          msg='failing int key on a `re.Match` should return `default`')
+        self.assertEqual(traverse_obj(mobj, lambda k, _: k in (0, 'group')), ['0123', '3'],
+                         msg='function on a `re.Match` should give group name as well')


 if __name__ == '__main__':
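The behaviours pinned down above can be tried standalone; a small sketch with illustrative data (the `...` and one-element-set semantics are exactly those asserted by the tests):

    from yt_dlp.utils import traverse_obj

    data = {'formats': [{'url': 'https://example.com/0.mp4'}, {'url': None}, {}]}
    # `...` branches over every list item; `None` results are discarded
    assert traverse_obj(data, ('formats', ..., 'url')) == ['https://example.com/0.mp4']
    # a one-element set is a transformation (callable) or a type filter (type)
    assert traverse_obj({'n': 'abc'}, ('n', {str.upper})) == 'ABC'  # transformation
    assert traverse_obj({'n': 'abc'}, ('n', {int})) is None         # type filter rejects str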
@@ -66,6 +66,10 @@
 ]

 _NSIG_TESTS = [
+    (
+        'https://www.youtube.com/s/player/7862ca1f/player_ias.vflset/en_US/base.js',
+        'X_LCxVDjAavgE5t', 'yxJ1dM6iz5ogUg',
+    ),
     (
         'https://www.youtube.com/s/player/9216d1f7/player_ias.vflset/en_US/base.js',
         'SLp9F5bwjAdhE9F-', 'gWnb9IK2DJ8Q1w',

@@ -134,6 +138,10 @@
         'https://www.youtube.com/s/player/7a062b77/player_ias.vflset/en_US/base.js',
         'NRcE3y3mVtm_cV-W', 'VbsCYUATvqlt5w',
     ),
+    (
+        'https://www.youtube.com/s/player/dac945fd/player_ias.vflset/en_US/base.js',
+        'o8BkRxXhuYsBCWi6RplPdP', '3Lx32v_hmzTm6A',
+    ),
 ]
@@ -156,7 +156,7 @@
     write_json_file,
     write_string,
 )
-from .version import RELEASE_GIT_HEAD, VARIANT, __version__
+from .version import CHANNEL, RELEASE_GIT_HEAD, VARIANT, __version__

 if compat_os_name == 'nt':
     import ctypes
@@ -306,8 +306,6 @@ class YoutubeDL:
                        Videos already present in the file are not downloaded again.
     break_on_existing: Stop the download process after attempting to download a
                        file that is in the archive.
-    break_on_reject:   Stop the download process when encountering a video that
-                       has been filtered out.
     break_per_url:     Whether break_on_reject and break_on_existing
                        should act on each input URL as opposed to for the entire queue
     cookiefile:        File name or text stream from where cookies should be read and dumped to

@@ -420,6 +418,8 @@ class YoutubeDL:
                        - If it returns None, the video is downloaded.
                        - If it returns utils.NO_DEFAULT, the user is interactively
                          asked whether to download the video.
+                       - Raise utils.DownloadCancelled(msg) to abort remaining
+                         downloads when a video is rejected.
                        match_filter_func in utils.py is one example for this.
     no_color:          Do not emit color codes in output.
     geo_bypass:        Bypass geographic restriction via faking X-Forwarded-For

@@ -489,6 +489,9 @@ class YoutubeDL:

     The following options are deprecated and may be removed in the future:

+    break_on_reject:   Stop the download process when encountering a video that
+                       has been filtered out.
+                       - `raise DownloadCancelled(msg)` in match_filter instead
     force_generic_extractor: Force downloader to use the generic extractor
                        - Use allowed_extractors = ['generic', 'default']
     playliststart:     - Use playlist_items
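Taken together, these docstring changes describe the new cancellation contract for `match_filter`. A hedged sketch of using it through the Python API (the filter body and URL are illustrative):

    from yt_dlp import YoutubeDL
    from yt_dlp.utils import DownloadCancelled

    def my_filter(info_dict, *, incomplete=False):
        # Returning a string skips only this video, with that message;
        # raising DownloadCancelled aborts everything still queued.
        if (info_dict.get('duration') or 0) > 3600:
            raise DownloadCancelled('Hit a long video, stopping the queue')
        return None  # None means: download it

    with YoutubeDL({'match_filter': my_filter}) as ydl:
        ydl.download(['https://example.com/some-playlist'])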
@@ -560,7 +563,7 @@ class YoutubeDL:
         'vbr', 'fps', 'vcodec', 'container', 'filesize', 'filesize_approx', 'rows', 'columns',
         'player_url', 'protocol', 'fragment_base_url', 'fragments', 'is_from_start',
         'preference', 'language', 'language_preference', 'quality', 'source_preference',
-        'http_headers', 'stretched_ratio', 'no_resume', 'has_drm', 'downloader_options',
+        'http_headers', 'stretched_ratio', 'no_resume', 'has_drm', 'extra_param_to_segment_url', 'hls_aes', 'downloader_options',
         'page_url', 'app', 'play_path', 'tc_url', 'flash_version', 'rtmp_live', 'rtmp_conn', 'rtmp_protocol', 'rtmp_real_time'
     }
     _format_selection_exts = {

@@ -620,7 +623,7 @@ def __init__(self, params=None, auto_init=True):
                 '\n                    You will no longer receive updates on this version')
             if current_version < MIN_SUPPORTED:
                 msg = 'Python version %d.%d is no longer supported'
-            self.deprecation_warning(
+            self.deprecated_feature(
                 f'{msg}! Please update to Python %d.%d or above' % (*current_version, *MIN_RECOMMENDED))

         if self.params.get('allow_unplayable_formats'):
@@ -1413,31 +1416,44 @@ def check_filter():
                 return 'Skipping "%s" because it is age restricted' % video_title

             match_filter = self.params.get('match_filter')
-            if match_filter is not None:
+            if match_filter is None:
+                return None
+
+            cancelled = None
+            try:
                 try:
                     ret = match_filter(info_dict, incomplete=incomplete)
                 except TypeError:
                     # For backward compatibility
                     ret = None if incomplete else match_filter(info_dict)
-                if ret is NO_DEFAULT:
-                    while True:
-                        filename = self._format_screen(self.prepare_filename(info_dict), self.Styles.FILENAME)
-                        reply = input(self._format_screen(
-                            f'Download "{filename}"? (Y/n): ', self.Styles.EMPHASIS)).lower().strip()
-                        if reply in {'y', ''}:
-                            return None
-                        elif reply == 'n':
-                            return f'Skipping {video_title}'
-                elif ret is not None:
-                    return ret
-            return None
+            except DownloadCancelled as err:
+                if err.msg is not NO_DEFAULT:
+                    raise
+                ret, cancelled = err.msg, err
+
+            if ret is NO_DEFAULT:
+                while True:
+                    filename = self._format_screen(self.prepare_filename(info_dict), self.Styles.FILENAME)
+                    reply = input(self._format_screen(
+                        f'Download "{filename}"? (Y/n): ', self.Styles.EMPHASIS)).lower().strip()
+                    if reply in {'y', ''}:
+                        return None
+                    elif reply == 'n':
+                        if cancelled:
+                            raise type(cancelled)(f'Skipping {video_title}')
+                        return f'Skipping {video_title}'
+            return ret

         if self.in_download_archive(info_dict):
             reason = '%s has already been recorded in the archive' % video_title
             break_opt, break_err = 'break_on_existing', ExistingVideoReached
         else:
-            reason = check_filter()
-            break_opt, break_err = 'break_on_reject', RejectedVideoReached
+            try:
+                reason = check_filter()
+            except DownloadCancelled as e:
+                reason, break_opt, break_err = e.msg, 'match_filter', type(e)
+            else:
+                break_opt, break_err = 'break_on_reject', RejectedVideoReached
         if reason is not None:
             if not silent:
                 self.to_screen('[download] ' + reason)
@@ -1783,7 +1799,7 @@ def _playlist_infodict(ie_result, strict=False, **kwargs):
         return {
             **info,
             'playlist_index': 0,
-            '__last_playlist_index': max(ie_result['requested_entries'] or (0, 0)),
+            '__last_playlist_index': max(ie_result.get('requested_entries') or (0, 0)),
             'extractor': ie_result['extractor'],
             'extractor_key': ie_result['extractor_key'],
         }
|
||||||
def _fill_common_fields(self, info_dict, final=True):
|
def _fill_common_fields(self, info_dict, final=True):
|
||||||
# TODO: move sanitization here
|
# TODO: move sanitization here
|
||||||
if final:
|
if final:
|
||||||
title = info_dict.get('title', NO_DEFAULT)
|
title = info_dict['fulltitle'] = info_dict.get('title')
|
||||||
if title is NO_DEFAULT:
|
|
||||||
raise ExtractorError('Missing "title" field in extractor result',
|
|
||||||
video_id=info_dict['id'], ie=info_dict['extractor'])
|
|
||||||
info_dict['fulltitle'] = title
|
|
||||||
if not title:
|
if not title:
|
||||||
if title == '':
|
if title == '':
|
||||||
self.write_debug('Extractor gave empty title. Creating a generic title')
|
self.write_debug('Extractor gave empty title. Creating a generic title')
|
||||||
|
@@ -2476,15 +2488,8 @@ def _raise_pending_errors(self, info):

     def sort_formats(self, info_dict):
         formats = self._get_formats(info_dict)
-        if not formats:
-            return
-        # Backward compatibility with InfoExtractor._sort_formats
-        field_preference = formats[0].pop('__sort_fields', None)
-        if field_preference:
-            info_dict['_format_sort_fields'] = field_preference
-
         formats.sort(key=FormatSorter(
-            self, info_dict.get('_format_sort_fields', [])).calculate_preference)
+            self, info_dict.get('_format_sort_fields') or []).calculate_preference)

     def process_video_result(self, info_dict, download=True):
         assert info_dict.get('_type', 'video') == 'video'
@@ -2571,9 +2576,13 @@ def sanitize_numeric_fields(info):
         info_dict['requested_subtitles'] = self.process_subtitles(
             info_dict['id'], subtitles, automatic_captions)

-        self.sort_formats(info_dict)
         formats = self._get_formats(info_dict)

+        # Backward compatibility with InfoExtractor._sort_formats
+        field_preference = (formats or [{}])[0].pop('__sort_fields', None)
+        if field_preference:
+            info_dict['_format_sort_fields'] = field_preference
+
         # or None ensures --clean-infojson removes it
         info_dict['_has_drm'] = any(f.get('has_drm') for f in formats) or None
         if not self.params.get('allow_unplayable_formats'):
@@ -2611,44 +2620,12 @@ def is_wellformed(f):
         if not formats:
             self.raise_no_formats(info_dict)

-        formats_dict = {}
-        # We check that all the formats have the format and format_id fields
-        for i, format in enumerate(formats):
+        for format in formats:
             sanitize_string_field(format, 'format_id')
             sanitize_numeric_fields(format)
             format['url'] = sanitize_url(format['url'])
-            if not format.get('format_id'):
-                format['format_id'] = str(i)
-            else:
-                # Sanitize format_id from characters used in format selector expression
-                format['format_id'] = re.sub(r'[\s,/+\[\]()]', '_', format['format_id'])
-            format_id = format['format_id']
-            if format_id not in formats_dict:
-                formats_dict[format_id] = []
-            formats_dict[format_id].append(format)
-
-        # Make sure all formats have unique format_id
-        common_exts = set(itertools.chain(*self._format_selection_exts.values()))
-        for format_id, ambiguous_formats in formats_dict.items():
-            ambigious_id = len(ambiguous_formats) > 1
-            for i, format in enumerate(ambiguous_formats):
-                if ambigious_id:
-                    format['format_id'] = '%s-%d' % (format_id, i)
-                if format.get('ext') is None:
-                    format['ext'] = determine_ext(format['url']).lower()
-                # Ensure there is no conflict between id and ext in format selection
-                # See https://github.com/yt-dlp/yt-dlp/issues/1282
-                if format['format_id'] != format['ext'] and format['format_id'] in common_exts:
-                    format['format_id'] = 'f%s' % format['format_id']
-
-        for i, format in enumerate(formats):
-            if format.get('format') is None:
-                format['format'] = '{id} - {res}{note}'.format(
-                    id=format['format_id'],
-                    res=self.format_resolution(format),
-                    note=format_field(format, 'format_note', ' (%s)'),
-                )
+            if format.get('ext') is None:
+                format['ext'] = determine_ext(format['url']).lower()
             if format.get('protocol') is None:
                 format['protocol'] = determine_protocol(format)
             if format.get('resolution') is None:
@@ -2660,16 +2637,46 @@ def is_wellformed(f):
             if (info_dict.get('duration') and format.get('tbr')
                     and not format.get('filesize') and not format.get('filesize_approx')):
                 format['filesize_approx'] = int(info_dict['duration'] * format['tbr'] * (1024 / 8))
+            format['http_headers'] = self._calc_headers(collections.ChainMap(format, info_dict))

-            # Add HTTP headers, so that external programs can use them from the
-            # json output
-            full_format_info = info_dict.copy()
-            full_format_info.update(format)
-            format['http_headers'] = self._calc_headers(full_format_info)
-        # Remove private housekeeping stuff
+        # This is copied to http_headers by the above _calc_headers and can now be removed
         if '__x_forwarded_for_ip' in info_dict:
             del info_dict['__x_forwarded_for_ip']

+        self.sort_formats({
+            'formats': formats,
+            '_format_sort_fields': info_dict.get('_format_sort_fields')
+        })
+
+        # Sanitize and group by format_id
+        formats_dict = {}
+        for i, format in enumerate(formats):
+            if not format.get('format_id'):
+                format['format_id'] = str(i)
+            else:
+                # Sanitize format_id from characters used in format selector expression
+                format['format_id'] = re.sub(r'[\s,/+\[\]()]', '_', format['format_id'])
+            formats_dict.setdefault(format['format_id'], []).append(format)
+
+        # Make sure all formats have unique format_id
+        common_exts = set(itertools.chain(*self._format_selection_exts.values()))
+        for format_id, ambiguous_formats in formats_dict.items():
+            ambigious_id = len(ambiguous_formats) > 1
+            for i, format in enumerate(ambiguous_formats):
+                if ambigious_id:
+                    format['format_id'] = '%s-%d' % (format_id, i)
+                # Ensure there is no conflict between id and ext in format selection
+                # See https://github.com/yt-dlp/yt-dlp/issues/1282
+                if format['format_id'] != format['ext'] and format['format_id'] in common_exts:
+                    format['format_id'] = 'f%s' % format['format_id']
+
+                if format.get('format') is None:
+                    format['format'] = '{id} - {res}{note}'.format(
+                        id=format['format_id'],
+                        res=self.format_resolution(format),
+                        note=format_field(format, 'format_note', ' (%s)'),
+                    )
+
         if self.params.get('check_formats') is True:
             formats = LazyList(self._check_formats(formats[::-1]), reverse=True)
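The `collections.ChainMap(format, info_dict)` passed to `_calc_headers` replaces the old copy-and-update dance: lookups fall through from the format to the info dict without building a merged copy. A minimal illustration (the dicts are illustrative):

    from collections import ChainMap

    fmt = {'url': 'https://example.com/v.m3u8', 'http_headers': {'Referer': 'https://example.com/'}}
    info = {'id': 'x', 'http_headers': {'User-Agent': 'UA'}}
    merged = ChainMap(fmt, info)
    assert merged['http_headers'] == {'Referer': 'https://example.com/'}  # format shadows info_dict
    assert merged['id'] == 'x'                                            # missing keys fall through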
@@ -2825,10 +2832,14 @@ def process_subtitles(self, video_id, normal_subtitles, automatic_captions):
                     self.params.get('subtitleslangs'), {'all': all_sub_langs}, use_regex=True)
             except re.error as e:
                 raise ValueError(f'Wrong regex for subtitlelangs: {e.pattern}')
-        elif normal_sub_langs:
-            requested_langs = ['en'] if 'en' in normal_sub_langs else normal_sub_langs[:1]
         else:
-            requested_langs = ['en'] if 'en' in all_sub_langs else all_sub_langs[:1]
+            requested_langs = LazyList(itertools.chain(
+                ['en'] if 'en' in normal_sub_langs else [],
+                filter(lambda f: f.startswith('en'), normal_sub_langs),
+                ['en'] if 'en' in all_sub_langs else [],
+                filter(lambda f: f.startswith('en'), all_sub_langs),
+                normal_sub_langs, all_sub_langs,
+            ))[:1]
        if requested_langs:
            self.to_screen(f'[info] {video_id}: Downloading subtitles: {", ".join(requested_langs)}')
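The rewritten default builds a lazy preference chain: exact `en`, then `en*` variants, first from the normal subtitles and then from all (including automatic) languages, before falling back to whatever exists. A sketch with illustrative language lists:

    import itertools
    from yt_dlp.utils import LazyList

    normal_sub_langs, all_sub_langs = ['fr', 'en-US'], ['fr', 'en-US', 'de']
    requested = LazyList(itertools.chain(
        ['en'] if 'en' in normal_sub_langs else [],
        filter(lambda f: f.startswith('en'), normal_sub_langs),
        ['en'] if 'en' in all_sub_langs else [],
        filter(lambda f: f.startswith('en'), all_sub_langs),
        normal_sub_langs, all_sub_langs,
    ))[:1]
    assert list(requested) == ['en-US']  # best English variant wins over plain first-listed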
@@ -3371,18 +3382,19 @@ def download_with_info_file(self, info_filename):
                 [info_filename], mode='r',
                 openhook=fileinput.hook_encoded('utf-8'))) as f:
             # FileInput doesn't have a read method, we can't call json.load
-            info = self.sanitize_info(json.loads('\n'.join(f)), self.params.get('clean_infojson', True))
-        try:
-            self.__download_wrapper(self.process_ie_result)(info, download=True)
-        except (DownloadError, EntryNotInPlaylist, ReExtractInfo) as e:
-            if not isinstance(e, EntryNotInPlaylist):
-                self.to_stderr('\r')
-            webpage_url = info.get('webpage_url')
-            if webpage_url is not None:
-                self.report_warning(f'The info failed to download: {e}; trying with URL {webpage_url}')
-                return self.download([webpage_url])
-            else:
-                raise
+            infos = [self.sanitize_info(info, self.params.get('clean_infojson', True))
+                     for info in variadic(json.loads('\n'.join(f)))]
+        for info in infos:
+            try:
+                self.__download_wrapper(self.process_ie_result)(info, download=True)
+            except (DownloadError, EntryNotInPlaylist, ReExtractInfo) as e:
+                if not isinstance(e, EntryNotInPlaylist):
+                    self.to_stderr('\r')
+                webpage_url = info.get('webpage_url')
+                if webpage_url is None:
+                    raise
+                self.report_warning(f'The info failed to download: {e}; trying with URL {webpage_url}')
+                self.download([webpage_url])
         return self._download_retcode

     @staticmethod
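`download_with_info_file` now accepts a `.info.json` holding either a single info dict or a list of them, normalised through `variadic`. A hedged sketch of the same normalisation (the file name is hypothetical):

    import json
    from yt_dlp.utils import variadic

    with open('video.info.json', encoding='utf-8') as f:
        infos = variadic(json.load(f))  # a lone dict becomes a 1-tuple; a list is kept as-is
    for info in infos:
        print(info.get('webpage_url'))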
@@ -3676,6 +3688,7 @@ def simplified_codec(f, field):
                 format_field(f, 'asr', '\t%s', func=format_decimal_suffix),
                 join_nonempty(
                     self._format_out('UNSUPPORTED', 'light red') if f.get('ext') in ('f4f', 'f4m') else None,
+                    self._format_out('DRM', 'light red') if f.get('has_drm') else None,
                     format_field(f, 'language', '[%s]'),
                     join_nonempty(format_field(f, 'format_note'),
                                   format_field(f, 'container', ignore=(None, f.get('ext'))),

@@ -3766,12 +3779,13 @@ def get_encoding(stream):
         source = detect_variant()
         if VARIANT not in (None, 'pip'):
             source += '*'
+        klass = type(self)
         write_debug(join_nonempty(
             f'{"yt-dlp" if REPOSITORY == "yt-dlp/yt-dlp" else REPOSITORY} version',
-            __version__,
-            f'[{RELEASE_GIT_HEAD}]' if RELEASE_GIT_HEAD else '',
+            f'{CHANNEL}@{__version__}',
+            f'[{RELEASE_GIT_HEAD[:9]}]' if RELEASE_GIT_HEAD else '',
             '' if source == 'unknown' else f'({source})',
-            '' if IN_CLI.get() else 'API',
+            '' if IN_CLI.get() else 'API' if klass == YoutubeDL else f'API:{self.__module__}.{klass.__qualname__}',
             delim=' '))

         if not IN_CLI.get():
@@ -317,10 +317,6 @@ def validate_outtmpl(tmpl, msg):
     if outtmpl_default == '':
         opts.skip_download = None
         del opts.outtmpl['default']
-    if outtmpl_default and not os.path.splitext(outtmpl_default)[1] and opts.extractaudio:
-        raise ValueError(
-            'Cannot download a video and extract audio into the same file! '
-            f'Use "{outtmpl_default}.%(ext)s" instead of "{outtmpl_default}" as the output template')

     def parse_chapters(name, value):
         chapters, ranges = [], []

@@ -410,7 +406,7 @@ def metadataparser_actions(f):
         except Exception:
             raise ValueError('unsupported geo-bypass country or ip-block')

-    opts.match_filter = match_filter_func(opts.match_filter)
+    opts.match_filter = match_filter_func(opts.match_filter, opts.breaking_match_filter)

     if opts.download_archive is not None:
         opts.download_archive = expand_path(opts.download_archive)

@@ -711,6 +707,7 @@ def parse_options(argv=None):
         'dumpjson', 'dump_single_json', 'getdescription', 'getduration', 'getfilename',
         'getformat', 'getid', 'getthumbnail', 'gettitle', 'geturl'
     ))
+    opts.quiet = opts.quiet or any_getting or opts.print_json or bool(opts.forceprint)

     playlist_pps = [pp for pp in postprocessors if pp.get('when') == 'playlist']
     write_playlist_infojson = (opts.writeinfojson and not opts.clean_infojson

@@ -746,7 +743,7 @@ def parse_options(argv=None):
         'client_certificate': opts.client_certificate,
         'client_certificate_key': opts.client_certificate_key,
         'client_certificate_password': opts.client_certificate_password,
-        'quiet': opts.quiet or any_getting or opts.print_json or bool(opts.forceprint),
+        'quiet': opts.quiet,
         'no_warnings': opts.no_warnings,
         'forceurl': opts.geturl,
         'forcetitle': opts.gettitle,

@@ -941,7 +938,7 @@ def _real_main(argv=None):
     if opts.rm_cachedir:
         ydl.cache.remove()

-    updater = Updater(ydl)
+    updater = Updater(ydl, opts.update_self if isinstance(opts.update_self, str) else None)
     if opts.update_self and updater.update() and actual_use:
         if updater.cmd:
             return updater.restart()

@@ -962,6 +959,8 @@ def _real_main(argv=None):
     parser.destroy()
     try:
         if opts.load_info_filename is not None:
+            if all_urls:
+                ydl.report_warning('URLs are ignored due to --load-info-json')
             return ydl.download_with_info_file(expand_path(opts.load_info_filename))
         else:
             return ydl.download(all_urls)
5 yt_dlp/__pyinstaller/__init__.py Normal file
@@ -0,0 +1,5 @@
+import os
+
+
+def get_hook_dirs():
+    return [os.path.dirname(__file__)]

31 yt_dlp/__pyinstaller/hook-yt_dlp.py Normal file
@@ -0,0 +1,31 @@
+import sys
+
+from PyInstaller.utils.hooks import collect_submodules
+
+
+def pycryptodome_module():
+    try:
+        import Cryptodome  # noqa: F401
+    except ImportError:
+        try:
+            import Crypto  # noqa: F401
+            print('WARNING: Using Crypto since Cryptodome is not available. '
+                  'Install with: pip install pycryptodomex', file=sys.stderr)
+            return 'Crypto'
+        except ImportError:
+            pass
+    return 'Cryptodome'
+
+
+def get_hidden_imports():
+    yield 'yt_dlp.compat._legacy'
+    yield pycryptodome_module()
+    yield from collect_submodules('websockets')
+    # These are auto-detected, but explicitly add them just in case
+    yield from ('mutagen', 'brotli', 'certifi')
+
+
+hiddenimports = list(get_hidden_imports())
+print(f'Adding imports: {hiddenimports}')
+
+excludedimports = ['youtube_dl', 'youtube_dlc', 'test', 'ytdlp_plugins', 'devscripts']
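These two new files let PyInstaller pick up yt-dlp's dynamically imported modules automatically; PyInstaller ≥ 4.0 can discover hook directories through a packaging entry point that calls `get_hook_dirs()` (that packaging wiring is assumed here, it is not part of this diff). A quick sanity check:

    from yt_dlp.__pyinstaller import get_hook_dirs
    print(get_hook_dirs())  # e.g. ['.../site-packages/yt_dlp/__pyinstaller']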
@@ -2,17 +2,17 @@
 from math import ceil

 from .compat import compat_ord
-from .dependencies import Cryptodome_AES
+from .dependencies import Cryptodome
 from .utils import bytes_to_intlist, intlist_to_bytes

-if Cryptodome_AES:
+if Cryptodome.AES:
     def aes_cbc_decrypt_bytes(data, key, iv):
         """ Decrypt bytes with AES-CBC using pycryptodome """
-        return Cryptodome_AES.new(key, Cryptodome_AES.MODE_CBC, iv).decrypt(data)
+        return Cryptodome.AES.new(key, Cryptodome.AES.MODE_CBC, iv).decrypt(data)

     def aes_gcm_decrypt_and_verify_bytes(data, key, tag, nonce):
         """ Decrypt bytes with AES-GCM using pycryptodome """
-        return Cryptodome_AES.new(key, Cryptodome_AES.MODE_GCM, nonce).decrypt_and_verify(data, tag)
+        return Cryptodome.AES.new(key, Cryptodome.AES.MODE_GCM, nonce).decrypt_and_verify(data, tag)

 else:
     def aes_cbc_decrypt_bytes(data, key, iv):
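The renamed `Cryptodome` dependency object keeps the fast-path byte helpers intact. A hedged round-trip check, assuming pycryptodome(x) is installed (key, IV and message are illustrative):

    from Cryptodome.Cipher import AES
    from yt_dlp.aes import aes_cbc_decrypt_bytes

    key, iv = b'k' * 16, b'i' * 16
    ciphertext = AES.new(key, AES.MODE_CBC, iv).encrypt(b'sixteen byte msg')  # block-aligned input
    assert aes_cbc_decrypt_bytes(ciphertext, key, iv) == b'sixteen byte msg'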
@@ -1,5 +1,4 @@
 import contextlib
-import errno
 import json
 import os
 import re

@@ -39,11 +38,7 @@ def store(self, section, key, data, dtype='json'):

         fn = self._get_cache_fn(section, key, dtype)
         try:
-            try:
-                os.makedirs(os.path.dirname(fn))
-            except OSError as ose:
-                if ose.errno != errno.EEXIST:
-                    raise
+            os.makedirs(os.path.dirname(fn), exist_ok=True)
             self._ydl.write_debug(f'Saving {section}.{key} to cache')
             write_json_file({'yt-dlp_version': __version__, 'data': data}, fn)
         except Exception:
@@ -8,7 +8,7 @@

 # XXX: Implement this the same way as other DeprecationWarnings without circular import
 passthrough_module(__name__, '._legacy', callback=lambda attr: warnings.warn(
-    DeprecationWarning(f'{__name__}.{attr} is deprecated'), stacklevel=3))
+    DeprecationWarning(f'{__name__}.{attr} is deprecated'), stacklevel=5))


 # HTMLParseError has been deprecated in Python 3.3 and removed in

@@ -70,9 +70,3 @@ def compat_expanduser(path):
         return userhome + path[i:]
 else:
     compat_expanduser = os.path.expanduser
-
-
-# NB: Add modules that are imported dynamically here so that PyInstaller can find them
-# See https://github.com/pyinstaller/pyinstaller-hooks-contrib/issues/438
-if False:
-    from . import _legacy  # noqa: F401
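Bumping `stacklevel` makes the DeprecationWarning point at the caller's own line rather than at the passthrough machinery's extra frames. The effect can be observed directly (this mirrors the `test_compat` change above):

    import warnings
    import yt_dlp.compat as compat

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter('always')
        compat.compat_basestring  # resolved via compat._legacy, warns on access
    assert any(issubclass(w.category, DeprecationWarning) for w in caught)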
yt_dlp/compat/_legacy.py

@@ -1,5 +1,6 @@
 """ Do not use! """
 
+import base64
 import collections
 import ctypes
 import getpass

@@ -29,10 +30,11 @@
 from re import Pattern as compat_Pattern  # noqa: F401
 from re import match as compat_Match  # noqa: F401
 
+from . import compat_expanduser, compat_HTMLParseError, compat_realpath
 from .compat_utils import passthrough_module
-from ..dependencies import Cryptodome_AES as compat_pycrypto_AES  # noqa: F401
 from ..dependencies import brotli as compat_brotli  # noqa: F401
 from ..dependencies import websockets as compat_websockets  # noqa: F401
+from ..dependencies.Cryptodome import AES as compat_pycrypto_AES  # noqa: F401
 
 passthrough_module(__name__, '...utils', ('WINDOWS_VT_MODE', 'windows_enable_vt_mode'))

@@ -47,23 +49,25 @@ def compat_setenv(key, value, env=os.environ):
     env[key] = value
 
 
+compat_base64_b64decode = base64.b64decode
 compat_basestring = str
 compat_casefold = str.casefold
 compat_chr = chr
 compat_collections_abc = collections.abc
-compat_cookiejar = http.cookiejar
-compat_cookiejar_Cookie = http.cookiejar.Cookie
-compat_cookies = http.cookies
-compat_cookies_SimpleCookie = http.cookies.SimpleCookie
-compat_etree_Element = etree.Element
-compat_etree_register_namespace = etree.register_namespace
+compat_cookiejar = compat_http_cookiejar = http.cookiejar
+compat_cookiejar_Cookie = compat_http_cookiejar_Cookie = http.cookiejar.Cookie
+compat_cookies = compat_http_cookies = http.cookies
+compat_cookies_SimpleCookie = compat_http_cookies_SimpleCookie = http.cookies.SimpleCookie
+compat_etree_Element = compat_xml_etree_ElementTree_Element = etree.Element
+compat_etree_register_namespace = compat_xml_etree_register_namespace = etree.register_namespace
 compat_filter = filter
 compat_get_terminal_size = shutil.get_terminal_size
 compat_getenv = os.getenv
-compat_getpass = getpass.getpass
+compat_getpass = compat_getpass_getpass = getpass.getpass
 compat_html_entities = html.entities
 compat_html_entities_html5 = html.entities.html5
-compat_HTMLParser = html.parser.HTMLParser
+compat_html_parser_HTMLParseError = compat_HTMLParseError
+compat_HTMLParser = compat_html_parser_HTMLParser = html.parser.HTMLParser
 compat_http_client = http.client
 compat_http_server = http.server
 compat_input = input

@@ -72,6 +76,8 @@ def compat_setenv(key, value, env=os.environ):
 compat_kwargs = lambda kwargs: kwargs
 compat_map = map
 compat_numeric_types = (int, float, complex)
+compat_os_path_expanduser = compat_expanduser
+compat_os_path_realpath = compat_realpath
 compat_print = print
 compat_shlex_split = shlex.split
 compat_socket_create_connection = socket.create_connection

@@ -81,7 +87,9 @@ def compat_setenv(key, value, env=os.environ):
 compat_subprocess_get_DEVNULL = lambda: DEVNULL
 compat_tokenize_tokenize = tokenize.tokenize
 compat_urllib_error = urllib.error
+compat_urllib_HTTPError = urllib.error.HTTPError
 compat_urllib_parse = urllib.parse
+compat_urllib_parse_parse_qs = urllib.parse.parse_qs
 compat_urllib_parse_quote = urllib.parse.quote
 compat_urllib_parse_quote_plus = urllib.parse.quote_plus
 compat_urllib_parse_unquote_plus = urllib.parse.unquote_plus

@@ -90,8 +98,10 @@ def compat_setenv(key, value, env=os.environ):
 compat_urllib_request = urllib.request
 compat_urllib_request_DataHandler = urllib.request.DataHandler
 compat_urllib_response = urllib.response
-compat_urlretrieve = urllib.request.urlretrieve
-compat_xml_parse_error = etree.ParseError
+compat_urlretrieve = compat_urllib_request_urlretrieve = urllib.request.urlretrieve
+compat_xml_parse_error = compat_xml_etree_ElementTree_ParseError = etree.ParseError
 compat_xpath = lambda xpath: xpath
 compat_zip = zip
 workaround_optparse_bug9161 = lambda: None
 
+legacy = []
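The chained assignments above are the whole mechanism of the renaming: a single statement binds both the legacy name and the new fully-qualified compat name to the same object. A trivial, self-contained illustration:

import http.cookiejar

compat_cookiejar = compat_http_cookiejar = http.cookiejar
assert compat_cookiejar is compat_http_cookiejar is http.cookiejar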
yt_dlp/compat/compat_utils.py

@@ -1,5 +1,6 @@
 import collections
 import contextlib
+import functools
 import importlib
 import sys
 import types

@@ -10,61 +11,73 @@
 def get_package_info(module):
-    parent = module.__name__.split('.')[0]
-    parent_module = None
-    with contextlib.suppress(ImportError):
-        parent_module = importlib.import_module(parent)
-
-    for attr in ('__version__', 'version_string', 'version'):
-        version = getattr(parent_module, attr, None)
-        if version is not None:
-            break
-    return _Package(getattr(module, '_yt_dlp__identifier', parent), str(version))
+    return _Package(
+        name=getattr(module, '_yt_dlp__identifier', module.__name__),
+        version=str(next(filter(None, (
+            getattr(module, attr, None)
+            for attr in ('__version__', 'version_string', 'version')
+        )), None)))
 
 
 def _is_package(module):
-    try:
-        module.__getattribute__('__path__')
-    except AttributeError:
-        return False
-    return True
+    return '__path__' in vars(module)
+
+
+def _is_dunder(name):
+    return name.startswith('__') and name.endswith('__')
 
 
-def passthrough_module(parent, child, allowed_attributes=None, *, callback=lambda _: None):
-    parent_module = importlib.import_module(parent)
-    child_module = None  # Import child module only as needed
-
-    class PassthroughModule(types.ModuleType):
-        def __getattr__(self, attr):
-            if _is_package(parent_module):
-                with contextlib.suppress(ImportError):
-                    return importlib.import_module(f'.{attr}', parent)
-
-            ret = self.__from_child(attr)
-            if ret is _NO_ATTRIBUTE:
-                raise AttributeError(f'module {parent} has no attribute {attr}')
-            callback(attr)
-            return ret
-
-        def __from_child(self, attr):
-            if allowed_attributes is None:
-                if attr.startswith('__') and attr.endswith('__'):
-                    return _NO_ATTRIBUTE
-            elif attr not in allowed_attributes:
-                return _NO_ATTRIBUTE
-
-            nonlocal child_module
-            child_module = child_module or importlib.import_module(child, parent)
-
-            with contextlib.suppress(AttributeError):
-                return getattr(child_module, attr)
-
-            if _is_package(child_module):
-                with contextlib.suppress(ImportError):
-                    return importlib.import_module(f'.{attr}', child)
-
-            return _NO_ATTRIBUTE
-
-    # Python 3.6 does not have module level __getattr__
-    # https://peps.python.org/pep-0562/
-    sys.modules[parent].__class__ = PassthroughModule
+class EnhancedModule(types.ModuleType):
+    def __bool__(self):
+        return vars(self).get('__bool__', lambda: True)()
+
+    def __getattribute__(self, attr):
+        try:
+            ret = super().__getattribute__(attr)
+        except AttributeError:
+            if _is_dunder(attr):
+                raise
+            getter = getattr(self, '__getattr__', None)
+            if not getter:
+                raise
+            ret = getter(attr)
+        return ret.fget() if isinstance(ret, property) else ret
+
+
+def passthrough_module(parent, child, allowed_attributes=(..., ), *, callback=lambda _: None):
+    """Passthrough parent module into a child module, creating the parent if necessary"""
+    def __getattr__(attr):
+        if _is_package(parent):
+            with contextlib.suppress(ModuleNotFoundError):
+                return importlib.import_module(f'.{attr}', parent.__name__)
+
+        ret = from_child(attr)
+        if ret is _NO_ATTRIBUTE:
+            raise AttributeError(f'module {parent.__name__} has no attribute {attr}')
+        callback(attr)
+        return ret
+
+    @functools.lru_cache(maxsize=None)
+    def from_child(attr):
+        nonlocal child
+        if attr not in allowed_attributes:
+            if ... not in allowed_attributes or _is_dunder(attr):
+                return _NO_ATTRIBUTE
+
+        if isinstance(child, str):
+            child = importlib.import_module(child, parent.__name__)
+
+        if _is_package(child):
+            with contextlib.suppress(ImportError):
+                return passthrough_module(f'{parent.__name__}.{attr}',
+                                          importlib.import_module(f'.{attr}', child.__name__))
+
+        with contextlib.suppress(AttributeError):
+            return getattr(child, attr)
+
+        return _NO_ATTRIBUTE
+
+    parent = sys.modules.get(parent, types.ModuleType(parent))
+    parent.__class__ = EnhancedModule
+    parent.__getattr__ = __getattr__
+    return parent
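For readers unfamiliar with the mechanism the rewritten passthrough_module() builds on: since PEP 562, a function named __getattr__ placed in a module's namespace is consulted for attribute lookups that would otherwise fail. A minimal standalone sketch of that idea (not yt-dlp code; fake_compat is a made-up name):

import importlib
import sys
import types


def make_passthrough(parent_name, child_name):
    # Make failed attribute lookups on `parent_name` fall through to `child_name`
    parent = sys.modules.get(parent_name) or types.ModuleType(parent_name)

    def __getattr__(attr):
        child = importlib.import_module(child_name)
        return getattr(child, attr)  # raises AttributeError naturally if missing

    parent.__getattr__ = __getattr__  # PEP 562: lives in the module namespace
    sys.modules[parent_name] = parent
    return parent


mod = make_passthrough('fake_compat', 'math')
print(mod.pi)  # 3.141592653589793, resolved through the fallback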
yt_dlp/cookies.py

@@ -20,6 +20,7 @@
     aes_gcm_decrypt_and_verify_bytes,
     unpad_pkcs7,
 )
+from .compat import functools
 from .dependencies import (
     _SECRETSTORAGE_UNAVAILABLE_REASON,
     secretstorage,

@@ -383,9 +384,14 @@ class LinuxChromeCookieDecryptor(ChromeCookieDecryptor):
     def __init__(self, browser_keyring_name, logger, *, keyring=None):
         self._logger = logger
         self._v10_key = self.derive_key(b'peanuts')
-        password = _get_linux_keyring_password(browser_keyring_name, keyring, logger)
-        self._v11_key = None if password is None else self.derive_key(password)
         self._cookie_counts = {'v10': 0, 'v11': 0, 'other': 0}
+        self._browser_keyring_name = browser_keyring_name
+        self._keyring = keyring
+
+    @functools.cached_property
+    def _v11_key(self):
+        password = _get_linux_keyring_password(self._browser_keyring_name, self._keyring, self._logger)
+        return None if password is None else self.derive_key(password)
 
     @staticmethod
     def derive_key(password):
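The refactor above defers the keyring lookup: instead of hitting the keyring in every constructor call, _v11_key is now computed on first access and memoized. A minimal illustration of the functools.cached_property pattern (illustrative class, not yt-dlp code):

import functools


class Decryptor:
    def __init__(self, name):
        self._name = name  # cheap: no keyring round-trip at construction time

    @functools.cached_property
    def key(self):
        print(f'querying keyring for {self._name}...')  # runs at most once
        return b'derived-key'


d = Decryptor('chrome')
d.key  # triggers the lookup, then caches the result
d.key  # served from the cache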
yt_dlp/dependencies/Cryptodome.py (new file, 38 lines)

@@ -0,0 +1,38 @@
+from ..compat.compat_utils import passthrough_module
+
+try:
+    import Cryptodome as _parent
+except ImportError:
+    try:
+        import Crypto as _parent
+    except (ImportError, SyntaxError):  # Old Crypto gives SyntaxError in newer Python
+        _parent = passthrough_module(__name__, 'no_Cryptodome')
+        __bool__ = lambda: False
+
+del passthrough_module
+
+__version__ = ''
+AES = PKCS1_v1_5 = Blowfish = PKCS1_OAEP = SHA1 = CMAC = RSA = None
+try:
+    if _parent.__name__ == 'Cryptodome':
+        from Cryptodome import __version__
+        from Cryptodome.Cipher import AES, PKCS1_OAEP, Blowfish, PKCS1_v1_5
+        from Cryptodome.Hash import CMAC, SHA1
+        from Cryptodome.PublicKey import RSA
+    elif _parent.__name__ == 'Crypto':
+        from Crypto import __version__
+        from Crypto.Cipher import AES, PKCS1_OAEP, Blowfish, PKCS1_v1_5  # noqa: F401
+        from Crypto.Hash import CMAC, SHA1  # noqa: F401
+        from Crypto.PublicKey import RSA  # noqa: F401
+except ImportError:
+    __version__ = f'broken {__version__}'.strip()
+
+
+_yt_dlp__identifier = _parent.__name__
+if AES and _yt_dlp__identifier == 'Crypto':
+    try:
+        # In pycrypto, mode defaults to ECB. See:
+        # https://www.pycryptodome.org/en/latest/src/vs_pycrypto.html#:~:text=not%20have%20ECB%20as%20default%20mode
+        AES.new(b'abcdefghijklmnop')
+    except TypeError:
+        _yt_dlp__identifier = 'pycrypto'
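With the shim in place, callers probe a single attribute instead of re-implementing the Cryptodome/Crypto fallback dance themselves. A hedged usage sketch (assuming pycryptodomex is installed):

from yt_dlp.dependencies import Cryptodome

if Cryptodome.AES:  # None when neither pycryptodomex nor pycrypto is available
    cipher = Cryptodome.AES.new(b'0123456789abcdef', Cryptodome.AES.MODE_CBC, b'0123456789abcdef')
else:
    print('AES support missing')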
yt_dlp/dependencies/__init__.py

@@ -23,24 +23,6 @@
 certifi = None
 
 
-try:
-    from Cryptodome.Cipher import AES as Cryptodome_AES
-except ImportError:
-    try:
-        from Crypto.Cipher import AES as Cryptodome_AES
-    except (ImportError, SyntaxError):  # Old Crypto gives SyntaxError in newer Python
-        Cryptodome_AES = None
-    else:
-        try:
-            # In pycrypto, mode defaults to ECB. See:
-            # https://www.pycryptodome.org/en/latest/src/vs_pycrypto.html#:~:text=not%20have%20ECB%20as%20default%20mode
-            Cryptodome_AES.new(b'abcdefghijklmnop')
-        except TypeError:
-            pass
-        else:
-            Cryptodome_AES._yt_dlp__identifier = 'pycrypto'
-
-
 try:
     import mutagen
 except ImportError:

@@ -84,12 +66,16 @@
     xattr._yt_dlp__identifier = 'pyxattr'
 
 
+from . import Cryptodome
+
 all_dependencies = {k: v for k, v in globals().items() if not k.startswith('_')}
 available_dependencies = {k: v for k, v in all_dependencies.items() if v}
 
 
+# Deprecated
+Cryptodome_AES = Cryptodome.AES
 
 
 __all__ = [
     'all_dependencies',
     'available_dependencies',
yt_dlp/downloader/external.py

@@ -104,6 +104,7 @@ def supports(cls, info_dict):
         return all((
             not info_dict.get('to_stdout') or Features.TO_STDOUT in cls.SUPPORTED_FEATURES,
             '+' not in info_dict['protocol'] or Features.MULTIPLE_FORMATS in cls.SUPPORTED_FEATURES,
+            not traverse_obj(info_dict, ('hls_aes', ...), 'extra_param_to_segment_url'),
             all(proto in cls.SUPPORTED_PROTOCOLS for proto in info_dict['protocol'].split('+')),
         ))

@@ -175,7 +176,7 @@ def _call_downloader(self, tmpfilename, info_dict):
         return 0
 
     def _call_process(self, cmd, info_dict):
-        return Popen.run(cmd, text=True, stderr=subprocess.PIPE)
+        return Popen.run(cmd, text=True, stderr=subprocess.PIPE if self._CAPTURE_STDERR else None)
 
 
 class CurlFD(ExternalFD):
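The new supports() clause keeps external downloaders away from jobs they cannot finish: when the info dict carries hls_aes parameters (or an extra_param_to_segment_url), only the native downloader knows how to apply them. Roughly:

from yt_dlp.utils import traverse_obj

info_dict = {'protocol': 'm3u8_native', 'hls_aes': {'key': '0xdeadbeef'}}
assert traverse_obj(info_dict, ('hls_aes', ...))  # truthy, so ExternalFD.supports() now returns False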
yt_dlp/downloader/fragment.py

@@ -360,7 +360,8 @@ def decrypt_fragment(fragment, frag_content):
             if not decrypt_info or decrypt_info['METHOD'] != 'AES-128':
                 return frag_content
             iv = decrypt_info.get('IV') or struct.pack('>8xq', fragment['media_sequence'])
-            decrypt_info['KEY'] = decrypt_info.get('KEY') or _get_key(info_dict.get('_decryption_key_url') or decrypt_info['URI'])
+            decrypt_info['KEY'] = (decrypt_info.get('KEY')
+                                   or _get_key(traverse_obj(info_dict, ('hls_aes', 'uri')) or decrypt_info['URI']))
             # Don't decrypt the content in tests since the data is explicitly truncated and it's not to a valid block
             # size (see https://github.com/ytdl-org/youtube-dl/pull/27660). Tests only care that the correct data downloaded,
             # not what it decrypts to.

@@ -382,7 +383,7 @@ def download_and_append_fragments_multiple(self, *args, **kwargs):
         max_workers = self.params.get('concurrent_fragment_downloads', 1)
         if max_progress > 1:
             self._prepare_multiline_status(max_progress)
-        is_live = any(traverse_obj(args, (..., 2, 'is_live'), default=[]))
+        is_live = any(traverse_obj(args, (..., 2, 'is_live')))
 
         def thread_func(idx, ctx, fragments, info_dict, tpe):
             ctx['max_progress'] = max_progress

@@ -465,7 +466,8 @@ def error_callback(err, count, retries):
             for retry in RetryManager(self.params.get('fragment_retries'), error_callback):
                 try:
                     ctx['fragment_count'] = fragment.get('fragment_count')
-                    if not self._download_fragment(ctx, fragment['url'], info_dict, headers):
+                    if not self._download_fragment(
+                            ctx, fragment['url'], info_dict, headers, info_dict.get('request_data')):
                         return
                 except (urllib.error.HTTPError, http.client.IncompleteRead) as err:
                     retry.error = err

@@ -495,7 +497,7 @@ def _download_fragment(fragment):
             download_fragment(fragment, ctx_copy)
            return fragment, fragment['frag_index'], ctx_copy.get('fragment_filename_sanitized')
 
-        self.report_warning('The download speed shown is only of one thread. This is a known issue and patches are welcome')
+        self.report_warning('The download speed shown is only of one thread. This is a known issue')
         with tpe or concurrent.futures.ThreadPoolExecutor(max_workers) as pool:
             try:
                 for fragment, frag_index, frag_filename in pool.map(_download_fragment, fragments):
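A quick aside on the default-IV expression above: per the HLS spec, when an AES-128 key has no explicit IV, the IV is the fragment's media sequence number as a big-endian integer in a 16-byte block. '>8xq' packs exactly that, 8 zero pad bytes followed by a signed 64-bit big-endian value:

import struct

iv = struct.pack('>8xq', 7)  # media_sequence = 7
assert iv == b'\x00' * 15 + b'\x07' and len(iv) == 16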
yt_dlp/downloader/hls.py

@@ -7,8 +7,15 @@
 from .external import FFmpegFD
 from .fragment import FragmentFD
 from .. import webvtt
-from ..dependencies import Cryptodome_AES
-from ..utils import bug_reports_message, parse_m3u8_attributes, update_url_query
+from ..dependencies import Cryptodome
+from ..utils import (
+    bug_reports_message,
+    parse_m3u8_attributes,
+    remove_start,
+    traverse_obj,
+    update_url_query,
+    urljoin,
+)
 
 
 class HlsFD(FragmentFD):

@@ -63,7 +70,7 @@ def real_download(self, filename, info_dict):
         can_download, message = self.can_download(s, info_dict, self.params.get('allow_unplayable_formats')), None
         if can_download:
             has_ffmpeg = FFmpegFD.available()
-            no_crypto = not Cryptodome_AES and '#EXT-X-KEY:METHOD=AES-128' in s
+            no_crypto = not Cryptodome.AES and '#EXT-X-KEY:METHOD=AES-128' in s
             if no_crypto and has_ffmpeg:
                 can_download, message = False, 'The stream has AES-128 encryption and pycryptodomex is not available'
             elif no_crypto:

@@ -150,6 +157,13 @@ def is_ad_fragment_end(s):
         i = 0
         media_sequence = 0
         decrypt_info = {'METHOD': 'NONE'}
+        external_aes_key = traverse_obj(info_dict, ('hls_aes', 'key'))
+        if external_aes_key:
+            external_aes_key = binascii.unhexlify(remove_start(external_aes_key, '0x'))
+            assert len(external_aes_key) in (16, 24, 32), 'Invalid length for HLS AES-128 key'
+        external_aes_iv = traverse_obj(info_dict, ('hls_aes', 'iv'))
+        if external_aes_iv:
+            external_aes_iv = binascii.unhexlify(remove_start(external_aes_iv, '0x').zfill(32))
         byte_range = {}
         discontinuity_count = 0
         frag_index = 0

@@ -165,10 +179,7 @@ def is_ad_fragment_end(s):
                 frag_index += 1
                 if frag_index <= ctx['fragment_index']:
                     continue
-                frag_url = (
-                    line
-                    if re.match(r'^https?://', line)
-                    else urllib.parse.urljoin(man_url, line))
+                frag_url = urljoin(man_url, line)
                 if extra_query:
                     frag_url = update_url_query(frag_url, extra_query)

@@ -190,10 +201,7 @@ def is_ad_fragment_end(s):
                     return False
                 frag_index += 1
                 map_info = parse_m3u8_attributes(line[11:])
-                frag_url = (
-                    map_info.get('URI')
-                    if re.match(r'^https?://', map_info.get('URI'))
-                    else urllib.parse.urljoin(man_url, map_info.get('URI')))
+                frag_url = urljoin(man_url, map_info.get('URI'))
                 if extra_query:
                     frag_url = update_url_query(frag_url, extra_query)

@@ -218,15 +226,18 @@ def is_ad_fragment_end(s):
                 decrypt_url = decrypt_info.get('URI')
                 decrypt_info = parse_m3u8_attributes(line[11:])
                 if decrypt_info['METHOD'] == 'AES-128':
-                    if 'IV' in decrypt_info:
+                    if external_aes_iv:
+                        decrypt_info['IV'] = external_aes_iv
+                    elif 'IV' in decrypt_info:
                         decrypt_info['IV'] = binascii.unhexlify(decrypt_info['IV'][2:].zfill(32))
-                    if not re.match(r'^https?://', decrypt_info['URI']):
-                        decrypt_info['URI'] = urllib.parse.urljoin(
-                            man_url, decrypt_info['URI'])
-                    if extra_query:
-                        decrypt_info['URI'] = update_url_query(decrypt_info['URI'], extra_query)
-                    if decrypt_url != decrypt_info['URI']:
-                        decrypt_info['KEY'] = None
+                    if external_aes_key:
+                        decrypt_info['KEY'] = external_aes_key
+                    else:
+                        decrypt_info['URI'] = urljoin(man_url, decrypt_info['URI'])
+                        if extra_query:
+                            decrypt_info['URI'] = update_url_query(decrypt_info['URI'], extra_query)
+                        if decrypt_url != decrypt_info['URI']:
+                            decrypt_info['KEY'] = None
 
             elif line.startswith('#EXT-X-MEDIA-SEQUENCE'):
                 media_sequence = int(line[22:])
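To make the new external-key plumbing concrete: the key and IV arrive as hex strings, optionally '0x'-prefixed; the code strips the prefix, zero-pads the IV to 32 hex digits and unhexlifies. A standalone sketch (with a simplified stand-in for yt_dlp.utils.remove_start):

import binascii


def remove_start(s, prefix):  # simplified stand-in
    return s[len(prefix):] if s and s.startswith(prefix) else s


key = binascii.unhexlify(remove_start('0x00112233445566778899aabbccddeeff', '0x'))
assert len(key) in (16, 24, 32)
iv = binascii.unhexlify(remove_start('0x2a', '0x').zfill(32))
assert len(iv) == 16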
yt_dlp/downloader/http.py

@@ -211,7 +211,12 @@ def close_stream():
                 ctx.stream = None
 
         def download():
-            data_len = ctx.data.info().get('Content-length', None)
+            data_len = ctx.data.info().get('Content-length')
+
+            if ctx.data.info().get('Content-encoding'):
+                # Content-encoding is present, Content-length is not reliable anymore as we are
+                # doing auto decompression. (See: https://github.com/yt-dlp/yt-dlp/pull/6176)
+                data_len = None
 
             # Range HTTP header may be ignored/unsupported by a webserver
             # (e.g. extractor/scivee.py, extractor/bambuser.py).
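The guard added above exists because Content-Length describes the bytes on the wire, while the downloader counts bytes after transparent decompression, so the two cannot be compared once Content-Encoding is set. Illustration:

import gzip

body = b'a' * 1000
wire = gzip.compress(body)  # what Content-Length would describe
assert len(wire) != len(body)  # a length check against the decompressed stream would fail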
yt_dlp/extractor/_extractors.py

@@ -21,7 +21,8 @@
     YoutubeYtBeIE,
     YoutubeYtUserIE,
     YoutubeWatchLaterIE,
-    YoutubeShortsAudioPivotIE
+    YoutubeShortsAudioPivotIE,
+    YoutubeConsentRedirectIE,
 )
 
 from .abc import (

@@ -101,6 +102,7 @@
     AmericasTestKitchenIE,
     AmericasTestKitchenSeasonIE,
 )
+from .anchorfm import AnchorFMEpisodeIE
 from .angel import AngelIE
 from .anvato import AnvatoIE
 from .aol import AolIE

@@ -121,6 +123,7 @@
 from .archiveorg import (
     ArchiveOrgIE,
     YoutubeWebArchiveIE,
+    VLiveWebArchiveIE,
 )
 from .arcpublishing import ArcPublishingIE
 from .arkena import ArkenaIE

@@ -236,12 +239,14 @@
     BleacherReportIE,
     BleacherReportCMSIE,
 )
+from .blerp import BlerpIE
 from .blogger import BloggerIE
 from .bloomberg import BloombergIE
 from .bokecc import BokeCCIE
 from .bongacams import BongaCamsIE
 from .bostonglobe import BostonGlobeIE
 from .box import BoxIE
+from .boxcast import BoxCastVideoIE
 from .booyah import BooyahClipsIE
 from .bpb import BpbIE
 from .br import (

@@ -505,6 +510,7 @@
 )
 from .eagleplatform import EaglePlatformIE, ClipYouEmbedIE
 from .ebaumsworld import EbaumsWorldIE
+from .ebay import EbayIE
 from .echomsk import EchoMskIE
 from .egghead import (
     EggheadCourseIE,

@@ -743,6 +749,7 @@
     HungamaAlbumPlaylistIE,
 )
 from .hypem import HypemIE
+from .hypergryph import MonsterSirenHypergryphMusicIE
 from .hytale import HytaleIE
 from .icareus import IcareusIE
 from .ichinanalive import (

@@ -855,6 +862,7 @@
 from .kickstarter import KickStarterIE
 from .kinja import KinjaEmbedIE
 from .kinopoisk import KinoPoiskIE
+from .kommunetv import KommunetvIE
 from .kompas import KompasVideoIE
 from .konserthusetplay import KonserthusetPlayIE
 from .koo import KooIE

@@ -906,6 +914,10 @@
     LePlaylistIE,
     LetvCloudIE,
 )
+from .lefigaro import (
+    LeFigaroVideoEmbedIE,
+    LeFigaroVideoSectionIE,
+)
 from .lego import LEGOIE
 from .lemonde import LemondeIE
 from .lenta import LentaIE

@@ -954,6 +966,9 @@
     LRTVODIE,
     LRTStreamIE
 )
+from .lumni import (
+    LumniIE
+)
 from .lynda import (
     LyndaIE,
     LyndaCourseIE

@@ -1195,6 +1210,8 @@
 from .nfl import (
     NFLIE,
     NFLArticleIE,
+    NFLPlusEpisodeIE,
+    NFLPlusReplayIE,
 )
 from .nhk import (
     NhkVodIE,

@@ -1278,6 +1295,7 @@
 from .ntvcojp import NTVCoJpCUIE
 from .ntvde import NTVDeIE
 from .ntvru import NTVRuIE
+from .nubilesporn import NubilesPornIE
 from .nytimes import (
     NYTimesIE,
     NYTimesArticleIE,

@@ -1285,8 +1303,10 @@
 )
 from .nuvid import NuvidIE
 from .nzherald import NZHeraldIE
+from .nzonscreen import NZOnScreenIE
 from .nzz import NZZIE
 from .odatv import OdaTVIE
+from .odkmedia import OnDemandChinaEpisodeIE
 from .odnoklassniki import OdnoklassnikiIE
 from .oftv import (
     OfTVIE,

@@ -1450,6 +1470,7 @@
     PuhuTVIE,
     PuhuTVSerieIE,
 )
+from .pr0gramm import Pr0grammStaticIE, Pr0grammIE
 from .prankcast import PrankCastIE
 from .premiershiprugby import PremiershipRugbyIE
 from .presstv import PressTVIE

@@ -1511,6 +1532,10 @@
     RayWenderlichCourseIE,
 )
 from .rbmaradio import RBMARadioIE
+from .rbgtum import (
+    RbgTumIE,
+    RbgTumCourseIE,
+)
 from .rcs import (
     RCSIE,
     RCSEmbedsIE,

@@ -1555,7 +1580,10 @@
 )
 from .roosterteeth import RoosterTeethIE, RoosterTeethSeriesIE
 from .rottentomatoes import RottenTomatoesIE
-from .rozhlas import RozhlasIE
+from .rozhlas import (
+    RozhlasIE,
+    RozhlasVltavaIE,
+)
 from .rte import RteIE, RteRadioIE
 from .rtlnl import (
     RtlNlIE,

@@ -1819,7 +1847,10 @@
     TeacherTubeUserIE,
 )
 from .teachingchannel import TeachingChannelIE
-from .teamcoco import TeamcocoIE
+from .teamcoco import (
+    TeamcocoIE,
+    ConanClassicIE,
+)
 from .teamtreehouse import TeamTreeHouseIE
 from .techtalks import TechTalksIE
 from .ted import (

@@ -1831,6 +1862,7 @@
 from .tele5 import Tele5IE
 from .tele13 import Tele13IE
 from .telebruxelles import TeleBruxellesIE
+from .telecaribe import TelecaribePlayIE
 from .telecinco import TelecincoIE
 from .telegraaf import TelegraafIE
 from .telegram import TelegramEmbedIE

@@ -1845,7 +1877,7 @@
 )
 from .teletask import TeleTaskIE
 from .telewebion import TelewebionIE
-from .tempo import TempoIE
+from .tempo import TempoIE, IVXPlayerIE
 from .tencent import (
     IflixEpisodeIE,
     IflixSeriesIE,

@@ -1943,10 +1975,9 @@
 )
 from .tumblr import TumblrIE
 from .tunein import (
-    TuneInClipIE,
     TuneInStationIE,
-    TuneInProgramIE,
-    TuneInTopicIE,
+    TuneInPodcastIE,
+    TuneInPodcastEpisodeIE,
     TuneInShortenerIE,
 )
 from .tunepk import TunePkIE

@@ -2044,6 +2075,10 @@
     TwitterSpacesIE,
     TwitterShortenerIE,
 )
+from .txxx import (
+    TxxxIE,
+    PornTopIE,
+)
 from .udemy import (
     UdemyIE,
     UdemyCourseIE

@@ -2169,17 +2204,14 @@
     ViuIE,
     ViuPlaylistIE,
     ViuOTTIE,
+    ViuOTTIndonesiaIE,
 )
 from .vk import (
     VKIE,
     VKUserVideosIE,
     VKWallPostIE,
 )
-from .vlive import (
-    VLiveIE,
-    VLivePostIE,
-    VLiveChannelIE,
-)
+from .vocaroo import VocarooIE
 from .vodlocker import VodlockerIE
 from .vodpl import VODPlIE
 from .vodplatform import VODPlatformIE

@@ -2266,6 +2298,10 @@
     WPPilotIE,
     WPPilotChannelsIE,
 )
+from .wrestleuniverse import (
+    WrestleUniverseVODIE,
+    WrestleUniversePPVIE,
+)
 from .wsj import (
     WSJIE,
     WSJArticleIE,

@@ -2290,7 +2326,10 @@
 from .xstream import XstreamIE
 from .xtube import XTubeUserIE, XTubeIE
 from .xuite import XuiteIE
-from .xvideos import XVideosIE
+from .xvideos import (
+    XVideosIE,
+    XVideosQuickiesIE
+)
 from .xxxymovies import XXXYMoviesIE
 from .yahoo import (
     YahooIE,

@@ -2314,6 +2353,7 @@
     ZenYandexChannelIE,
 )
 from .yapfiles import YapFilesIE
+from .yappy import YappyIE
 from .yesjapan import YesJapanIE
 from .yinyuetai import YinYueTaiIE
 from .yle_areena import YleAreenaIE
yt_dlp/extractor/abematv.py

@@ -156,7 +156,7 @@ class AbemaTVBaseIE(InfoExtractor):
     def _generate_aks(cls, deviceid):
         deviceid = deviceid.encode('utf-8')
         # add 1 hour and then drop minute and secs
-        ts_1hour = int((time_seconds(hours=9) // 3600 + 1) * 3600)
+        ts_1hour = int((time_seconds() // 3600 + 1) * 3600)
         time_struct = time.gmtime(ts_1hour)
         ts_1hour_str = str(ts_1hour).encode('utf-8')

@@ -190,6 +190,16 @@ def _get_device_token(self):
         if self._USERTOKEN:
             return self._USERTOKEN
 
+        username, _ = self._get_login_info()
+        AbemaTVBaseIE._USERTOKEN = username and self.cache.load(self._NETRC_MACHINE, username)
+        if AbemaTVBaseIE._USERTOKEN:
+            # try authentication with locally stored token
+            try:
+                self._get_media_token(True)
+                return
+            except ExtractorError as e:
+                self.report_warning(f'Failed to login with cached user token; obtaining a fresh one ({e})')
+
         AbemaTVBaseIE._DEVICE_ID = str(uuid.uuid4())
         aks = self._generate_aks(self._DEVICE_ID)
         user_data = self._download_json(

@@ -300,6 +310,11 @@ class AbemaTVIE(AbemaTVBaseIE):
     _TIMETABLE = None
 
     def _perform_login(self, username, password):
+        self._get_device_token()
+        if self.cache.load(self._NETRC_MACHINE, username) and self._get_media_token():
+            self.write_debug('Skipping logging in')
+            return
+
         if '@' in username:  # don't strictly check if it's email address or not
             ep, method = 'user/email', 'email'
         else:

@@ -319,6 +334,7 @@ def _perform_login(self, username, password):
 
         AbemaTVBaseIE._USERTOKEN = login_response['token']
         self._get_media_token(True)
+        self.cache.store(self._NETRC_MACHINE, username, AbemaTVBaseIE._USERTOKEN)
 
     def _real_extract(self, url):
         # starting download using infojson from this extractor is undefined behavior,

@@ -416,7 +432,7 @@ def _real_extract(self, url):
             f'https://api.abema.io/v1/video/programs/{video_id}', video_id,
             note='Checking playability',
             headers=headers)
-        ondemand_types = traverse_obj(api_response, ('terms', ..., 'onDemandType'), default=[])
+        ondemand_types = traverse_obj(api_response, ('terms', ..., 'onDemandType'))
         if 3 not in ondemand_types:
             # cannot acquire decryption key for these streams
             self.report_warning('This is a premium-only stream')

@@ -489,7 +505,7 @@ def _fetch_page(self, playlist_id, series_version, page):
         })
         yield from (
             self.url_result(f'https://abema.tv/video/episode/{x}')
-            for x in traverse_obj(programs, ('programs', ..., 'id'), default=[]))
+            for x in traverse_obj(programs, ('programs', ..., 'id')))
 
     def _entries(self, playlist_id, series_version):
         return OnDemandPagedList(
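The repeated `default=[]` removals in this merge lean on traverse_obj's branching semantics (as of the reworked implementation shipped in this period): a path containing `...` already evaluates to a list, and to an empty list when nothing matches, so the explicit default is redundant. For instance:

from yt_dlp.utils import traverse_obj

api_response = {'terms': [{'onDemandType': 3}, {'onDemandType': 1}]}
assert traverse_obj(api_response, ('terms', ..., 'onDemandType')) == [3, 1]
assert traverse_obj({}, ('terms', ..., 'onDemandType')) == []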
yt_dlp/extractor/amazonminitv.py

@@ -191,7 +191,7 @@ def _real_extract(self, url):
 class AmazonMiniTVSeasonIE(AmazonMiniTVBaseIE):
     IE_NAME = 'amazonminitv:season'
     _VALID_URL = r'amazonminitv:season:(?:amzn1\.dv\.gti\.)?(?P<id>[a-f0-9-]+)'
-    IE_DESC = 'Amazon MiniTV Series, "minitv:season:" prefix'
+    IE_DESC = 'Amazon MiniTV Season, "minitv:season:" prefix'
     _TESTS = [{
         'url': 'amazonminitv:season:amzn1.dv.gti.0aa996eb-6a1b-4886-a342-387fbd2f1db0',
         'playlist_mincount': 6,

@@ -250,6 +250,7 @@ def _real_extract(self, url):
 class AmazonMiniTVSeriesIE(AmazonMiniTVBaseIE):
     IE_NAME = 'amazonminitv:series'
     _VALID_URL = r'amazonminitv:series:(?:amzn1\.dv\.gti\.)?(?P<id>[a-f0-9-]+)'
+    IE_DESC = 'Amazon MiniTV Series, "minitv:series:" prefix'
     _TESTS = [{
         'url': 'amazonminitv:series:amzn1.dv.gti.56521d46-b040-4fd5-872e-3e70476a04b0',
         'playlist_mincount': 3,
yt_dlp/extractor/americastestkitchen.py

@@ -11,7 +11,7 @@
 class AmericasTestKitchenIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?americastestkitchen\.com/(?:cooks(?:country|illustrated)/)?(?P<resource_type>episode|videos)/(?P<id>\d+)'
+    _VALID_URL = r'https?://(?:www\.)?(?:americastestkitchen|cooks(?:country|illustrated))\.com/(?:cooks(?:country|illustrated)/)?(?P<resource_type>episode|videos)/(?P<id>\d+)'
     _TESTS = [{
         'url': 'https://www.americastestkitchen.com/episode/582-weeknight-japanese-suppers',
         'md5': 'b861c3e365ac38ad319cfd509c30577f',

@@ -72,6 +72,12 @@ class AmericasTestKitchenIE(InfoExtractor):
     }, {
         'url': 'https://www.americastestkitchen.com/cooksillustrated/videos/4478-beef-wellington',
         'only_matching': True,
+    }, {
+        'url': 'https://www.cookscountry.com/episode/564-when-only-chocolate-will-do',
+        'only_matching': True,
+    }, {
+        'url': 'https://www.cooksillustrated.com/videos/4478-beef-wellington',
+        'only_matching': True,
     }]
 
     def _real_extract(self, url):

@@ -100,7 +106,7 @@ def _real_extract(self, url):
 class AmericasTestKitchenSeasonIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?americastestkitchen\.com(?P<show>/cookscountry)?/episodes/browse/season_(?P<id>\d+)'
+    _VALID_URL = r'https?://(?:www\.)?(?P<show>americastestkitchen|(?P<cooks>cooks(?:country|illustrated)))\.com(?:(?:/(?P<show2>cooks(?:country|illustrated)))?(?:/?$|(?<!ated)(?<!ated\.com)/episodes/browse/season_(?P<season>\d+)))'
     _TESTS = [{
         # ATK Season
         'url': 'https://www.americastestkitchen.com/episodes/browse/season_1',

@@ -117,29 +123,73 @@ class AmericasTestKitchenSeasonIE(InfoExtractor):
             'title': 'Season 12',
         },
         'playlist_count': 13,
+    }, {
+        # America's Test Kitchen Series
+        'url': 'https://www.americastestkitchen.com/',
+        'info_dict': {
+            'id': 'americastestkitchen',
+            'title': 'America\'s Test Kitchen',
+        },
+        'playlist_count': 558,
+    }, {
+        # Cooks Country Series
+        'url': 'https://www.americastestkitchen.com/cookscountry',
+        'info_dict': {
+            'id': 'cookscountry',
+            'title': 'Cook\'s Country',
+        },
+        'playlist_count': 199,
+    }, {
+        'url': 'https://www.americastestkitchen.com/cookscountry/',
+        'only_matching': True,
+    }, {
+        'url': 'https://www.cookscountry.com/episodes/browse/season_12',
+        'only_matching': True,
+    }, {
+        'url': 'https://www.cookscountry.com',
+        'only_matching': True,
+    }, {
+        'url': 'https://www.americastestkitchen.com/cooksillustrated/',
+        'only_matching': True,
+    }, {
+        'url': 'https://www.cooksillustrated.com',
+        'only_matching': True,
     }]
 
     def _real_extract(self, url):
-        show_path, season_number = self._match_valid_url(url).group('show', 'id')
-        season_number = int(season_number)
+        season_number, show1, show = self._match_valid_url(url).group('season', 'show', 'show2')
+        show_path = ('/' + show) if show else ''
+        show = show or show1
+        season_number = int_or_none(season_number)
 
-        slug = 'cco' if show_path == '/cookscountry' else 'atk'
+        slug, title = {
+            'americastestkitchen': ('atk', 'America\'s Test Kitchen'),
+            'cookscountry': ('cco', 'Cook\'s Country'),
+            'cooksillustrated': ('cio', 'Cook\'s Illustrated'),
+        }[show]
 
-        season = 'Season %d' % season_number
+        facet_filters = [
+            'search_document_klass:episode',
+            'search_show_slug:' + slug,
+        ]
+
+        if season_number:
+            playlist_id = 'season_%d' % season_number
+            playlist_title = 'Season %d' % season_number
+            facet_filters.append('search_season_list:' + playlist_title)
+        else:
+            playlist_id = show
+            playlist_title = title
 
         season_search = self._download_json(
             'https://y1fnzxui30-dsn.algolia.net/1/indexes/everest_search_%s_season_desc_production' % slug,
-            season, headers={
+            playlist_id, headers={
                 'Origin': 'https://www.americastestkitchen.com',
                 'X-Algolia-API-Key': '8d504d0099ed27c1b73708d22871d805',
                 'X-Algolia-Application-Id': 'Y1FNZXUI30',
             }, query={
-                'facetFilters': json.dumps([
-                    'search_season_list:' + season,
-                    'search_document_klass:episode',
-                    'search_show_slug:' + slug,
-                ]),
-                'attributesToRetrieve': 'description,search_%s_episode_number,search_document_date,search_url,title' % slug,
+                'facetFilters': json.dumps(facet_filters),
+                'attributesToRetrieve': 'description,search_%s_episode_number,search_document_date,search_url,title,search_atk_episode_season' % slug,
                 'attributesToHighlight': '',
                 'hitsPerPage': 1000,
             })

@@ -162,4 +212,4 @@ def entries():
         }
 
     return self.playlist_result(
-        entries(), 'season_%d' % season_number, season)
+        entries(), playlist_id, playlist_title)
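For clarity, the Algolia request built above now assembles its facet filters incrementally: always episode documents for one show slug, plus a season facet only when a season URL was matched. With illustrative values:

import json

facet_filters = ['search_document_klass:episode', 'search_show_slug:cco']
season_number = 12
if season_number:
    facet_filters.append('search_season_list:Season %d' % season_number)
print(json.dumps(facet_filters))
# ["search_document_klass:episode", "search_show_slug:cco", "search_season_list:Season 12"]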
yt_dlp/extractor/anchorfm.py (new file, 98 lines)

@@ -0,0 +1,98 @@
+from .common import InfoExtractor
+from ..utils import (
+    clean_html,
+    float_or_none,
+    int_or_none,
+    str_or_none,
+    traverse_obj,
+    unified_timestamp
+)
+
+
+class AnchorFMEpisodeIE(InfoExtractor):
+    _VALID_URL = r'https?://anchor\.fm/(?P<channel_name>\w+)/(?:embed/)?episodes/[\w-]+-(?P<episode_id>\w+)'
+    _EMBED_REGEX = [rf'<iframe[^>]+\bsrc=[\'"](?P<url>{_VALID_URL})']
+    _TESTS = [{
+        'url': 'https://anchor.fm/lovelyti/episodes/Chrisean-Rock-takes-to-twitter-to-announce-shes-pregnant--Blueface-denies-he-is-the-father-e1tpt3d',
+        'info_dict': {
+            'id': 'e1tpt3d',
+            'ext': 'mp3',
+            'title': ' Chrisean Rock takes to twitter to announce she\'s pregnant, Blueface denies he is the father!',
+            'description': 'md5:207d167de3e28ceb4ddc1ebf5a30044c',
+            'thumbnail': 'https://s3-us-west-2.amazonaws.com/anchor-generated-image-bank/production/podcast_uploaded_nologo/1034827/1034827-1658438968460-5f3bfdf3601e8.jpg',
+            'duration': 624.718,
+            'uploader': 'Lovelyti ',
+            'uploader_id': '991541',
+            'channel': 'lovelyti',
+            'modified_date': '20230121',
+            'modified_timestamp': 1674285178,
+            'release_date': '20230121',
+            'release_timestamp': 1674285179,
+            'episode_id': 'e1tpt3d',
+        }
+    }, {
+        # embed url
+        'url': 'https://anchor.fm/apakatatempo/embed/episodes/S2E75-Perang-Bintang-di-Balik-Kasus-Ferdy-Sambo-dan-Ismail-Bolong-e1shjqd',
+        'info_dict': {
+            'id': 'e1shjqd',
+            'ext': 'mp3',
+            'title': 'S2E75 Perang Bintang di Balik Kasus Ferdy Sambo dan Ismail Bolong',
+            'description': 'md5:9e95ad9293bf00178bf8d33e9cb92c41',
+            'duration': 1042.008,
+            'thumbnail': 'https://s3-us-west-2.amazonaws.com/anchor-generated-image-bank/production/podcast_uploaded_episode400/2627805/2627805-1671590688729-4db3882ac9e4b.jpg',
+            'release_date': '20221221',
+            'release_timestamp': 1671595916,
+            'modified_date': '20221221',
+            'modified_timestamp': 1671590834,
+            'channel': 'apakatatempo',
+            'uploader': 'Podcast Tempo',
+            'uploader_id': '2585461',
+            'season': 'Season 2',
+            'season_number': 2,
+            'episode_id': 'e1shjqd',
+        }
+    }]
+
+    _WEBPAGE_TESTS = [{
+        'url': 'https://podcast.tempo.co/podcast/192/perang-bintang-di-balik-kasus-ferdy-sambo-dan-ismail-bolong',
+        'info_dict': {
+            'id': 'e1shjqd',
+            'ext': 'mp3',
+            'release_date': '20221221',
+            'duration': 1042.008,
+            'season': 'Season 2',
+            'modified_timestamp': 1671590834,
+            'uploader_id': '2585461',
+            'modified_date': '20221221',
+            'description': 'md5:9e95ad9293bf00178bf8d33e9cb92c41',
+            'season_number': 2,
+            'title': 'S2E75 Perang Bintang di Balik Kasus Ferdy Sambo dan Ismail Bolong',
+            'release_timestamp': 1671595916,
+            'episode_id': 'e1shjqd',
+            'thumbnail': 'https://s3-us-west-2.amazonaws.com/anchor-generated-image-bank/production/podcast_uploaded_episode400/2627805/2627805-1671590688729-4db3882ac9e4b.jpg',
+            'uploader': 'Podcast Tempo',
+            'channel': 'apakatatempo',
+        }
+    }]
+
+    def _real_extract(self, url):
+        channel_name, episode_id = self._match_valid_url(url).group('channel_name', 'episode_id')
+        api_data = self._download_json(f'https://anchor.fm/api/v3/episodes/{episode_id}', episode_id)
+
+        return {
+            'id': episode_id,
+            'title': traverse_obj(api_data, ('episode', 'title')),
+            'url': traverse_obj(api_data, ('episode', 'episodeEnclosureUrl'), ('episodeAudios', 0, 'url')),
+            'ext': 'mp3',
+            'vcodec': 'none',
+            'thumbnail': traverse_obj(api_data, ('episode', 'episodeImage')),
+            'description': clean_html(traverse_obj(api_data, ('episode', ('description', 'descriptionPreview')), get_all=False)),
+            'duration': float_or_none(traverse_obj(api_data, ('episode', 'duration')), 1000),
+            'modified_timestamp': unified_timestamp(traverse_obj(api_data, ('episode', 'modified'))),
+            'release_timestamp': int_or_none(traverse_obj(api_data, ('episode', 'publishOnUnixTimestamp'))),
+            'episode_id': episode_id,
+            'uploader': traverse_obj(api_data, ('creator', 'name')),
+            'uploader_id': str_or_none(traverse_obj(api_data, ('creator', 'userId'))),
+            'season_number': int_or_none(traverse_obj(api_data, ('episode', 'podcastSeasonNumber'))),
+            'channel': channel_name or traverse_obj(api_data, ('creator', 'vanitySlug')),
+        }
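A note on one pattern in the new extractor: a nested tuple in a traverse_obj path tries each alternative key, and get_all=False returns the first hit rather than a list. So the description lookup above falls back to descriptionPreview when the full description is absent:

from yt_dlp.utils import traverse_obj

api_data = {'episode': {'descriptionPreview': 'short text'}}
assert traverse_obj(api_data, ('episode', ('description', 'descriptionPreview')), get_all=False) == 'short text'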
@ -1,8 +1,10 @@
|
||||||
import json
|
import json
|
||||||
import re
|
import re
|
||||||
|
import urllib.error
|
||||||
import urllib.parse
|
import urllib.parse
|
||||||
|
|
||||||
from .common import InfoExtractor
|
from .common import InfoExtractor
|
||||||
|
from .naver import NaverBaseIE
|
||||||
from .youtube import YoutubeBaseInfoExtractor, YoutubeIE
|
from .youtube import YoutubeBaseInfoExtractor, YoutubeIE
|
||||||
from ..compat import compat_HTTPError, compat_urllib_parse_unquote
|
from ..compat import compat_HTTPError, compat_urllib_parse_unquote
|
||||||
from ..utils import (
|
from ..utils import (
|
||||||
|
@ -945,3 +947,237 @@ def _real_extract(self, url):
|
||||||
if not info.get('title'):
|
if not info.get('title'):
|
||||||
info['title'] = video_id
|
info['title'] = video_id
|
||||||
return info
|
return info
|
||||||
|
|
||||||
|
|
||||||
|
class VLiveWebArchiveIE(InfoExtractor):
|
||||||
|
IE_NAME = 'web.archive:vlive'
|
||||||
|
IE_DESC = 'web.archive.org saved vlive videos'
|
||||||
|
_VALID_URL = r'''(?x)
|
||||||
|
(?:https?://)?web\.archive\.org/
|
||||||
|
(?:web/)?(?:(?P<date>[0-9]{14})?[0-9A-Za-z_*]*/)? # /web and the version index is optional
|
||||||
|
(?:https?(?::|%3[Aa])//)?(?:
|
||||||
|
(?:(?:www|m)\.)?vlive\.tv(?::(?:80|443))?/(?:video|embed)/(?P<id>[0-9]+) # VLive URL
|
||||||
|
)
|
||||||
|
'''
|
||||||
|
_TESTS = [{
|
||||||
|
'url': 'https://web.archive.org/web/20221221144331/http://www.vlive.tv/video/1326',
|
||||||
|
'md5': 'cc7314812855ce56de70a06a27314983',
|
||||||
|
'info_dict': {
|
||||||
|
'id': '1326',
|
||||||
|
'ext': 'mp4',
|
||||||
|
'title': "Girl's Day's Broadcast",
|
||||||
|
'creator': "Girl's Day",
|
||||||
|
'view_count': int,
|
||||||
|
'uploader_id': 'muploader_a',
|
||||||
|
'uploader_url': None,
|
||||||
|
'uploader': None,
|
||||||
|
'upload_date': '20150817',
|
||||||
|
'thumbnail': r're:^https?://.*\.(?:jpg|png)$',
|
||||||
|
'timestamp': 1439816449,
|
||||||
|
'like_count': int,
|
||||||
|
'channel': 'Girl\'s Day',
|
||||||
|
'channel_id': 'FDF27',
|
||||||
|
'comment_count': int,
|
||||||
|
'release_timestamp': 1439818140,
|
||||||
|
'release_date': '20150817',
|
||||||
|
'duration': 1014,
|
||||||
|
},
|
||||||
|
'params': {
|
||||||
|
'skip_download': True,
|
||||||
|
},
|
||||||
|
}, {
|
||||||
|
'url': 'https://web.archive.org/web/20221221182103/http://www.vlive.tv/video/16937',
|
||||||
|
'info_dict': {
|
||||||
|
'id': '16937',
|
||||||
|
'ext': 'mp4',
|
||||||
|
'title': '첸백시 걍방',
|
||||||
|
'creator': 'EXO',
|
||||||
|
'view_count': int,
|
||||||
|
'subtitles': 'mincount:12',
|
||||||
|
'uploader_id': 'muploader_j',
|
||||||
|
'uploader_url': 'http://vlive.tv',
|
||||||
|
'uploader': None,
|
||||||
|
'upload_date': '20161112',
|
||||||
|
'thumbnail': r're:^https?://.*\.(?:jpg|png)$',
|
||||||
|
'timestamp': 1478923074,
|
||||||
|
'like_count': int,
|
||||||
|
'channel': 'EXO',
|
||||||
|
'channel_id': 'F94BD',
|
||||||
|
'comment_count': int,
|
||||||
|
'release_timestamp': 1478924280,
|
||||||
|
'release_date': '20161112',
|
||||||
|
'duration': 906,
|
||||||
|
},
|
||||||
|
'params': {
|
||||||
|
'skip_download': True,
|
||||||
|
},
|
||||||
|
}, {
|
||||||
|
'url': 'https://web.archive.org/web/20221127190050/http://www.vlive.tv/video/101870',
|
||||||
|
'info_dict': {
|
||||||
|
'id': '101870',
|
||||||
|
'ext': 'mp4',
|
||||||
|
'title': '[ⓓ xV] “레벨이들 매력에 반해? 안 반해?” 움직이는 HD 포토 (레드벨벳:Red Velvet)',
|
||||||
|
'creator': 'Dispatch',
|
||||||
|
'view_count': int,
|
||||||
|
'subtitles': 'mincount:6',
|
||||||
|
'uploader_id': 'V__FRA08071',
|
||||||
|
'uploader_url': 'http://vlive.tv',
|
||||||
|
'uploader': None,
|
||||||
|
'upload_date': '20181130',
|
||||||
|
'thumbnail': r're:^https?://.*\.(?:jpg|png)$',
|
||||||
|
'timestamp': 1543601327,
|
||||||
|
'like_count': int,
|
||||||
|
'channel': 'Dispatch',
|
||||||
|
'channel_id': 'C796F3',
|
||||||
|
'comment_count': int,
|
||||||
|
'release_timestamp': 1543601040,
|
||||||
|
'release_date': '20181130',
|
||||||
|
'duration': 279,
|
||||||
|
},
|
||||||
|
'params': {
|
||||||
|
'skip_download': True,
|
||||||
|
},
|
||||||
|
}]
|
||||||
|
|
||||||
|
# The wayback machine has special timestamp and "mode" values:
|
||||||
|
# timestamp:
|
||||||
|
# 1 = the first capture
|
||||||
|
# 2 = the last capture
|
||||||
|
# mode:
|
||||||
|
# id_ = Identity - perform no alterations of the original resource, return it as it was archived.
|
||||||
|
_WAYBACK_BASE_URL = 'https://web.archive.org/web/2id_/'
|
||||||
|
|
||||||
|
    def _download_archived_page(self, url, video_id, *, timestamp='2', **kwargs):
        for retry in self.RetryManager():
            try:
                return self._download_webpage(f'https://web.archive.org/web/{timestamp}id_/{url}', video_id, **kwargs)
            except ExtractorError as e:
                if isinstance(e.cause, urllib.error.HTTPError) and e.cause.code == 404:
                    raise ExtractorError('Page was not archived', expected=True)
                retry.error = e
                continue

    def _download_archived_json(self, url, video_id, **kwargs):
        page = self._download_archived_page(url, video_id, **kwargs)
        if not page:
            raise ExtractorError('Page was not archived', expected=True)
        else:
            return self._parse_json(page, video_id)
    def _extract_formats_from_m3u8(self, m3u8_url, params, video_id):
        m3u8_doc = self._download_archived_page(m3u8_url, video_id, note='Downloading m3u8', query=params, fatal=False)
        if not m3u8_doc:
            return

        # Segment URLs in the m3u8 document need to be rewritten to the archive domain
        m3u8_doc = m3u8_doc.splitlines()
        url_base = m3u8_url.rsplit('/', 1)[0]
        first_segment = None
        for i, line in enumerate(m3u8_doc):
            if not line.startswith('#'):
                m3u8_doc[i] = f'{self._WAYBACK_BASE_URL}{url_base}/{line}?{urllib.parse.urlencode(params)}'
                first_segment = first_segment or m3u8_doc[i]

        # Segments may not have been archived. See https://web.archive.org/web/20221127190050/http://www.vlive.tv/video/101870
        urlh = self._request_webpage(HEADRequest(first_segment), video_id, errnote=False,
                                     fatal=False, note='Check first segment availability')
        if urlh:
            formats, subtitles = self._parse_m3u8_formats_and_subtitles('\n'.join(m3u8_doc), ext='mp4', video_id=video_id)
            if subtitles:
                self._report_ignoring_subs('m3u8')
            return formats
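The rewrite loop above can be hard to picture in isolation; here is a self-contained sketch of the same idea (the function name and inputs are illustrative, not part of the extractor):

import urllib.parse

def rewrite_playlist_to_wayback(m3u8_doc, m3u8_url, params,
                                wayback_base='https://web.archive.org/web/2id_/'):
    # Every non-comment line of a media playlist is a segment reference;
    # point each one back at the Wayback Machine, carrying the query params
    url_base = m3u8_url.rsplit('/', 1)[0]
    return '\n'.join(
        line if line.startswith('#')
        else f'{wayback_base}{url_base}/{line}?{urllib.parse.urlencode(params)}'
        for line in m3u8_doc.splitlines())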
    # Closely follows the logic of the ArchiveTeam grab script
    # See: https://github.com/ArchiveTeam/vlive-grab/blob/master/vlive.lua
    def _real_extract(self, url):
        video_id, url_date = self._match_valid_url(url).group('id', 'date')

        webpage = self._download_archived_page(f'https://www.vlive.tv/video/{video_id}', video_id, timestamp=url_date)

        player_info = self._search_json(r'__PRELOADED_STATE__\s*=', webpage, 'player info', video_id)
        user_country = traverse_obj(player_info, ('common', 'userCountry'))

        main_script_url = self._search_regex(r'<script\s+src="([^"]+/js/main\.[^"]+\.js)"', webpage, 'main script url')
        main_script = self._download_archived_page(main_script_url, video_id, note='Downloading main script')
        app_id = self._search_regex(r'appId\s*=\s*"([^"]+)"', main_script, 'app id')

        inkey = self._download_archived_json(
            f'https://www.vlive.tv/globalv-web/vam-web/video/v1.0/vod/{video_id}/inkey', video_id, note='Fetching inkey', query={
                'appId': app_id,
                'platformType': 'PC',
                'gcc': user_country,
                'locale': 'en_US',
            }, fatal=False)

        vod_id = traverse_obj(player_info, ('postDetail', 'post', 'officialVideo', 'vodId'))

        vod_data = self._download_archived_json(
            f'https://apis.naver.com/rmcnmv/rmcnmv/vod/play/v2.0/{vod_id}', video_id, note='Fetching vod data', query={
                'key': inkey.get('inkey'),
                'pid': 'rmcPlayer_16692457559726800',  # partially unix time and partially random. Fixed value used by the ArchiveTeam project
                'sid': '2024',
                'ver': '2.0',
                'devt': 'html5_pc',
                'doct': 'json',
                'ptc': 'https',
                'sptc': 'https',
                'cpt': 'vtt',
                'ctls': '%7B%22visible%22%3A%7B%22fullscreen%22%3Atrue%2C%22logo%22%3Afalse%2C%22playbackRate%22%3Afalse%2C%22scrap%22%3Afalse%2C%22playCount%22%3Atrue%2C%22commentCount%22%3Atrue%2C%22title%22%3Atrue%2C%22writer%22%3Atrue%2C%22expand%22%3Afalse%2C%22subtitles%22%3Atrue%2C%22thumbnails%22%3Atrue%2C%22quality%22%3Atrue%2C%22setting%22%3Atrue%2C%22script%22%3Afalse%2C%22logoDimmed%22%3Atrue%2C%22badge%22%3Atrue%2C%22seekingTime%22%3Atrue%2C%22muted%22%3Atrue%2C%22muteButton%22%3Afalse%2C%22viewerNotice%22%3Afalse%2C%22linkCount%22%3Afalse%2C%22createTime%22%3Afalse%2C%22thumbnail%22%3Atrue%7D%2C%22clicked%22%3A%7B%22expand%22%3Afalse%2C%22subtitles%22%3Afalse%7D%7D',
                'pv': '4.26.9',
                'dr': '1920x1080',
                'cpl': 'en_US',
                'lc': 'en_US',
                'adi': '%5B%7B%22type%22%3A%22pre%22%2C%22exposure%22%3Afalse%2C%22replayExposure%22%3Afalse%7D%5D',
                'adu': '%2F',
                'videoId': vod_id,
                'cc': user_country,
            })

        formats = []

        streams = traverse_obj(vod_data, ('streams', ...))
        if len(streams) > 1:
            self.report_warning('Multiple streams found. Only the first stream will be downloaded.')
        stream = streams[0]

        max_stream = max(
            stream.get('videos') or [],
            key=lambda v: traverse_obj(v, ('bitrate', 'video'), default=0), default=None)
        if max_stream is not None:
            params = {arg.get('name'): arg.get('value') for arg in stream.get('keys', []) if arg.get('type') == 'param'}
            formats = self._extract_formats_from_m3u8(max_stream.get('source'), params, video_id) or []

        # For parts of the project, MP4 files were archived
        max_video = max(
            traverse_obj(vod_data, ('videos', 'list', ...)),
            key=lambda v: traverse_obj(v, ('bitrate', 'video'), default=0), default=None)
        if max_video is not None:
            video_url = self._WAYBACK_BASE_URL + max_video.get('source')
            urlh = self._request_webpage(HEADRequest(video_url), video_id, errnote=False,
                                         fatal=False, note='Check video availability')
            if urlh:
                formats.append({'url': video_url})

        return {
            'id': video_id,
            'formats': formats,
            **traverse_obj(player_info, ('postDetail', 'post', {
                'title': ('officialVideo', 'title', {str}),
                'creator': ('author', 'nickname', {str}),
                'channel': ('channel', 'channelName', {str}),
                'channel_id': ('channel', 'channelCode', {str}),
                'duration': ('officialVideo', 'playTime', {int_or_none}),
                'view_count': ('officialVideo', 'playCount', {int_or_none}),
                'like_count': ('officialVideo', 'likeCount', {int_or_none}),
                'comment_count': ('officialVideo', 'commentCount', {int_or_none}),
                'timestamp': ('officialVideo', 'createdAt', {lambda x: int_or_none(x, scale=1000)}),
                'release_timestamp': ('officialVideo', 'willStartAt', {lambda x: int_or_none(x, scale=1000)}),
            })),
            **traverse_obj(vod_data, ('meta', {
                'uploader_id': ('user', 'id', {str}),
                'uploader': ('user', 'name', {str}),
                'uploader_url': ('user', 'url', {url_or_none}),
                'thumbnail': ('cover', 'source', {url_or_none}),
            }), expected_type=lambda x: x or None),
            **NaverBaseIE.process_subtitles(vod_data, lambda x: [self._WAYBACK_BASE_URL + x]),
        }
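The return value above leans heavily on yt-dlp's traverse_obj path maps, where a dict maps output fields to traversal paths and a {callable} leaf transforms the matched value. A toy illustration of that style, with made-up sample data and assuming yt-dlp is importable:

from yt_dlp.utils import int_or_none, traverse_obj

# Sample payload shaped like the player_info above (values are invented)
post = {'officialVideo': {'title': 'Broadcast', 'playTime': '1014', 'createdAt': 1439816449000}}
print(traverse_obj(post, {
    'title': ('officialVideo', 'title', {str}),
    'duration': ('officialVideo', 'playTime', {int_or_none}),
    'timestamp': ('officialVideo', 'createdAt', {lambda x: int_or_none(x, scale=1000)}),
}))
# -> {'title': 'Broadcast', 'duration': 1014, 'timestamp': 1439816449}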
yt_dlp/extractor/bfmtv.py
@@ -5,7 +5,7 @@
 class BFMTVBaseIE(InfoExtractor):
-    _VALID_URL_BASE = r'https?://(?:www\.)?bfmtv\.com/'
+    _VALID_URL_BASE = r'https?://(?:www\.|rmc\.)?bfmtv\.com/'
     _VALID_URL_TMPL = _VALID_URL_BASE + r'(?:[^/]+/)*[^/?&#]+_%s[A-Z]-(?P<id>\d{12})\.html'
     _VIDEO_BLOCK_REGEX = r'(<div[^>]+class="video_block"[^>]*>)'
     BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/%s_default/index.html?videoId=%s'
@@ -31,6 +31,9 @@ class BFMTVIE(BFMTVBaseIE):
         'uploader_id': '876450610001',
         'upload_date': '20201002',
         'timestamp': 1601629620,
+        'duration': 44.757,
+        'tags': ['bfmactu', 'politique'],
+        'thumbnail': 'https://cf-images.eu-west-1.prod.boltdns.net/v1/static/876450610001/5041f4c1-bc48-4af8-a256-1b8300ad8ef0/cf2f9114-e8e2-4494-82b4-ab794ea4bc7d/1920x1080/match/image.jpg',
     },
 }]

@@ -81,6 +84,20 @@ class BFMTVArticleIE(BFMTVBaseIE):
     }, {
         'url': 'https://www.bfmtv.com/sante/covid-19-oui-le-vaccin-de-pfizer-distribue-en-france-a-bien-ete-teste-sur-des-personnes-agees_AN-202101060275.html',
         'only_matching': True,
+    }, {
+        'url': 'https://rmc.bfmtv.com/actualites/societe/transports/ce-n-est-plus-tout-rentable-le-bioethanol-e85-depasse-1eu-le-litre-des-automobilistes-regrettent_AV-202301100268.html',
+        'info_dict': {
+            'id': '6318445464112',
+            'ext': 'mp4',
+            'title': 'Le plein de bioéthanol fait de plus en plus mal à la pompe',
+            'description': None,
+            'uploader_id': '876630703001',
+            'upload_date': '20230110',
+            'timestamp': 1673341692,
+            'duration': 109.269,
+            'tags': ['rmc', 'show', 'apolline de malherbe', 'info', 'talk', 'matinale', 'radio'],
+            'thumbnail': 'https://cf-images.eu-west-1.prod.boltdns.net/v1/static/876630703001/5bef74b8-9d5e-4480-a21f-60c2e2480c46/96c88b74-f9db-45e1-8040-e199c5da216c/1920x1080/match/image.jpg'
+        }
     }]

     def _real_extract(self, url):
yt_dlp/extractor/bilibili.py
@@ -6,6 +6,7 @@
 import urllib.parse

 from .common import InfoExtractor, SearchInfoExtractor
+from ..dependencies import Cryptodome
 from ..utils import (
     ExtractorError,
     GeoRestrictedError,
@@ -80,7 +81,7 @@ def json2srt(self, json_data):
                          f'{line["content"]}\n\n')
         return srt_data

-    def _get_subtitles(self, video_id, initial_state, cid):
+    def _get_subtitles(self, video_id, aid, cid):
         subtitles = {
             'danmaku': [{
                 'ext': 'xml',
@@ -88,7 +89,8 @@ def _get_subtitles(self, video_id, initial_state, cid):
             }]
         }

-        for s in traverse_obj(initial_state, ('videoData', 'subtitle', 'list')) or []:
+        video_info_json = self._download_json(f'https://api.bilibili.com/x/player/v2?aid={aid}&cid={cid}', video_id)
+        for s in traverse_obj(video_info_json, ('data', 'subtitle', 'subtitles', ...)):
             subtitles.setdefault(s['lan'], []).append({
                 'ext': 'srt',
                 'data': self.json2srt(self._download_json(s['subtitle_url'], video_id))
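A hedged, standalone sketch of the new subtitle flow, assuming the player/v2 endpoint behaves as the diff suggests (a real request may additionally need Referer or cookie headers):

import json
import urllib.request

def fetch_subtitle_list(aid, cid):
    # Each entry in data.subtitle.subtitles carries a language code ('lan')
    # and a 'subtitle_url' pointing at the JSON subtitle document
    url = f'https://api.bilibili.com/x/player/v2?aid={aid}&cid={cid}'
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return [(s.get('lan'), s.get('subtitle_url'))
            for s in ((data.get('data') or {}).get('subtitle') or {}).get('subtitles') or []]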
@@ -330,7 +332,7 @@ def _real_extract(self, url):
             'timestamp': traverse_obj(initial_state, ('videoData', 'pubdate')),
             'duration': float_or_none(play_info.get('timelength'), scale=1000),
             'chapters': self._get_chapters(aid, cid),
-            'subtitles': self.extract_subtitles(video_id, initial_state, cid),
+            'subtitles': self.extract_subtitles(video_id, aid, cid),
             '__post_extractor': self.extract_comments(aid),
             'http_headers': {'Referer': url},
         }
@@ -893,22 +895,15 @@ def _parse_video_metadata(self, video_data):
         }

     def _perform_login(self, username, password):
-        try:
-            from Cryptodome.PublicKey import RSA
-            from Cryptodome.Cipher import PKCS1_v1_5
-        except ImportError:
-            try:
-                from Crypto.PublicKey import RSA
-                from Crypto.Cipher import PKCS1_v1_5
-            except ImportError:
-                raise ExtractorError('pycryptodomex not found. Please install', expected=True)
+        if not Cryptodome.RSA:
+            raise ExtractorError('pycryptodomex not found. Please install', expected=True)

         key_data = self._download_json(
             'https://passport.bilibili.tv/x/intl/passport-login/web/key?lang=en-US', None,
             note='Downloading login key', errnote='Unable to download login key')['data']

-        public_key = RSA.importKey(key_data['key'])
-        password_hash = PKCS1_v1_5.new(public_key).encrypt((key_data['hash'] + password).encode('utf-8'))
+        public_key = Cryptodome.RSA.importKey(key_data['key'])
+        password_hash = Cryptodome.PKCS1_v1_5.new(public_key).encrypt((key_data['hash'] + password).encode('utf-8'))
         login_post = self._download_json(
             'https://passport.bilibili.tv/x/intl/passport-login/web/login/password?lang=en-US', None, data=urlencode_postdata({
                 'username': username,
@@ -939,6 +934,19 @@ class BiliIntlIE(BiliIntlBaseIE):
             'episode': 'Episode 2',
             'timestamp': 1602259500,
             'description': 'md5:297b5a17155eb645e14a14b385ab547e',
+            'chapters': [{
+                'start_time': 0,
+                'end_time': 76.242,
+                'title': '<Untitled Chapter 1>'
+            }, {
+                'start_time': 76.242,
+                'end_time': 161.161,
+                'title': 'Intro'
+            }, {
+                'start_time': 1325.742,
+                'end_time': 1403.903,
+                'title': 'Outro'
+            }],
         }
     }, {
         # Non-Bstation page
@@ -953,6 +961,19 @@ class BiliIntlIE(BiliIntlBaseIE):
             'episode': 'Episode 3',
             'upload_date': '20211219',
             'timestamp': 1639928700,
+            'chapters': [{
+                'start_time': 0,
+                'end_time': 88.0,
+                'title': '<Untitled Chapter 1>'
+            }, {
+                'start_time': 88.0,
+                'end_time': 156.0,
+                'title': 'Intro'
+            }, {
+                'start_time': 1173.0,
+                'end_time': 1259.535,
+                'title': 'Outro'
+            }],
         }
     }, {
         # Subtitle with empty content
@@ -976,6 +997,20 @@ class BiliIntlIE(BiliIntlBaseIE):
             'upload_date': '20221212',
             'title': 'Kimetsu no Yaiba Season 3 Official Trailer - Bstation',
         }
+    }, {
+        # episode id without intro and outro
+        'url': 'https://www.bilibili.tv/en/play/1048837/11246489',
+        'info_dict': {
+            'id': '11246489',
+            'ext': 'mp4',
+            'title': 'E1 - Operation \'Strix\' <Owl>',
+            'description': 'md5:b4434eb1a9a97ad2bccb779514b89f17',
+            'timestamp': 1649516400,
+            'thumbnail': 'https://pic.bstarstatic.com/ogv/62cb1de23ada17fb70fbe7bdd6ff29c29da02a64.png',
+            'episode': 'Episode 1',
+            'episode_number': 1,
+            'upload_date': '20220409',
+        },
     }, {
         'url': 'https://www.biliintl.com/en/play/34613/341736',
         'only_matching': True,
@@ -1028,12 +1063,31 @@ def _extract_video_metadata(self, url, video_id, season_id):
     def _real_extract(self, url):
         season_id, ep_id, aid = self._match_valid_url(url).group('season_id', 'ep_id', 'aid')
         video_id = ep_id or aid
+        chapters = None
+
+        if ep_id:
+            intro_ending_json = self._call_api(
+                f'/web/v2/ogv/play/episode?episode_id={ep_id}&platform=web',
+                video_id, fatal=False) or {}
+            if intro_ending_json.get('skip'):
+                # FIXME: start and end times seem to be off by a few seconds, even though they are correct per ogv.*.js
+                # ref: https://p.bstarstatic.com/fe-static/bstar-web-new/assets/ogv.2b147442.js
+                chapters = [{
+                    'start_time': float_or_none(traverse_obj(intro_ending_json, ('skip', 'opening_start_time')), 1000),
+                    'end_time': float_or_none(traverse_obj(intro_ending_json, ('skip', 'opening_end_time')), 1000),
+                    'title': 'Intro'
+                }, {
+                    'start_time': float_or_none(traverse_obj(intro_ending_json, ('skip', 'ending_start_time')), 1000),
+                    'end_time': float_or_none(traverse_obj(intro_ending_json, ('skip', 'ending_end_time')), 1000),
+                    'title': 'Outro'
+                }]

         return {
             'id': video_id,
             **self._extract_video_metadata(url, video_id, season_id),
             'formats': self._get_formats(ep_id=ep_id, aid=aid),
             'subtitles': self.extract_subtitles(ep_id=ep_id, aid=aid),
+            'chapters': chapters
         }
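For clarity, here is the chapter construction above reduced to a standalone function with illustrative input (millisecond offsets, as the float_or_none(..., 1000) scaling implies):

def skip_to_chapters(skip):
    # The 'skip' object carries millisecond offsets for opening and ending
    def sec(key):
        v = skip.get(key)
        return v / 1000 if v is not None else None
    return [
        {'start_time': sec('opening_start_time'), 'end_time': sec('opening_end_time'), 'title': 'Intro'},
        {'start_time': sec('ending_start_time'), 'end_time': sec('ending_end_time'), 'title': 'Outro'},
    ]

print(skip_to_chapters({'opening_start_time': 88000, 'opening_end_time': 156000,
                        'ending_start_time': 1173000, 'ending_end_time': 1259535}))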
yt_dlp/extractor/blerp.py (new file, 167 lines)
@@ -0,0 +1,167 @@
import json

from .common import InfoExtractor
from ..utils import strip_or_none, traverse_obj


class BlerpIE(InfoExtractor):
    IE_NAME = 'blerp'
    _VALID_URL = r'https?://(?:www\.)?blerp\.com/soundbites/(?P<id>[0-9a-zA-Z]+)'
    _TESTS = [{
        'url': 'https://blerp.com/soundbites/6320fe8745636cb4dd677a5a',
        'info_dict': {
            'id': '6320fe8745636cb4dd677a5a',
            'title': 'Samsung Galaxy S8 Over the Horizon Ringtone 2016',
            'uploader': 'luminousaj',
            'uploader_id': '5fb81e51aa66ae000c395478',
            'ext': 'mp3',
            'tags': ['samsung', 'galaxy', 's8', 'over the horizon', '2016', 'ringtone'],
        }
    }, {
        'url': 'https://blerp.com/soundbites/5bc94ef4796001000498429f',
        'info_dict': {
            'id': '5bc94ef4796001000498429f',
            'title': 'Yee',
            'uploader': '179617322678353920',
            'uploader_id': '5ba99cf71386730004552c42',
            'ext': 'mp3',
            'tags': ['YEE', 'YEET', 'wo ha haah catchy tune yee', 'yee']
        }
    }]

    _GRAPHQL_OPERATIONNAME = "webBitePageGetBite"
    _GRAPHQL_QUERY = (
        '''query webBitePageGetBite($_id: MongoID!) {
            web {
                biteById(_id: $_id) {
                    ...bitePageFrag
                    __typename
                }
                __typename
            }
        }

        fragment bitePageFrag on Bite {
            _id
            title
            userKeywords
            keywords
            color
            visibility
            isPremium
            owned
            price
            extraReview
            isAudioExists
            image {
                filename
                original {
                    url
                    __typename
                }
                __typename
            }
            userReactions {
                _id
                reactions
                createdAt
                __typename
            }
            topReactions
            totalSaveCount
            saved
            blerpLibraryType
            license
            licenseMetaData
            playCount
            totalShareCount
            totalFavoriteCount
            totalAddedToBoardCount
            userCategory
            userAudioQuality
            audioCreationState
            transcription
            userTranscription
            description
            createdAt
            updatedAt
            author
            listingType
            ownerObject {
                _id
                username
                profileImage {
                    filename
                    original {
                        url
                        __typename
                    }
                    __typename
                }
                __typename
            }
            transcription
            favorited
            visibility
            isCurated
            sourceUrl
            audienceRating
            strictAudienceRating
            ownerId
            reportObject {
                reportedContentStatus
                __typename
            }
            giphy {
                mp4
                gif
                __typename
            }
            audio {
                filename
                original {
                    url
                    __typename
                }
                mp3 {
                    url
                    __typename
                }
                __typename
            }
            __typename
        }

        ''')

    def _real_extract(self, url):
        audio_id = self._match_id(url)

        data = {
            'operationName': self._GRAPHQL_OPERATIONNAME,
            'query': self._GRAPHQL_QUERY,
            'variables': {
                '_id': audio_id
            }
        }

        headers = {
            'Content-Type': 'application/json'
        }

        json_result = self._download_json('https://api.blerp.com/graphql',
                                          audio_id, data=json.dumps(data).encode('utf-8'), headers=headers)

        bite_json = json_result['data']['web']['biteById']

        info_dict = {
            'id': bite_json['_id'],
            'url': bite_json['audio']['mp3']['url'],
            'title': bite_json['title'],
            'uploader': traverse_obj(bite_json, ('ownerObject', 'username'), expected_type=strip_or_none),
            'uploader_id': traverse_obj(bite_json, ('ownerObject', '_id'), expected_type=strip_or_none),
            'ext': 'mp3',
            'tags': list(filter(None, map(strip_or_none, (traverse_obj(bite_json, 'userKeywords', expected_type=list) or []))) or None)
        }

        return info_dict
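A standalone sketch of the GraphQL call BlerpIE makes; the query here is trimmed to just the fields the extractor actually reads, and whether the API accepts such a trimmed query is an assumption:

import json
import urllib.request

# Trimmed variant of _GRAPHQL_QUERY (assumption: the server tolerates it)
QUERY = '''query webBitePageGetBite($_id: MongoID!) {
  web { biteById(_id: $_id) { _id title audio { mp3 { url } } ownerObject { _id username } } }
}'''

def fetch_bite(audio_id):
    payload = json.dumps({'operationName': 'webBitePageGetBite',
                          'query': QUERY, 'variables': {'_id': audio_id}}).encode()
    req = urllib.request.Request('https://api.blerp.com/graphql', data=payload,
                                 headers={'Content-Type': 'application/json'})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)['data']['web']['biteById']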
yt_dlp/extractor/boxcast.py (new file, 102 lines)
@@ -0,0 +1,102 @@
from .common import InfoExtractor
from ..utils import (
    js_to_json,
    traverse_obj,
    unified_timestamp
)


class BoxCastVideoIE(InfoExtractor):
    _VALID_URL = r'''(?x)
        https?://boxcast\.tv/(?:
            view-embed/|
            channel/\w+\?(?:[^#]+&)?b=|
            video-portal/(?:\w+/){2}
        )(?P<id>[\w-]+)'''
    _EMBED_REGEX = [r'<iframe[^>]+src=["\'](?P<url>https?://boxcast\.tv/view-embed/[\w-]+)']
    _TESTS = [{
        'url': 'https://boxcast.tv/view-embed/in-the-midst-of-darkness-light-prevails-an-interdisciplinary-symposium-ozmq5eclj50ujl4bmpwx',
        'info_dict': {
            'id': 'da1eqqgkacngd5djlqld',
            'ext': 'mp4',
            'thumbnail': r're:https?://uploads\.boxcast\.com/(?:[\w+-]+/){3}.+\.png$',
            'title': 'In the Midst of Darkness Light Prevails: An Interdisciplinary Symposium',
            'release_timestamp': 1670686812,
            'release_date': '20221210',
            'uploader_id': 're8w0v8hohhvpqtbskpe',
            'uploader': 'Children\'s Health Defense',
        }
    }, {
        'url': 'https://boxcast.tv/video-portal/vctwevwntun3o0ikq7af/rvyblnn0fxbfjx5nwxhl/otbpltj2kzkveo2qz3ad',
        'info_dict': {
            'id': 'otbpltj2kzkveo2qz3ad',
            'ext': 'mp4',
            'uploader_id': 'vctwevwntun3o0ikq7af',
            'uploader': 'Legacy Christian Church',
            'title': 'The Quest | 1: Beginner\'s Bay | Jamie Schools',
            'thumbnail': r're:https?://uploads.boxcast.com/(?:[\w-]+/){3}.+\.jpg'
        }
    }, {
        'url': 'https://boxcast.tv/channel/z03fqwaeaby5lnaawox2?b=ssihlw5gvfij2by8tkev',
        'info_dict': {
            'id': 'ssihlw5gvfij2by8tkev',
            'ext': 'mp4',
            'thumbnail': r're:https?://uploads.boxcast.com/(?:[\w-]+/){3}.+\.jpg$',
            'release_date': '20230101',
            'uploader_id': 'ds25vaazhlu4ygcvffid',
            'release_timestamp': 1672543201,
            'uploader': 'Lighthouse Ministries International  - Beltsville, Maryland',
            'description': 'md5:ac23e3d01b0b0be592e8f7fe0ec3a340',
            'title': 'New Year\'s Eve CROSSOVER Service at LHMI | December 31, 2022',
        }
    }]
    _WEBPAGE_TESTS = [{
        'url': 'https://childrenshealthdefense.eu/live-stream/',
        'info_dict': {
            'id': 'da1eqqgkacngd5djlqld',
            'ext': 'mp4',
            'thumbnail': r're:https?://uploads\.boxcast\.com/(?:[\w+-]+/){3}.+\.png$',
            'title': 'In the Midst of Darkness Light Prevails: An Interdisciplinary Symposium',
            'release_timestamp': 1670686812,
            'release_date': '20221210',
            'uploader_id': 're8w0v8hohhvpqtbskpe',
            'uploader': 'Children\'s Health Defense',
        }
    }]

    def _real_extract(self, url):
        display_id = self._match_id(url)
        webpage = self._download_webpage(url, display_id)
        webpage_json_data = self._search_json(
            r'var\s*BOXCAST_PRELOAD\s*=', webpage, 'broadcast data', display_id,
            transform_source=js_to_json, default={})

        # Ref: https://support.boxcast.com/en/articles/4235158-build-a-custom-viewer-experience-with-boxcast-api
        broadcast_json_data = (
            traverse_obj(webpage_json_data, ('broadcast', 'data'))
            or self._download_json(f'https://api.boxcast.com/broadcasts/{display_id}', display_id))
        view_json_data = (
            traverse_obj(webpage_json_data, ('view', 'data'))
            or self._download_json(f'https://api.boxcast.com/broadcasts/{display_id}/view',
                                   display_id, fatal=False) or {})

        formats, subtitles = [], {}
        if view_json_data.get('status') == 'recorded':
            formats, subtitles = self._extract_m3u8_formats_and_subtitles(
                view_json_data['playlist'], display_id)

        return {
            'id': str(broadcast_json_data['id']),
            'title': (broadcast_json_data.get('name')
                      or self._html_search_meta(['og:title', 'twitter:title'], webpage)),
            'description': (broadcast_json_data.get('description')
                            or self._html_search_meta(['og:description', 'twitter:description'], webpage)
                            or None),
            'thumbnail': (broadcast_json_data.get('preview')
                          or self._html_search_meta(['og:image', 'twitter:image'], webpage)),
            'formats': formats,
            'subtitles': subtitles,
            'release_timestamp': unified_timestamp(broadcast_json_data.get('streamed_at')),
            'uploader': broadcast_json_data.get('account_name'),
            'uploader_id': broadcast_json_data.get('account_id'),
        }
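An illustration of the BOXCAST_PRELOAD step above: the page embeds a JS object literal, which js_to_json turns into parseable JSON. The sample blob here is made up; only the extraction pattern is taken from the extractor:

import json
import re
from yt_dlp.utils import js_to_json

webpage = "var BOXCAST_PRELOAD = {broadcast: {data: {id: 'abc', name: 'Demo'}}};"
m = re.search(r'var\s*BOXCAST_PRELOAD\s*=\s*({.*})\s*;', webpage)
# js_to_json quotes bare keys and converts single quotes, yielding valid JSON
print(json.loads(js_to_json(m.group(1))))
# -> {'broadcast': {'data': {'id': 'abc', 'name': 'Demo'}}}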
yt_dlp/extractor/callin.py
@@ -1,9 +1,5 @@
 from .common import InfoExtractor
-from ..utils import (
-    traverse_obj,
-    float_or_none,
-    int_or_none
-)
+from ..utils import float_or_none, int_or_none, make_archive_id, traverse_obj


 class CallinIE(InfoExtractor):
@@ -35,6 +31,54 @@ class CallinIE(InfoExtractor):
             'episode_number': 1,
             'episode_id': '218b979630a35ead12c6fd096f2996c56c37e4d0dc1f6dc0feada32dcf7b31cd'
         }
+    }, {
+        'url': 'https://www.callin.com/episode/fcc-commissioner-brendan-carr-on-elons-PrumRdSQJW',
+        'md5': '14ede27ee2c957b7e4db93140fc0745c',
+        'info_dict': {
+            'id': 'c3dab47f237bf953d180d3f243477a84302798be0e0b29bc9ade6d60a69f04f5',
+            'ext': 'ts',
+            'title': 'FCC Commissioner Brendan Carr on Elon’s Starlink',
+            'description': 'Or, why the government doesn’t like SpaceX',
+            'channel': 'The Pull Request',
+            'channel_url': 'https://callin.com/show/the-pull-request-ucnDJmEKAa',
+            'duration': 3182.472,
+            'series_id': '7e9c23156e4aecfdcaef46bfb2ed7ca268509622ec006c0f0f25d90e34496638',
+            'uploader_url': 'http://thepullrequest.com',
+            'upload_date': '20220902',
+            'episode': 'FCC Commissioner Brendan Carr on Elon’s Starlink',
+            'display_id': 'fcc-commissioner-brendan-carr-on-elons-PrumRdSQJW',
+            'series': 'The Pull Request',
+            'channel_id': '7e9c23156e4aecfdcaef46bfb2ed7ca268509622ec006c0f0f25d90e34496638',
+            'view_count': int,
+            'uploader': 'Antonio García Martínez',
+            'thumbnail': 'https://d1z76fhpoqkd01.cloudfront.net/shows/legacy/1ade9142625344045dc17cf523469ced1d93610762f4c886d06aa190a2f979e8.png',
+            'episode_id': 'c3dab47f237bf953d180d3f243477a84302798be0e0b29bc9ade6d60a69f04f5',
+            'timestamp': 1662100688.005,
+        }
+    }, {
+        'url': 'https://www.callin.com/episode/episode-81-elites-melt-down-over-student-debt-lzxMidUnjA',
+        'md5': '16f704ddbf82a27e3930533b12062f07',
+        'info_dict': {
+            'id': '8d06f869798f93a7814e380bceabea72d501417e620180416ff6bd510596e83c',
+            'ext': 'ts',
+            'title': 'Episode 81- Elites MELT DOWN over Student Debt Victory? Rumble in NYC?',
+            'description': 'Let’s talk todays episode about the primary election shake up in NYC and the elites melting down over student debt cancelation.',
+            'channel': 'The DEBRIEF With Briahna Joy Gray',
+            'channel_url': 'https://callin.com/show/the-debrief-with-briahna-joy-gray-siiFDzGegm',
+            'duration': 10043.16,
+            'series_id': '61cea58444465fd26674069703bd8322993bc9e5b4f1a6d0872690554a046ff7',
+            'uploader_url': 'http://patreon.com/badfaithpodcast',
+            'upload_date': '20220826',
+            'episode': 'Episode 81- Elites MELT DOWN over Student Debt Victory? Rumble in NYC?',
+            'display_id': 'episode-',
+            'series': 'The DEBRIEF With Briahna Joy Gray',
+            'channel_id': '61cea58444465fd26674069703bd8322993bc9e5b4f1a6d0872690554a046ff7',
+            'view_count': int,
+            'uploader': 'Briahna Gray',
+            'thumbnail': 'https://d1z76fhpoqkd01.cloudfront.net/shows/legacy/461ea0d86172cb6aff7d6c80fd49259cf5e64bdf737a4650f8bc24cf392ca218.png',
+            'episode_id': '8d06f869798f93a7814e380bceabea72d501417e620180416ff6bd510596e83c',
+            'timestamp': 1661476708.282,
+        }
     }]

     def try_get_user_name(self, d):
@@ -86,6 +130,7 @@ def _real_extract(self, url):

         return {
             'id': id,
+            '_old_archive_ids': [make_archive_id(self, display_id.rsplit('-', 1)[-1])],
             'display_id': display_id,
             'title': title,
             'formats': formats,
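The _old_archive_ids entry above keeps --download-archive records written by older versions valid after the ID scheme change. A small illustration of make_archive_id, which produces the '<ie_key_lowercase> <id>' form used in archive files:

from yt_dlp.utils import make_archive_id

display_id = 'fcc-commissioner-brendan-carr-on-elons-PrumRdSQJW'
# make_archive_id accepts an IE key (or extractor class) plus the old video ID
print(make_archive_id('Callin', display_id.rsplit('-', 1)[-1]))
# -> 'callin PrumRdSQJW'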
yt_dlp/extractor/cammodels.py
@@ -1,9 +1,5 @@
 from .common import InfoExtractor
-from ..utils import (
-    ExtractorError,
-    int_or_none,
-    url_or_none,
-)
+from ..utils import int_or_none, url_or_none


 class CamModelsIE(InfoExtractor):
@@ -17,32 +13,11 @@ class CamModelsIE(InfoExtractor):
     def _real_extract(self, url):
         user_id = self._match_id(url)

-        webpage = self._download_webpage(
-            url, user_id, headers=self.geo_verification_headers())
-
-        manifest_root = self._html_search_regex(
-            r'manifestUrlRoot=([^&\']+)', webpage, 'manifest', default=None)
-
-        if not manifest_root:
-            ERRORS = (
-                ("I'm offline, but let's stay connected", 'This user is currently offline'),
-                ('in a private show', 'This user is in a private show'),
-                ('is currently performing LIVE', 'This model is currently performing live'),
-            )
-            for pattern, message in ERRORS:
-                if pattern in webpage:
-                    error = message
-                    expected = True
-                    break
-            else:
-                error = 'Unable to find manifest URL root'
-                expected = False
-            raise ExtractorError(error, expected=expected)
-
         manifest = self._download_json(
-            '%s%s.json' % (manifest_root, user_id), user_id)
+            'https://manifest-server.naiadsystems.com/live/s:%s.json' % user_id, user_id)

         formats = []
+        thumbnails = []
         for format_id, format_dict in manifest['formats'].items():
             if not isinstance(format_dict, dict):
                 continue
@@ -82,12 +57,20 @@ def _real_extract(self, url):
                     'quality': -10,
                 })
             else:
+                if format_id == 'jpeg':
+                    thumbnails.append({
+                        'url': f['url'],
+                        'width': f['width'],
+                        'height': f['height'],
+                        'format_id': f['format_id'],
+                    })
                 continue
             formats.append(f)

         return {
             'id': user_id,
             'title': user_id,
+            'thumbnails': thumbnails,
             'is_live': True,
             'formats': formats,
             'age_limit': 18
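A deliberately simplified sketch of the format/thumbnail split introduced above; the real manifest nests encodings inside each format entry, which this flattens away for illustration:

def split_formats(manifest_formats):
    # 'jpeg' entries in the manifest are image renditions, so they are
    # collected as thumbnails rather than downloadable formats
    formats, thumbnails = [], []
    for format_id, f in manifest_formats.items():
        target = thumbnails if format_id == 'jpeg' else formats
        target.append({'format_id': format_id, **f})
    return formats, thumbnails

formats, thumbnails = split_formats({
    'mp4-hls': {'url': 'https://example.com/stream.m3u8'},
    'jpeg': {'url': 'https://example.com/snap.jpg', 'width': 640, 'height': 360},
})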
yt_dlp/extractor/cbc.py
@@ -202,7 +202,7 @@ def _real_extract(self, url):

 class CBCGemIE(InfoExtractor):
     IE_NAME = 'gem.cbc.ca'
-    _VALID_URL = r'https?://gem\.cbc\.ca/media/(?P<id>[0-9a-z-]+/s[0-9]+[a-z][0-9]+)'
+    _VALID_URL = r'https?://gem\.cbc\.ca/(?:media/)?(?P<id>[0-9a-z-]+/s[0-9]+[a-z][0-9]+)'
     _TESTS = [{
         # This is a normal, public, TV show video
         'url': 'https://gem.cbc.ca/media/schitts-creek/s06e01',
@@ -245,6 +245,9 @@ class CBCGemIE(InfoExtractor):
         },
         'params': {'format': 'bv'},
         'skip': 'Geo-restricted to Canada',
+    }, {
+        'url': 'https://gem.cbc.ca/nadiyas-family-favourites/s01e01',
+        'only_matching': True,
     }]

     _GEO_COUNTRIES = ['CA']
yt_dlp/extractor/chilloutzone.py
@@ -1,93 +1,123 @@
-import json
+import base64

 from .common import InfoExtractor
-from .youtube import YoutubeIE
-from ..compat import compat_b64decode
 from ..utils import (
     clean_html,
-    ExtractorError
+    int_or_none,
+    traverse_obj,
 )


 class ChilloutzoneIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?chilloutzone\.net/video/(?P<id>[\w|-]+)\.html'
+    _VALID_URL = r'https?://(?:www\.)?chilloutzone\.net/video/(?P<id>[\w-]+)\.html'
     _TESTS = [{
-        'url': 'http://www.chilloutzone.net/video/enemene-meck-alle-katzen-weg.html',
+        'url': 'https://www.chilloutzone.net/video/enemene-meck-alle-katzen-weg.html',
         'md5': 'a76f3457e813ea0037e5244f509e66d1',
         'info_dict': {
             'id': 'enemene-meck-alle-katzen-weg',
             'ext': 'mp4',
             'title': 'Enemene Meck - Alle Katzen weg',
             'description': 'Ist das der Umkehrschluss des Niesenden Panda-Babys?',
+            'duration': 24,
         },
     }, {
         'note': 'Video hosted at YouTube',
-        'url': 'http://www.chilloutzone.net/video/eine-sekunde-bevor.html',
+        'url': 'https://www.chilloutzone.net/video/eine-sekunde-bevor.html',
         'info_dict': {
             'id': '1YVQaAgHyRU',
             'ext': 'mp4',
             'title': '16 Photos Taken 1 Second Before Disaster',
             'description': 'md5:58a8fcf6a459fe0a08f54140f0ad1814',
             'uploader': 'BuzzFeedVideo',
-            'uploader_id': 'BuzzFeedVideo',
+            'uploader_id': '@BuzzFeedVideo',
             'upload_date': '20131105',
+            'availability': 'public',
+            'thumbnail': 'https://i.ytimg.com/vi/1YVQaAgHyRU/maxresdefault.jpg',
+            'tags': 'count:41',
+            'like_count': int,
+            'playable_in_embed': True,
+            'channel_url': 'https://www.youtube.com/channel/UCpko_-a4wgz2u_DgDgd9fqA',
+            'chapters': 'count:6',
+            'live_status': 'not_live',
+            'view_count': int,
+            'categories': ['Entertainment'],
+            'age_limit': 0,
+            'channel_id': 'UCpko_-a4wgz2u_DgDgd9fqA',
+            'duration': 100,
+            'uploader_url': 'http://www.youtube.com/@BuzzFeedVideo',
+            'channel_follower_count': int,
+            'channel': 'BuzzFeedVideo',
         },
     }, {
-        'note': 'Video hosted at Vimeo',
-        'url': 'http://www.chilloutzone.net/video/icon-blending.html',
-        'md5': '2645c678b8dc4fefcc0e1b60db18dac1',
+        'url': 'https://www.chilloutzone.net/video/icon-blending.html',
+        'md5': '2f9d6850ec567b24f0f4fa143b9aa2f9',
         'info_dict': {
-            'id': '85523671',
+            'id': 'LLNkHpSjBfc',
             'ext': 'mp4',
-            'title': 'The Sunday Times - Icons',
-            'description': 're:(?s)^Watch the making of - makingoficons.com.{300,}',
-            'uploader': 'Us',
-            'uploader_id': 'usfilms',
-            'upload_date': '20140131'
+            'title': 'The Sunday Times Making of Icons',
+            'description': 'md5:b9259fcf63a1669e42001e5db677f02a',
+            'uploader': 'MadFoxUA',
+            'uploader_id': '@MadFoxUA',
+            'upload_date': '20140204',
+            'channel_id': 'UCSZa9Y6-Vl7c11kWMcbAfCw',
+            'channel_url': 'https://www.youtube.com/channel/UCSZa9Y6-Vl7c11kWMcbAfCw',
+            'comment_count': int,
+            'uploader_url': 'http://www.youtube.com/@MadFoxUA',
+            'duration': 66,
+            'live_status': 'not_live',
+            'channel_follower_count': int,
+            'playable_in_embed': True,
+            'view_count': int,
+            'like_count': int,
+            'thumbnail': 'https://i.ytimg.com/vi/LLNkHpSjBfc/maxresdefault.jpg',
+            'categories': ['Comedy'],
+            'availability': 'public',
+            'tags': [],
+            'channel': 'MadFoxUA',
+            'age_limit': 0,
+        },
+    }, {
+        'url': 'https://www.chilloutzone.net/video/ordentlich-abgeschuettelt.html',
+        'info_dict': {
+            'id': 'ordentlich-abgeschuettelt',
+            'ext': 'mp4',
+            'title': 'Ordentlich abgeschüttelt',
+            'description': 'md5:d41541966b75d3d1e8ea77a94ea0d329',
+            'duration': 18,
         },
     }]

     def _real_extract(self, url):
-        mobj = self._match_valid_url(url)
-        video_id = mobj.group('id')
+        video_id = self._match_id(url)

         webpage = self._download_webpage(url, video_id)
+        b64_data = self._html_search_regex(
+            r'var cozVidData\s*=\s*"([^"]+)"', webpage, 'video data')
+        info = self._parse_json(base64.b64decode(b64_data).decode(), video_id)

-        base64_video_info = self._html_search_regex(
-            r'var cozVidData = "(.+?)";', webpage, 'video data')
-        decoded_video_info = compat_b64decode(base64_video_info).decode('utf-8')
-        video_info_dict = json.loads(decoded_video_info)
-
-        # get video information from dict
-        video_url = video_info_dict['mediaUrl']
-        description = clean_html(video_info_dict.get('description'))
-        title = video_info_dict['title']
-        native_platform = video_info_dict['nativePlatform']
-        native_video_id = video_info_dict['nativeVideoId']
-        source_priority = video_info_dict['sourcePriority']
-
-        # If nativePlatform is None a fallback mechanism is used (i.e. youtube embed)
-        if native_platform is None:
-            youtube_url = YoutubeIE._extract_url(webpage)
-            if youtube_url:
-                return self.url_result(youtube_url, ie=YoutubeIE.ie_key())
+        video_url = info.get('mediaUrl')
+        native_platform = info.get('nativePlatform')

-        # Non Fallback: Decide to use native source (e.g. youtube or vimeo) or
-        # the own CDN
-        if source_priority == 'native':
+        if native_platform and info.get('sourcePriority') == 'native':
+            native_video_id = info['nativeVideoId']
             if native_platform == 'youtube':
-                return self.url_result(native_video_id, ie='Youtube')
-            if native_platform == 'vimeo':
-                return self.url_result(
-                    'http://vimeo.com/' + native_video_id, ie='Vimeo')
+                return self.url_result(native_video_id, 'Youtube')
+            elif native_platform == 'vimeo':
+                return self.url_result(f'https://vimeo.com/{native_video_id}', 'Vimeo')

-        if not video_url:
-            raise ExtractorError('No video found')
+        elif not video_url:
+            # Possibly a standard youtube embed?
+            # TODO: Investigate if site still does this (there are no tests for it)
+            return self.url_result(url, 'Generic')

         return {
             'id': video_id,
             'url': video_url,
             'ext': 'mp4',
-            'title': title,
-            'description': description,
+            **traverse_obj(info, {
                'title': 'title',
+                'description': ('description', {clean_html}),
+                'duration': ('videoLength', {int_or_none}),
+                'width': ('videoWidth', {int_or_none}),
+                'height': ('videoHeight', {int_or_none}),
+            }),
         }
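An illustration of the rewritten extraction: the page stores its metadata as base64-encoded JSON in a cozVidData variable. The sample payload below is made up; only the decode step mirrors the extractor:

import base64
import json

# Fabricate a page-style payload, then decode it the way the extractor does
b64_data = base64.b64encode(json.dumps({
    'mediaUrl': 'https://example.com/video.mp4',
    'nativePlatform': None, 'sourcePriority': 'default', 'videoLength': 24,
}).encode()).decode()

info = json.loads(base64.b64decode(b64_data).decode())
print(info['mediaUrl'])  # -> https://example.com/video.mp4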
yt_dlp/extractor/clyp.py
@@ -9,22 +9,22 @@
 class ClypIE(InfoExtractor):
     _VALID_URL = r'https?://(?:www\.)?clyp\.it/(?P<id>[a-z0-9]+)'
     _TESTS = [{
-        'url': 'https://clyp.it/ojz2wfah',
-        'md5': '1d4961036c41247ecfdcc439c0cddcbb',
+        'url': 'https://clyp.it/iynkjk4b',
+        'md5': '4bc6371c65210e7b372097fce4d92441',
         'info_dict': {
-            'id': 'ojz2wfah',
-            'ext': 'mp3',
-            'title': 'Krisson80 - bits wip wip',
-            'description': '#Krisson80BitsWipWip #chiptune\n#wip',
-            'duration': 263.21,
-            'timestamp': 1443515251,
-            'upload_date': '20150929',
+            'id': 'iynkjk4b',
+            'ext': 'ogg',
+            'title': 'research',
+            'description': '#Research',
+            'duration': 51.278,
+            'timestamp': 1435524981,
+            'upload_date': '20150628',
         },
     }, {
         'url': 'https://clyp.it/b04p1odi?token=b0078e077e15835845c528a44417719d',
         'info_dict': {
             'id': 'b04p1odi',
-            'ext': 'mp3',
+            'ext': 'ogg',
             'title': 'GJ! (Reward Edit)',
             'description': 'Metal Resistance (THE ONE edition)',
             'duration': 177.789,
@@ -34,6 +34,17 @@ class ClypIE(InfoExtractor):
         'params': {
             'skip_download': True,
         },
+    }, {
+        'url': 'https://clyp.it/v42214lc',
+        'md5': '4aca4dfc3236fb6d6ddc4ea08314f33f',
+        'info_dict': {
+            'id': 'v42214lc',
+            'ext': 'wav',
+            'title': 'i dont wanna go (old version)',
+            'duration': 113.528,
+            'timestamp': 1607348505,
+            'upload_date': '20201207',
+        },
     }]

     def _real_extract(self, url):
@@ -59,8 +70,20 @@ def _real_extract(self, url):
                 'url': format_url,
                 'format_id': format_id,
                 'vcodec': 'none',
+                'acodec': ext.lower(),
             })

+        page = self._download_webpage(url, video_id=audio_id)
+        wav_url = self._html_search_regex(
+            r'var\s*wavStreamUrl\s*=\s*["\'](?P<url>https?://[^\'"]+)', page, 'url', default=None)
+        if wav_url:
+            formats.append({
+                'url': wav_url,
+                'format_id': 'wavStreamUrl',
+                'vcodec': 'none',
+                'acodec': 'wav',
+            })

         title = metadata['Title']
         description = metadata.get('Description')
         duration = float_or_none(metadata.get('Duration'))
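The new lossless probe above in isolation, using the same regex against made-up page text:

import re

page = 'var wavStreamUrl = "https://example.com/audio.wav";'
wav_url = re.search(r'var\s*wavStreamUrl\s*=\s*["\'](?P<url>https?://[^\'"]+)', page)
if wav_url:
    # Mirrors the format dict appended by the extractor when the URL is present
    print({'url': wav_url.group('url'), 'format_id': 'wavStreamUrl',
           'vcodec': 'none', 'acodec': 'wav'})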
yt_dlp/extractor/common.py
@@ -31,6 +31,7 @@
     FormatSorter,
     GeoRestrictedError,
     GeoUtils,
+    HEADRequest,
     LenientJSONDecoder,
     RegexNotFoundError,
     RetryManager,
@@ -80,6 +81,7 @@
     update_url_query,
     url_basename,
     url_or_none,
+    urlhandle_detect_ext,
     urljoin,
     variadic,
     xpath_element,
@@ -129,6 +131,7 @@ class InfoExtractor:
                                  is parsed from a string (in case of
                                  fragmented media)
                                for MSS - URL of the ISM manifest.
+    * request_data  Data to send in POST request to the URL
    * manifest_url
                  The URL of the manifest file in case of
                  fragmented media:
@@ -217,6 +220,17 @@ class InfoExtractor:
                    * no_resume  The server does not support resuming the
                                 (HTTP or RTMP) download. Boolean.
                    * has_drm    The format has DRM and cannot be downloaded. Boolean
+                   * extra_param_to_segment_url  A query string to append to each
+                                fragment's URL, or to update each existing query string
+                                with. Only applied by the native HLS/DASH downloaders.
+                   * hls_aes    A dictionary of HLS AES-128 decryption information
+                                used by the native HLS downloader to override the
+                                values in the media playlist when an '#EXT-X-KEY' tag
+                                is present in the playlist:
+                                * uri  The URI from which the key will be downloaded
+                                * key  The key (as hex) used to decrypt fragments.
+                                       If `key` is given, any key URI will be ignored
+                                * iv   The IV (as hex) used to decrypt fragments
                    * downloader_options  A dictionary of downloader options
                                 (For internal use only)
                                 * http_chunk_size Chunk size for HTTP downloads
@@ -1324,7 +1338,7 @@ def _get_tfa_info(self, note='two-factor verification code'):
     # Helper functions for extracting OpenGraph info
     @staticmethod
     def _og_regexes(prop):
-        content_re = r'content=(?:"([^"]+?)"|\'([^\']+?)\'|\s*([^\s"\'=<>`]+?))'
+        content_re = r'content=(?:"([^"]+?)"|\'([^\']+?)\'|\s*([^\s"\'=<>`]+?)(?=\s|/?>))'
         property_re = (r'(?:name|property)=(?:\'og%(sep)s%(prop)s\'|"og%(sep)s%(prop)s"|\s*og%(sep)s%(prop)s\b)'
                        % {'prop': re.escape(prop), 'sep': '(?::|[:-])'})
         template = r'<meta[^>]+?%s[^>]+?%s'
@@ -1656,11 +1670,8 @@ def _search_nuxt_data(self, webpage, video_id, context_name='__NUXT__', *, fatal
         if js is None:
             return {}

-        args = dict(zip(arg_keys.split(','), arg_vals.split(',')))
-        for key, val in args.items():
-            if val in ('undefined', 'void 0'):
-                args[key] = 'null'
+        args = dict(zip(arg_keys.split(','), map(json.dumps, self._parse_json(
+            f'[{arg_vals}]', video_id, transform_source=js_to_json, fatal=fatal) or ())))

         ret = self._parse_json(js, video_id, transform_source=functools.partial(js_to_json, vars=args), fatal=fatal)
         return traverse_obj(ret, traverse) or {}
@@ -2052,6 +2063,7 @@ def extract_media(x_media_line):
                 'protocol': entry_protocol,
                 'preference': preference,
                 'quality': quality,
+                'has_drm': has_drm,
                 'vcodec': 'none' if media_type == 'AUDIO' else None,
             } for idx in _extract_m3u8_playlist_indices(manifest_url))

@@ -2111,6 +2123,7 @@ def build_stream_name():
                 'protocol': entry_protocol,
                 'preference': preference,
                 'quality': quality,
+                'has_drm': has_drm,
             }
             resolution = last_stream_inf.get('RESOLUTION')
             if resolution:
@@ -2177,13 +2190,23 @@ def _extract_m3u8_vod_duration(
         return self._parse_m3u8_vod_duration(m3u8_vod or '', video_id)

     def _parse_m3u8_vod_duration(self, m3u8_vod, video_id):
-        if '#EXT-X-PLAYLIST-TYPE:VOD' not in m3u8_vod:
+        if '#EXT-X-ENDLIST' not in m3u8_vod:
             return None

         return int(sum(
             float(line[len('#EXTINF:'):].split(',')[0])
             for line in m3u8_vod.splitlines() if line.startswith('#EXTINF:'))) or None

+    def _extract_mpd_vod_duration(
+            self, mpd_url, video_id, note=None, errnote=None, data=None, headers={}, query={}):
+
+        mpd_doc = self._download_xml(
+            mpd_url, video_id,
+            note='Downloading MPD VOD manifest' if note is None else note,
+            errnote='Failed to download VOD manifest' if errnote is None else errnote,
+            fatal=False, data=data, headers=headers, query=query) or {}
+        return int_or_none(parse_duration(mpd_doc.get('mediaPresentationDuration')))
+
     @staticmethod
     def _xpath_ns(path, namespace=None):
         if not namespace:
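A worked example of the changed VOD-duration parse: with #EXT-X-ENDLIST present, the duration is simply the sum of the #EXTINF segment durations:

m3u8_vod = '''#EXTM3U
#EXTINF:10.0,
seg0.ts
#EXTINF:9.5,
seg1.ts
#EXT-X-ENDLIST'''

if '#EXT-X-ENDLIST' in m3u8_vod:
    # Same arithmetic as _parse_m3u8_vod_duration: 10.0 + 9.5 -> int 19
    print(int(sum(float(line[len('#EXTINF:'):].split(',')[0])
                  for line in m3u8_vod.splitlines() if line.startswith('#EXTINF:'))))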
@ -2310,7 +2333,8 @@ def _parse_smil_formats(self, smil, smil_url, video_id, namespace=None, f4m_para
|
||||||
height = int_or_none(medium.get('height'))
|
height = int_or_none(medium.get('height'))
|
||||||
proto = medium.get('proto')
|
proto = medium.get('proto')
|
||||||
ext = medium.get('ext')
|
ext = medium.get('ext')
|
||||||
src_ext = determine_ext(src)
|
src_ext = determine_ext(src, default_ext=None) or ext or urlhandle_detect_ext(
|
||||||
|
self._request_webpage(HEADRequest(src), video_id, note='Requesting extension info', fatal=False))
|
||||||
streamer = medium.get('streamer') or base
|
streamer = medium.get('streamer') or base
|
||||||
|
|
||||||
if proto == 'rtmp' or streamer.startswith('rtmp'):
|
if proto == 'rtmp' or streamer.startswith('rtmp'):
|
||||||
|
@ -3502,7 +3526,7 @@ def description(cls, *, markdown=True, search_examples=None):
|
||||||
desc = ''
|
desc = ''
|
||||||
if cls._NETRC_MACHINE:
|
if cls._NETRC_MACHINE:
|
||||||
if markdown:
|
if markdown:
|
||||||
desc += f' [<abbr title="netrc machine"><em>{cls._NETRC_MACHINE}</em></abbr>]'
|
desc += f' [*{cls._NETRC_MACHINE}*](## "netrc machine")'
|
||||||
else:
|
else:
|
||||||
desc += f' [{cls._NETRC_MACHINE}]'
|
desc += f' [{cls._NETRC_MACHINE}]'
|
||||||
if cls.IE_DESC is False:
|
if cls.IE_DESC is False:
|
||||||
|
@@ -3624,6 +3648,38 @@ def _generic_title(self, url='', webpage='', *, default=None):
             or urllib.parse.unquote(os.path.splitext(url_basename(url))[0])
             or default)
 
+    def _extract_chapters_helper(self, chapter_list, start_function, title_function, duration, strict=True):
+        if not duration:
+            return
+        chapter_list = [{
+            'start_time': start_function(chapter),
+            'title': title_function(chapter),
+        } for chapter in chapter_list or []]
+        if not strict:
+            chapter_list.sort(key=lambda c: c['start_time'] or 0)
+
+        chapters = [{'start_time': 0}]
+        for idx, chapter in enumerate(chapter_list):
+            if chapter['start_time'] is None:
+                self.report_warning(f'Incomplete chapter {idx}')
+            elif chapters[-1]['start_time'] <= chapter['start_time'] <= duration:
+                chapters.append(chapter)
+            elif chapter not in chapters:
+                self.report_warning(
+                    f'Invalid start time ({chapter["start_time"]} < {chapters[-1]["start_time"]}) for chapter "{chapter["title"]}"')
+        return chapters[1:]
+
+    def _extract_chapters_from_description(self, description, duration):
+        duration_re = r'(?:\d+:)?\d{1,2}:\d{2}'
+        sep_re = r'(?m)^\s*(%s)\b\W*\s(%s)\s*$'
+        return self._extract_chapters_helper(
+            re.findall(sep_re % (duration_re, r'.+?'), description or ''),
+            start_function=lambda x: parse_duration(x[0]), title_function=lambda x: x[1],
+            duration=duration, strict=False) or self._extract_chapters_helper(
+            re.findall(sep_re % (r'.+?', duration_re), description or ''),
+            start_function=lambda x: parse_duration(x[1]), title_function=lambda x: x[0],
+            duration=duration, strict=False)
+
     @staticmethod
     def _availability(is_private=None, needs_premium=None, needs_subscription=None, needs_auth=None, is_unlisted=None):
         all_known = all(map(
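As a quick illustration of the 'timestamp first' variant that `_extract_chapters_from_description` tries first, run on a made-up description (timings and titles are illustrative):

    import re

    from yt_dlp.utils import parse_duration

    description = '0:00 Intro\n1:30 First topic\n12:45 Outro'
    duration_re = r'(?:\d+:)?\d{1,2}:\d{2}'
    sep_re = r'(?m)^\s*(%s)\b\W*\s(%s)\s*$'
    chapters = [{'start_time': parse_duration(ts), 'title': title}
                for ts, title in re.findall(sep_re % (duration_re, r'.+?'), description)]
    # -> start times 0, 90 and 765 with titles 'Intro', 'First topic', 'Outro'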
@@ -20,8 +20,12 @@ class CrunchyrollBaseIE(InfoExtractor):
     _NETRC_MACHINE = 'crunchyroll'
     params = None
 
+    @property
+    def is_logged_in(self):
+        return self._get_cookies(self._LOGIN_URL).get('etp_rt')
+
     def _perform_login(self, username, password):
-        if self._get_cookies(self._LOGIN_URL).get('etp_rt'):
+        if self.is_logged_in:
             return
 
         upsell_response = self._download_json(
@@ -46,7 +50,7 @@ def _perform_login(self, username, password):
             }).encode('ascii'))
         if login_response['code'] != 'ok':
             raise ExtractorError('Login failed. Server message: %s' % login_response['message'], expected=True)
-        if not self._get_cookies(self._LOGIN_URL).get('etp_rt'):
+        if not self.is_logged_in:
             raise ExtractorError('Login succeeded but did not set etp_rt cookie')
 
     def _get_embedded_json(self, webpage, display_id):
@@ -116,6 +120,7 @@ class CrunchyrollBetaIE(CrunchyrollBaseIE):
             'episode': 'To the Future',
             'episode_number': 73,
             'thumbnail': r're:^https://www.crunchyroll.com/imgsrv/.*\.jpeg$',
+            'chapters': 'count:2',
         },
         'params': {'skip_download': 'm3u8', 'format': 'all[format_id~=hardsub]'},
     }, {
@@ -136,6 +141,7 @@ class CrunchyrollBetaIE(CrunchyrollBaseIE):
             'episode': 'Porter Robinson presents Shelter the Animation',
             'episode_number': 0,
             'thumbnail': r're:^https://www.crunchyroll.com/imgsrv/.*\.jpeg$',
+            'chapters': 'count:0',
         },
         'params': {'skip_download': True},
         'skip': 'Video is Premium only',
@@ -154,8 +160,11 @@ def _real_extract(self, url):
         episode_response = self._download_json(
             f'{api_domain}/cms/v2{bucket}/episodes/{internal_id}', display_id,
             note='Retrieving episode metadata', query=params)
-        if episode_response.get('is_premium_only') and not episode_response.get('playback'):
-            raise ExtractorError('This video is for premium members only.', expected=True)
+        if episode_response.get('is_premium_only') and not bucket.endswith('crunchyroll'):
+            if self.is_logged_in:
+                raise ExtractorError('This video is for premium members only', expected=True)
+            else:
+                self.raise_login_required('This video is for premium members only')
 
         stream_response = self._download_json(
             f'{api_domain}{episode_response["__links__"]["streams"]["href"]}', display_id,
@@ -209,6 +218,17 @@ def _real_extract(self, url):
                 f['quality'] = hardsub_preference(hardsub_lang.lower())
             formats.extend(adaptive_formats)
 
+        chapters = None
+        # if no intro chapter is available, a 403 without usable data is returned
+        intro_chapter = self._download_json(f'https://static.crunchyroll.com/datalab-intro-v2/{internal_id}.json',
+                                            display_id, fatal=False, errnote=False)
+        if isinstance(intro_chapter, dict):
+            chapters = [{
+                'title': 'Intro',
+                'start_time': float_or_none(intro_chapter.get('startTime')),
+                'end_time': float_or_none(intro_chapter.get('endTime'))
+            }]
+
         return {
             'id': internal_id,
             'title': '%s Episode %s – %s' % (
@@ -235,6 +255,7 @@ def _real_extract(self, url):
                     'ext': subtitle_data.get('format')
                 }] for lang, subtitle_data in get_streams('subtitles')
             },
+            'chapters': chapters
         }
@@ -1,6 +1,7 @@
 import time
 import hashlib
 import re
+import urllib
 
 from .common import InfoExtractor
 from ..utils import (
@@ -13,7 +14,7 @@
 
 class DouyuTVIE(InfoExtractor):
     IE_DESC = '斗鱼'
-    _VALID_URL = r'https?://(?:www\.)?douyu(?:tv)?\.com/(?:[^/]+/)*(?P<id>[A-Za-z0-9]+)'
+    _VALID_URL = r'https?://(?:www\.)?douyu(?:tv)?\.com/(topic/\w+\?rid=|(?:[^/]+/))*(?P<id>[A-Za-z0-9]+)'
     _TESTS = [{
         'url': 'http://www.douyutv.com/iseven',
         'info_dict': {
@@ -22,7 +23,7 @@ class DouyuTVIE(InfoExtractor):
             'ext': 'flv',
             'title': 're:^清晨醒脑!根本停不下来! [0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}$',
             'description': r're:.*m7show@163\.com.*',
-            'thumbnail': r're:^https?://.*\.jpg$',
+            'thumbnail': r're:^https?://.*\.png',
             'uploader': '7师傅',
             'is_live': True,
         },
@@ -37,7 +38,7 @@ class DouyuTVIE(InfoExtractor):
             'ext': 'flv',
             'title': 're:^小漠从零单排记!——CSOL2躲猫猫 [0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}$',
             'description': 'md5:746a2f7a253966a06755a912f0acc0d2',
-            'thumbnail': r're:^https?://.*\.jpg$',
+            'thumbnail': r're:^https?://.*\.png',
             'uploader': 'douyu小漠',
             'is_live': True,
         },
@@ -53,13 +54,28 @@ class DouyuTVIE(InfoExtractor):
             'ext': 'flv',
             'title': 're:^清晨醒脑!根本停不下来! [0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}$',
             'description': r're:.*m7show@163\.com.*',
-            'thumbnail': r're:^https?://.*\.jpg$',
+            'thumbnail': r're:^https?://.*\.png',
             'uploader': '7师傅',
             'is_live': True,
         },
         'params': {
             'skip_download': True,
         },
+    }, {
+        'url': 'https://www.douyu.com/topic/ydxc?rid=6560603',
+        'info_dict': {
+            'id': '6560603',
+            'display_id': '6560603',
+            'ext': 'flv',
+            'title': 're:^阿余:新年快乐恭喜发财! [0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}$',
+            'description': 're:.*直播时间.*',
+            'thumbnail': r're:^https?://.*\.png',
+            'uploader': '阿涛皎月Carry',
+            'live_status': 'is_live',
+        },
+        'params': {
+            'skip_download': True,
+        },
     }, {
         'url': 'http://www.douyu.com/xiaocang',
         'only_matching': True,
@@ -79,28 +95,24 @@ def _real_extract(self, url):
         room_id = self._html_search_regex(
             r'"room_id\\?"\s*:\s*(\d+),', page, 'room id')
 
-        # Grab metadata from mobile API
+        # Grab metadata from API
+        params = {
+            'aid': 'wp',
+            'client_sys': 'wp',
+            'time': int(time.time()),
+        }
+        params['auth'] = hashlib.md5(
+            f'room/{video_id}?{urllib.parse.urlencode(params)}zNzMV1y4EMxOHS6I5WKm'.encode()).hexdigest()
         room = self._download_json(
-            'http://m.douyu.com/html5/live?roomId=%s' % room_id, video_id,
-            note='Downloading room info')['data']
+            f'http://www.douyutv.com/api/v1/room/{room_id}', video_id,
+            note='Downloading room info', query=params)['data']
 
         # 1 = live, 2 = offline
         if room.get('show_status') == '2':
             raise ExtractorError('Live stream is offline', expected=True)
 
-        # Grab the URL from PC client API
-        # The m3u8 url from mobile API requires re-authentication every 5 minutes
-        tt = int(time.time())
-        signContent = 'lapi/live/thirdPart/getPlay/%s?aid=pcclient&rate=0&time=%d9TUk5fjjUjg9qIMH3sdnh' % (room_id, tt)
-        sign = hashlib.md5(signContent.encode('ascii')).hexdigest()
-        video_url = self._download_json(
-            'http://coapi.douyucdn.cn/lapi/live/thirdPart/getPlay/' + room_id,
-            video_id, note='Downloading video URL info',
-            query={'rate': 0}, headers={
-                'auth': sign,
-                'time': str(tt),
-                'aid': 'pcclient'
-            })['data']['live_url']
+        video_url = urljoin('https://hls3-akm.douyucdn.cn/', self._search_regex(r'(live/.*)', room['hls_url'], 'URL'))
+        formats, subs = self._extract_m3u8_formats_and_subtitles(video_url, room_id)
 
         title = unescapeHTML(room['room_name'])
         description = room.get('show_details')
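The signed-query scheme the rewritten Douyu code relies on, as a standalone sketch (constant and field names are taken from the diff above; the room id is illustrative):

    import hashlib
    import time
    import urllib.parse

    video_id = 'iseven'  # illustrative room id
    params = {
        'aid': 'wp',
        'client_sys': 'wp',
        'time': int(time.time()),
    }
    # md5 over path + urlencoded params + a fixed salt, sent back as the `auth` param
    params['auth'] = hashlib.md5(
        f'room/{video_id}?{urllib.parse.urlencode(params)}zNzMV1y4EMxOHS6I5WKm'.encode()).hexdigest()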
@@ -110,12 +122,13 @@ def _real_extract(self, url):
         return {
             'id': room_id,
             'display_id': video_id,
-            'url': video_url,
             'title': title,
             'description': description,
             'thumbnail': thumbnail,
             'uploader': uploader,
             'is_live': True,
+            'subtitles': subs,
+            'formats': formats,
         }
@@ -12,7 +12,6 @@
     mimetype2ext,
     str_or_none,
     traverse_obj,
-    try_get,
     unified_timestamp,
     update_url_query,
     url_or_none,
@@ -25,7 +24,7 @@ class DRTVIE(InfoExtractor):
     _VALID_URL = r'''(?x)
                     https?://
                         (?:
-                            (?:www\.)?dr\.dk/(?:tv/se|nyheder|(?:radio|lyd)(?:/ondemand)?)/(?:[^/]+/)*|
+                            (?:www\.)?dr\.dk/(?:tv/se|nyheder|(?P<radio>radio|lyd)(?:/ondemand)?)/(?:[^/]+/)*|
                             (?:www\.)?(?:dr\.dk|dr-massive\.com)/drtv/(?:se|episode|program)/
                         )
                         (?P<id>[\da-z_-]+)
@@ -80,7 +79,7 @@ class DRTVIE(InfoExtractor):
             'description': 'md5:8c66dcbc1669bbc6f873879880f37f2a',
             'timestamp': 1546628400,
             'upload_date': '20190104',
-            'duration': 3504.618,
+            'duration': 3504.619,
             'formats': 'mincount:20',
             'release_year': 2017,
             'season_id': 'urn:dr:mu:bundle:5afc03ad6187a4065ca5fd35',
@@ -101,14 +100,16 @@ class DRTVIE(InfoExtractor):
             'ext': 'mp4',
             'title': 'Bonderøven 2019 (1:8)',
             'description': 'md5:b6dcfe9b6f0bea6703e9a0092739a5bd',
-            'timestamp': 1603188600,
-            'upload_date': '20201020',
+            'timestamp': 1654856100,
+            'upload_date': '20220610',
             'duration': 2576.6,
             'season': 'Bonderøven 2019',
             'season_id': 'urn:dr:mu:bundle:5c201667a11fa01ca4528ce5',
             'release_year': 2019,
             'season_number': 2019,
-            'series': 'Frank & Kastaniegaarden'
+            'series': 'Frank & Kastaniegaarden',
+            'episode_number': 1,
+            'episode': 'Episode 1',
         },
         'params': {
             'skip_download': True,
@@ -140,10 +141,26 @@ class DRTVIE(InfoExtractor):
         'params': {
             'skip_download': True,
         },
+        'skip': 'this video has been removed',
+    }, {
+        'url': 'https://www.dr.dk/lyd/p4kbh/regionale-nyheder-kh4/regionale-nyheder-2023-03-14-10-30-9',
+        'info_dict': {
+            'ext': 'mp4',
+            'id': '14802310112',
+            'timestamp': 1678786200,
+            'duration': 120.043,
+            'season_id': 'urn:dr:mu:bundle:63a4f7c87140143504b6710f',
+            'series': 'P4 København regionale nyheder',
+            'upload_date': '20230314',
+            'release_year': 0,
+            'description': 'Hør seneste regionale nyheder fra P4 København.',
+            'season': 'Regionale nyheder',
+            'title': 'Regionale nyheder',
+        },
     }]
 
     def _real_extract(self, url):
-        raw_video_id = self._match_id(url)
+        raw_video_id, is_radio_url = self._match_valid_url(url).group('id', 'radio')
 
         webpage = self._download_webpage(url, raw_video_id)
@@ -170,23 +187,26 @@ def _real_extract(self, url):
             programcard_url = '%s/%s' % (_PROGRAMCARD_BASE, video_id)
         else:
             programcard_url = _PROGRAMCARD_BASE
-            page = self._parse_json(
-                self._search_regex(
-                    r'data\s*=\s*({.+?})\s*(?:;|</script)', webpage,
-                    'data'), '1')['cache']['page']
-            page = page[list(page.keys())[0]]
-            item = try_get(
-                page, (lambda x: x['item'], lambda x: x['entries'][0]['item']),
-                dict)
-            video_id = item['customId'].split(':')[-1]
+            if is_radio_url:
+                video_id = self._search_nextjs_data(
+                    webpage, raw_video_id)['props']['pageProps']['episode']['productionNumber']
+            else:
+                json_data = self._search_json(
+                    r'window\.__data\s*=', webpage, 'data', raw_video_id)
+                video_id = traverse_obj(json_data, (
+                    'cache', 'page', ..., (None, ('entries', 0)), 'item', 'customId',
+                    {lambda x: x.split(':')[-1]}), get_all=False)
+            if not video_id:
+                raise ExtractorError('Unable to extract video id')
             query['productionnumber'] = video_id
 
         data = self._download_json(
             programcard_url, video_id, 'Downloading video JSON', query=query)
 
-        supplementary_data = self._download_json(
-            SERIES_API % f'/episode/{raw_video_id}', raw_video_id,
-            default={}) if re.search(r'_\d+$', raw_video_id) else {}
+        supplementary_data = {}
+        if re.search(r'_\d+$', raw_video_id):
+            supplementary_data = self._download_json(
+                SERIES_API % f'/episode/{raw_video_id}', raw_video_id, fatal=False) or {}
 
         title = str_or_none(data.get('Title')) or re.sub(
             r'\s*\|\s*(?:TV\s*\|\s*DR|DRTV)$', '',
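The branching `traverse_obj` path above is dense; here it is applied to a made-up miniature of the `window.__data` payload (nesting and ids are illustrative):

    from yt_dlp.utils import traverse_obj

    json_data = {'cache': {'page': {
        'some-page-key': {'item': {'customId': 'urn:dr:mu:programcard:14802310112'}}}}}
    video_id = traverse_obj(json_data, (
        'cache', 'page', ..., (None, ('entries', 0)), 'item', 'customId',
        {lambda x: x.split(':')[-1]}), get_all=False)
    print(video_id)  # -> '14802310112'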
@@ -268,10 +288,11 @@ def decrypt_uri(e):
                     f['vcodec'] = 'none'
                 formats.extend(f4m_formats)
             elif target == 'HLS':
-                formats.extend(self._extract_m3u8_formats(
+                fmts, subs = self._extract_m3u8_formats_and_subtitles(
                     uri, video_id, 'mp4', entry_protocol='m3u8_native',
-                    quality=preference, m3u8_id=format_id,
-                    fatal=False))
+                    quality=preference, m3u8_id=format_id, fatal=False)
+                formats.extend(fmts)
+                self._merge_subtitles(subs, target=subtitles)
             else:
                 bitrate = link.get('Bitrate')
                 if bitrate:
yt_dlp/extractor/ebay.py (new file, 36 lines)
@@ -0,0 +1,36 @@
+from .common import InfoExtractor
+from ..utils import remove_end
+
+
+class EbayIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?ebay\.com/itm/(?P<id>\d+)'
+    _TESTS = [{
+        'url': 'https://www.ebay.com/itm/194509326719',
+        'info_dict': {
+            'id': '194509326719',
+            'ext': 'mp4',
+            'title': 'WiFi internal antenna adhesive for wifi 2.4GHz wifi 5 wifi 6 wifi 6E full bands',
+        },
+        'params': {'skip_download': 'm3u8'}
+    }]
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)
+
+        video_json = self._search_json(r'"video":', webpage, 'video json', video_id)
+
+        formats = []
+        for key, url in video_json['playlistMap'].items():
+            if key == 'HLS':
+                formats.extend(self._extract_m3u8_formats(url, video_id, fatal=False))
+            elif key == 'DASH':
+                formats.extend(self._extract_mpd_formats(url, video_id, fatal=False))
+            else:
+                self.report_warning(f'Unsupported format {key}', video_id)
+
+        return {
+            'id': video_id,
+            'title': remove_end(self._html_extract_title(webpage), ' | eBay'),
+            'formats': formats
+        }
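Assuming a yt-dlp build that includes this extractor, it can be exercised from the Python API like so (URL taken from the test case above):

    import yt_dlp

    with yt_dlp.YoutubeDL() as ydl:
        info = ydl.extract_info('https://www.ebay.com/itm/194509326719', download=False)
        print(info['id'], info['title'])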
@@ -61,14 +61,43 @@ class EmbedlyIE(InfoExtractor):
         'only_matching': True,
     }]
 
+    _WEBPAGE_TESTS = [{
+        'url': 'http://www.permacultureetc.com/2022/12/comment-greffer-facilement-les-arbres-fruitiers.html',
+        'info_dict': {
+            'id': 'pfUK_ADTvgY',
+            'ext': 'mp4',
+            'title': 'Comment greffer facilement les arbres fruitiers ? (mois par mois)',
+            'description': 'md5:d3a876995e522f138aabb48e040bfb4c',
+            'view_count': int,
+            'upload_date': '20221210',
+            'comment_count': int,
+            'live_status': 'not_live',
+            'channel_id': 'UCsM4_jihNFYe4CtSkXvDR-Q',
+            'channel_follower_count': int,
+            'tags': ['permaculture', 'jardinage', 'dekarz', 'autonomie', 'greffe', 'fruitiers', 'arbres', 'jardin forêt', 'forêt comestible', 'damien'],
+            'playable_in_embed': True,
+            'uploader': 'permaculture agroécologie etc...',
+            'channel': 'permaculture agroécologie etc...',
+            'thumbnail': 'https://i.ytimg.com/vi/pfUK_ADTvgY/sddefault.jpg',
+            'duration': 1526,
+            'channel_url': 'https://www.youtube.com/channel/UCsM4_jihNFYe4CtSkXvDR-Q',
+            'age_limit': 0,
+            'uploader_id': 'permacultureetc',
+            'like_count': int,
+            'uploader_url': 'http://www.youtube.com/user/permacultureetc',
+            'categories': ['Education'],
+            'availability': 'public',
+        },
+    }]
+
     @classmethod
-    def _extract_embed_urls(cls, url, webpage):
-        # Bypass suitable check
+    def _extract_from_webpage(cls, url, webpage):
+        # Bypass "ie=cls" and suitable check
         for mobj in re.finditer(r'class=["\']embedly-card["\'][^>]href=["\'](?P<url>[^"\']+)', webpage):
-            yield mobj.group('url')
+            yield cls.url_result(mobj.group('url'))
 
         for mobj in re.finditer(r'class=["\']embedly-embed["\'][^>]src=["\'][^"\']*url=(?P<url>[^&]+)', webpage):
-            yield urllib.parse.unquote(mobj.group('url'))
+            yield cls.url_result(urllib.parse.unquote(mobj.group('url')))
 
     def _real_extract(self, url):
         qs = parse_qs(url)
@@ -240,7 +240,7 @@ def _real_extract(self, url):
 
 
 class ESPNCricInfoIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?espncricinfo\.com/video/[^#$&?/]+-(?P<id>\d+)'
+    _VALID_URL = r'https?://(?:www\.)?espncricinfo\.com/(?:cricket-)?videos?/[^#$&?/]+-(?P<id>\d+)'
     _TESTS = [{
         'url': 'https://www.espncricinfo.com/video/finch-chasing-comes-with-risks-despite-world-cup-trend-1289135',
         'info_dict': {
@@ -252,6 +252,17 @@ class ESPNCricInfoIE(InfoExtractor):
             'duration': 96,
         },
         'params': {'skip_download': True}
+    }, {
+        'url': 'https://www.espncricinfo.com/cricket-videos/daryl-mitchell-mitchell-santner-is-one-of-the-best-white-ball-spinners-india-vs-new-zealand-1356225',
+        'info_dict': {
+            'id': '1356225',
+            'ext': 'mp4',
+            'description': '"Santner has done it for a long time for New Zealand - we\'re lucky to have him"',
+            'upload_date': '20230128',
+            'title': 'Mitchell: \'Santner is one of the best white-ball spinners at the moment\'',
+            'duration': 87,
+        },
+        'params': {'skip_download': 'm3u8'},
     }]
 
     def _real_extract(self, url):
@@ -52,6 +52,7 @@ def _real_extract(self, url):
         tags_str = get_element_by_class('tags', webpage)
         tags = re.findall(r'<a[^>]+>([^<]+)', tags_str) if tags_str else None
 
+        audio_url = re.sub(r'^https?://freesound\.org(https?://)', r'\1', audio_url)
        audio_urls = [audio_url]
 
        LQ_FORMAT = '-lq.mp3'
@@ -48,7 +48,7 @@ def _get_comments(self, post_num_id, post_hash_id):
                 post_hash_id, note='Downloading comments list page %d' % page)
             if not comments_data.get('comments'):
                 break
-            for comment in traverse_obj(comments_data, (('comments', 'childComments'), ...), expected_type=dict, default=[]):
+            for comment in traverse_obj(comments_data, (('comments', 'childComments'), ...), expected_type=dict):
                 yield {
                     'id': comment['id'],
                     'text': self._parse_content_as_text(
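The dropped `default=[]` appears redundant because a branched `traverse_obj` path (one containing `...`) already yields an empty list when nothing matches, so the loop body is simply skipped; a minimal sketch of that assumption:

    from yt_dlp.utils import traverse_obj

    comments = traverse_obj({}, (('comments', 'childComments'), ...), expected_type=dict)
    print(comments)  # -> [] rather than None, so iteration is safe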
@@ -15,6 +15,7 @@
     UnsupportedError,
     determine_ext,
     dict_get,
+    extract_basic_auth,
     format_field,
     int_or_none,
     is_html,
@@ -864,20 +865,6 @@ class GenericIE(InfoExtractor):
             'thumbnail': r're:^https?://.*\.jpg$',
         },
     },
-    {
-        # JWPlayer config passed as variable
-        'url': 'http://www.txxx.com/videos/3326530/ariele/',
-        'info_dict': {
-            'id': '3326530_hq',
-            'ext': 'mp4',
-            'title': 'ARIELE | Tube Cup',
-            'uploader': 'www.txxx.com',
-            'age_limit': 18,
-        },
-        'params': {
-            'skip_download': True,
-        }
-    },
    {
        # Video.js embed, multiple formats
        'url': 'http://ortcam.com/solidworks-урок-6-настройка-чертежа_33f9b7351.html',
@@ -2386,9 +2373,8 @@ def _real_extract(self, url):
                 **smuggled_data.get('http_headers', {})
             })
         new_url = full_response.geturl()
-        if new_url == urllib.parse.urlparse(url)._replace(scheme='https').geturl():
-            url = new_url
-        elif url != new_url:
+        url = urllib.parse.urlparse(url)._replace(scheme=urllib.parse.urlparse(new_url).scheme).geturl()
+        if new_url != extract_basic_auth(url)[0]:
             self.report_following_redirect(new_url)
             if force_videoid:
                 new_url = smuggle_url(new_url, {'force_videoid': force_videoid})
@@ -2407,14 +2393,15 @@ def _real_extract(self, url):
             self.report_detected('direct video link')
             headers = smuggled_data.get('http_headers', {})
             format_id = str(m.group('format_id'))
+            ext = determine_ext(url)
             subtitles = {}
-            if format_id.endswith('mpegurl'):
+            if format_id.endswith('mpegurl') or ext == 'm3u8':
                 formats, subtitles = self._extract_m3u8_formats_and_subtitles(url, video_id, 'mp4', headers=headers)
                 info_dict.update(self._fragment_query(url))
-            elif format_id.endswith('mpd') or format_id.endswith('dash+xml'):
+            elif format_id.endswith('mpd') or format_id.endswith('dash+xml') or ext == 'mpd':
                 formats, subtitles = self._extract_mpd_formats_and_subtitles(url, video_id, headers=headers)
                 info_dict.update(self._fragment_query(url))
-            elif format_id == 'f4m':
+            elif format_id == 'f4m' or ext == 'f4m':
                 formats = self._extract_f4m_formats(url, video_id, headers=headers)
             else:
                 formats = [{
@@ -2637,11 +2624,11 @@ def _extract_embeds(self, url, webpage, *, urlh=None, info_dict={}):
 
         # Look for generic KVS player (before json-ld bc of some urls that break otherwise)
         found = self._search_regex((
-            r'<script\b[^>]+?\bsrc\s*=\s*(["\'])https?://(?:\S+?/)+kt_player\.js\?v=(?P<ver>\d+(?:\.\d+)+)\1[^>]*>',
-            r'kt_player\s*\(\s*(["\'])(?:(?!\1)[\w\W])+\1\s*,\s*(["\'])https?://(?:\S+?/)+kt_player\.swf\?v=(?P<ver>\d+(?:\.\d+)+)\2\s*,',
+            r'<script\b[^>]+?\bsrc\s*=\s*(["\'])https?://(?:(?!\1)[^?#])+/kt_player\.js\?v=(?P<ver>\d+(?:\.\d+)+)\1[^>]*>',
+            r'kt_player\s*\(\s*(["\'])(?:(?!\1)[\w\W])+\1\s*,\s*(["\'])https?://(?:(?!\2)[^?#])+/kt_player\.swf\?v=(?P<ver>\d+(?:\.\d+)+)\2\s*,',
         ), webpage, 'KVS player', group='ver', default=False)
         if found:
-            self.report_detected('KWS Player')
+            self.report_detected('KVS Player')
             if found.split('.')[0] not in ('4', '5', '6'):
                 self.report_warning(f'Untested major version ({found}) in player engine - download may fail.')
             return [self._extract_kvs(url, webpage, video_id)]
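Why the tightened KVS pattern matters, in a self-contained check (the script tag is a crafted adversarial example, not from a real site): `(?:\S+?/)+` can walk across a `?` into the query string, while `(?:(?!\1)[^?#])+/` stops at the first `?` or `#`:

    import re

    old = r'''<script\b[^>]+?\bsrc\s*=\s*(["'])https?://(?:\S+?/)+kt_player\.js\?v=(?P<ver>\d+(?:\.\d+)+)\1[^>]*>'''
    new = r'''<script\b[^>]+?\bsrc\s*=\s*(["'])https?://(?:(?!\1)[^?#])+/kt_player\.js\?v=(?P<ver>\d+(?:\.\d+)+)\1[^>]*>'''
    tag = '<script src="https://cdn.example.com/x/?u=/kt_player.js?v=5.1.0"></script>'
    assert re.search(old, tag) and not re.search(new, tag)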
@@ -10,7 +10,7 @@
 
 
 class GeniusIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?genius\.com/videos/(?P<id>[^?/#]+)'
+    _VALID_URL = r'https?://(?:www\.)?genius\.com/(?:videos|(?P<article>a))/(?P<id>[^?/#]+)'
     _TESTS = [{
         'url': 'https://genius.com/videos/Vince-staples-breaks-down-the-meaning-of-when-sparks-fly',
         'md5': '64c2ad98cfafcfda23bfa0ad0c512f4c',
@@ -41,19 +41,37 @@ class GeniusIE(InfoExtractor):
             'timestamp': 1631209167,
             'thumbnail': r're:^https?://.*\.jpg$',
         },
+    }, {
+        'url': 'https://genius.com/a/cordae-anderson-paak-break-down-the-meaning-of-two-tens',
+        'md5': 'f98a4e03b16b0a2821bd6e52fb3cc9d7',
+        'info_dict': {
+            'id': '6321509903112',
+            'ext': 'mp4',
+            'title': 'Cordae & Anderson .Paak Breaks Down The Meaning Of “Two Tens”',
+            'description': 'md5:1255f0e1161d07342ce56a8464ac339d',
+            'tags': ['song id: 5457554'],
+            'uploader_id': '4863540648001',
+            'duration': 361.813,
+            'upload_date': '20230301',
+            'timestamp': 1677703908,
+            'thumbnail': r're:^https?://.*\.jpg$',
+        },
     }]
 
     def _real_extract(self, url):
-        display_id = self._match_id(url)
+        display_id, is_article = self._match_valid_url(url).group('id', 'article')
         webpage = self._download_webpage(url, display_id)
 
         metadata = self._search_json(
-            r'<meta content="', webpage, 'metadata', display_id, transform_source=unescapeHTML)
-        video_id = traverse_obj(
-            metadata, ('video', 'provider_id'),
-            ('dfp_kv', lambda _, x: x['name'] == 'brightcove_video_id', 'values', 0), get_all=False)
+            r'<meta content="', webpage, 'metadata', display_id,
+            end_pattern=r'"\s+itemprop="page_data"', transform_source=unescapeHTML)
+        video_id = traverse_obj(metadata, (
+            (('article', 'media', ...), ('video', None)),
+            ('provider_id', ('dfp_kv', lambda _, v: v['name'] == 'brightcove_video_id', 'values', ...))),
+            get_all=False)
         if not video_id:
-            raise ExtractorError('Brightcove video id not found in webpage')
+            # Not all article pages have videos, expect the error
+            raise ExtractorError('Brightcove video ID not found in webpage', expected=bool(is_article))
 
         config = self._search_json(r'var\s*APP_CONFIG\s*=', webpage, 'config', video_id, default={})
         account_id = config.get('brightcove_account_id', '4863540648001')
@@ -68,7 +86,7 @@ def _real_extract(self, url):
 
 
 class GeniusLyricsIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?genius\.com/(?P<id>[^?/#]+)-lyrics[?/#]?'
+    _VALID_URL = r'https?://(?:www\.)?genius\.com/(?P<id>[^?/#]+)-lyrics(?:[?/#]|$)'
     _TESTS = [{
         'url': 'https://genius.com/Lil-baby-heyy-lyrics',
         'playlist_mincount': 2,
@@ -3,8 +3,8 @@
 from .common import InfoExtractor
 from ..compat import compat_parse_qs
 from ..utils import (
-    determine_ext,
     ExtractorError,
+    determine_ext,
     get_element_by_class,
     int_or_none,
     lowercase_escape,
@@ -163,15 +163,13 @@ def _real_extract(self, url):
         video_id = self._match_id(url)
         video_info = compat_parse_qs(self._download_webpage(
             'https://drive.google.com/get_video_info',
-            video_id, query={'docid': video_id}))
+            video_id, 'Downloading video webpage', query={'docid': video_id}))
 
         def get_value(key):
             return try_get(video_info, lambda x: x[key][0])
 
         reason = get_value('reason')
         title = get_value('title')
-        if not title and reason:
-            raise ExtractorError(reason, expected=True)
 
         formats = []
         fmt_stream_map = (get_value('fmt_stream_map') or '').split(',')
@@ -216,6 +214,11 @@ def request_source_file(source_url, kind):
             urlh = request_source_file(source_url, 'source')
             if urlh:
                 def add_source_format(urlh):
+                    nonlocal title
+                    if not title:
+                        title = self._search_regex(
+                            r'\bfilename="([^"]+)"', urlh.headers.get('Content-Disposition'),
+                            'title', default=None)
                     formats.append({
                         # Use redirect URLs as download URLs in order to calculate
                         # correct cookies in _calc_cookies.
@@ -251,7 +254,10 @@ def add_source_format(urlh):
                    or 'unable to extract confirmation code')
 
         if not formats and reason:
-            self.raise_no_formats(reason, expected=True)
+            if title:
+                self.raise_no_formats(reason, expected=True)
+            else:
+                raise ExtractorError(reason, expected=True)
 
         hl = get_value('hl')
         subtitles_id = None
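The new title fallback in `add_source_format` just reads the filename out of the Content-Disposition header; as a standalone sketch (the header value is illustrative):

    import re

    content_disposition = 'attachment; filename="My Clip.mp4"'
    title = re.search(r'\bfilename="([^"]+)"', content_disposition).group(1)
    print(title)  # -> My Clip.mp4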
@@ -76,11 +76,11 @@ def _real_extract(self, url):
         }
 
         api = self._download_json(
-            f'https://api.viervijfzes.be/content/{video_id}',
-            video_id, headers={'Authorization': self._id_token})
+            f'https://api.goplay.be/web/v1/videos/long-form/{video_id}',
+            video_id, headers={'Authorization': 'Bearer %s' % self._id_token})
 
         formats, subs = self._extract_m3u8_formats_and_subtitles(
-            api['video']['S'], video_id, ext='mp4', m3u8_id='HLS')
+            api['manifestUrls']['hls'], video_id, ext='mp4', m3u8_id='HLS')
 
         info_dict.update({
             'id': video_id,
@@ -1,5 +1,3 @@
-import re
-
 from .common import InfoExtractor
 from ..utils import (
     ExtractorError,
@@ -39,15 +37,28 @@ def _perform_login(self, username, password):
         form = self._search_regex(
             r'(?s)<form[^>]+action="/account/login"[^>]*>(.+?)</form>',
             webpage, 'login form', default=None)
-        if not form:  # logged in
+        if not form:
             return
         data = self._hidden_inputs(form)
         data.update({
             'Email': username,
             'Password': password,
         })
-        self._download_webpage(
+        login_webpage = self._download_webpage(
             self._LOGIN_URL, None, 'Logging in', data=urlencode_postdata(data))
+        # If the user has multiple profiles on their account, select one. For now pick the first profile.
+        profile_id = self._search_regex(
+            r'<button [^>]+?data-profile-id="(\w+)"', login_webpage, 'profile id', default=None)
+        if profile_id is None:
+            return  # If only one profile, Hidive auto-selects it
+        self._request_webpage(
+            'https://www.hidive.com/ajax/chooseprofile', None,
+            data=urlencode_postdata({
+                'profileId': profile_id,
+                'hash': self._search_regex(
+                    r'\<button [^>]+?data-hash="(\w+)"', login_webpage, 'profile id hash'),
+                'returnUrl': '/dashboard'
+            }))
 
     def _call_api(self, video_id, title, key, data={}, **kwargs):
         data = {
@@ -60,26 +71,6 @@ def _call_api(self, video_id, title, key, data={}, **kwargs):
             'https://www.hidive.com/play/settings', video_id,
             data=urlencode_postdata(data), **kwargs) or {}
 
-    def _extract_subtitles_from_rendition(self, rendition, subtitles, parsed_urls):
-        for cc_file in rendition.get('ccFiles', []):
-            cc_url = url_or_none(try_get(cc_file, lambda x: x[2]))
-            # name is used since we cant distinguish subs with same language code
-            cc_lang = try_get(cc_file, (lambda x: x[1].replace(' ', '-').lower(), lambda x: x[0]), str)
-            if cc_url not in parsed_urls and cc_lang:
-                parsed_urls.add(cc_url)
-                subtitles.setdefault(cc_lang, []).append({'url': cc_url})
-
-    def _get_subtitles(self, url, video_id, title, key, parsed_urls):
-        webpage = self._download_webpage(url, video_id, fatal=False) or ''
-        subtitles = {}
-        for caption in set(re.findall(r'data-captions=\"([^\"]+)\"', webpage)):
-            renditions = self._call_api(
-                video_id, title, key, {'Captions': caption}, fatal=False,
-                note=f'Downloading {caption} subtitle information').get('renditions') or {}
-            for rendition_id, rendition in renditions.items():
-                self._extract_subtitles_from_rendition(rendition, subtitles, parsed_urls)
-        return subtitles
-
     def _real_extract(self, url):
         video_id, title, key = self._match_valid_url(url).group('id', 'title', 'key')
         settings = self._call_api(video_id, title, key)
@@ -104,10 +95,20 @@ def _real_extract(self, url):
                 f['format_note'] = f'{version}, {extra}'
             formats.extend(frmt)
 
+        subtitles = {}
+        for rendition_id, rendition in settings['renditions'].items():
+            audio, version, extra = rendition_id.split('_')
+            for cc_file in rendition.get('ccFiles') or []:
+                cc_url = url_or_none(try_get(cc_file, lambda x: x[2]))
+                cc_lang = try_get(cc_file, (lambda x: x[1].replace(' ', '-').lower(), lambda x: x[0]), str)
+                if cc_url not in parsed_urls and cc_lang:
+                    parsed_urls.add(cc_url)
+                    subtitles.setdefault(cc_lang, []).append({'url': cc_url})
+
         return {
             'id': video_id,
             'title': video_id,
-            'subtitles': self.extract_subtitles(url, video_id, title, key, parsed_urls),
+            'subtitles': subtitles,
             'formats': formats,
             'series': title,
             'season_number': int_or_none(
@@ -1,5 +1,6 @@
 import hashlib
 import random
+import re
 
 from ..compat import compat_urlparse, compat_b64decode
@@ -37,7 +38,7 @@ class HuyaLiveIE(InfoExtractor):
     }]
 
     _RESOLUTION = {
-        '蓝光4M': {
+        '蓝光': {
             'width': 1920,
             'height': 1080,
         },
@@ -76,11 +77,15 @@ def _real_extract(self, url):
         if re_secret:
             fm, ss = self.encrypt(params, stream_info, stream_name)
         for si in stream_data.get('vMultiStreamInfo'):
+            display_name, bitrate = re.fullmatch(
+                r'(.+?)(?:(\d+)M)?', si.get('sDisplayName')).groups()
             rate = si.get('iBitRate')
             if rate:
                 params['ratio'] = rate
             else:
                 params.pop('ratio', None)
+                if bitrate:
+                    rate = int(bitrate) * 1000
             if re_secret:
                 params['wsSecret'] = hashlib.md5(
                     '_'.join([fm, params['u'], stream_name, ss, params['wsTime']]))
@@ -90,7 +95,7 @@ def _real_extract(self, url):
                 'tbr': rate,
                 'url': update_url_query(f'{stream_url}/{stream_name}.{stream_info.get("sFlvUrlSuffix")}',
                                         query=params),
-                **self._RESOLUTION.get(si.get('sDisplayName'), {}),
+                **self._RESOLUTION.get(display_name, {}),
             })
 
         return {
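What the new `re.fullmatch` buys Huya: display names like '蓝光4M' carry an optional bitrate suffix that previously broke the `_RESOLUTION` lookup; a quick check of the split (names taken from the diff):

    import re

    pattern = r'(.+?)(?:(\d+)M)?'
    print(re.fullmatch(pattern, '蓝光4M').groups())  # -> ('蓝光', '4')
    print(re.fullmatch(pattern, '超清').groups())  # -> ('超清', None)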
yt_dlp/extractor/hypergryph.py (new file, 32 lines)
@@ -0,0 +1,32 @@
+from .common import InfoExtractor
+from ..utils import js_to_json, traverse_obj
+
+
+class MonsterSirenHypergryphMusicIE(InfoExtractor):
+    _VALID_URL = r'https?://monster-siren\.hypergryph\.com/music/(?P<id>\d+)'
+    _TESTS = [{
+        'url': 'https://monster-siren.hypergryph.com/music/514562',
+        'info_dict': {
+            'id': '514562',
+            'ext': 'wav',
+            'artist': ['塞壬唱片-MSR'],
+            'album': 'Flame Shadow',
+            'title': 'Flame Shadow',
+        }
+    }]
+
+    def _real_extract(self, url):
+        audio_id = self._match_id(url)
+        webpage = self._download_webpage(url, audio_id)
+        json_data = self._search_json(
+            r'window\.g_initialProps\s*=', webpage, 'data', audio_id, transform_source=js_to_json)
+
+        return {
+            'id': audio_id,
+            'title': traverse_obj(json_data, ('player', 'songDetail', 'name')),
+            'url': traverse_obj(json_data, ('player', 'songDetail', 'sourceUrl')),
+            'ext': 'wav',
+            'vcodec': 'none',
+            'artist': traverse_obj(json_data, ('player', 'songDetail', 'artists')),
+            'album': traverse_obj(json_data, ('musicPlay', 'albumDetail', 'name'))
+        }
@@ -1,17 +1,20 @@
 import re
+import urllib.error
 
 from .common import InfoExtractor
-from ..compat import (
-    compat_parse_qs,
-    compat_urllib_parse_urlparse,
-)
+from ..compat import compat_parse_qs
 from ..utils import (
-    HEADRequest,
+    ExtractorError,
     determine_ext,
+    error_to_compat_str,
+    extract_attributes,
     int_or_none,
+    merge_dicts,
     parse_iso8601,
     strip_or_none,
-    try_get,
+    traverse_obj,
+    url_or_none,
+    urljoin,
 )
@@ -20,14 +23,90 @@ def _call_api(self, slug):
         return self._download_json(
             'http://apis.ign.com/{0}/v3/{0}s/slug/{1}'.format(self._PAGE_TYPE, slug), slug)
 
+    def _checked_call_api(self, slug):
+        try:
+            return self._call_api(slug)
+        except ExtractorError as e:
+            if isinstance(e.cause, urllib.error.HTTPError) and e.cause.code == 404:
+                e.cause.args = e.cause.args or [
+                    e.cause.geturl(), e.cause.getcode(), e.cause.reason]
+                raise ExtractorError(
+                    'Content not found: expired?', cause=e.cause,
+                    expected=True)
+            raise
+
+    def _extract_video_info(self, video, fatal=True):
+        video_id = video['videoId']
+
+        formats = []
+        refs = traverse_obj(video, 'refs', expected_type=dict) or {}
+
+        m3u8_url = url_or_none(refs.get('m3uUrl'))
+        if m3u8_url:
+            formats.extend(self._extract_m3u8_formats(
+                m3u8_url, video_id, 'mp4', 'm3u8_native',
+                m3u8_id='hls', fatal=False))
+
+        f4m_url = url_or_none(refs.get('f4mUrl'))
+        if f4m_url:
+            formats.extend(self._extract_f4m_formats(
+                f4m_url, video_id, f4m_id='hds', fatal=False))
+
+        for asset in (video.get('assets') or []):
+            asset_url = url_or_none(asset.get('url'))
+            if not asset_url:
+                continue
+            formats.append({
+                'url': asset_url,
+                'tbr': int_or_none(asset.get('bitrate'), 1000),
+                'fps': int_or_none(asset.get('frame_rate')),
+                'height': int_or_none(asset.get('height')),
+                'width': int_or_none(asset.get('width')),
+            })
+
+        mezzanine_url = traverse_obj(
+            video, ('system', 'mezzanineUrl'), expected_type=url_or_none)
+        if mezzanine_url:
+            formats.append({
+                'ext': determine_ext(mezzanine_url, 'mp4'),
+                'format_id': 'mezzanine',
+                'quality': 1,
+                'url': mezzanine_url,
+            })
+
+        thumbnails = traverse_obj(
+            video, ('thumbnails', ..., {'url': 'url'}), expected_type=url_or_none)
+        tags = traverse_obj(
+            video, ('tags', ..., 'displayName'),
+            expected_type=lambda x: x.strip() or None)
+
+        metadata = traverse_obj(video, 'metadata', expected_type=dict) or {}
+        title = traverse_obj(
+            metadata, 'longTitle', 'title', 'name',
+            expected_type=lambda x: x.strip() or None)
+
+        return {
+            'id': video_id,
+            'title': title,
+            'description': strip_or_none(metadata.get('description')),
+            'timestamp': parse_iso8601(metadata.get('publishDate')),
+            'duration': int_or_none(metadata.get('duration')),
+            'thumbnails': thumbnails,
+            'formats': formats,
+            'tags': tags,
+        }
+
 
 class IGNIE(IGNBaseIE):
     """
     Extractor for some of the IGN sites, like www.ign.com, es.ign.com de.ign.com.
     Some videos of it.ign.com are also supported
     """
-    _VALID_URL = r'https?://(?:.+?\.ign|www\.pcmag)\.com/videos/(?:\d{4}/\d{2}/\d{2}/)?(?P<id>[^/?&#]+)'
+    _VIDEO_PATH_RE = r'/(?:\d{4}/\d{2}/\d{2}/)?(?P<id>.+?)'
+    _PLAYLIST_PATH_RE = r'(?:/?\?(?P<filt>[^&#]+))?'
+    _VALID_URL = (
+        r'https?://(?:.+?\.ign|www\.pcmag)\.com/videos(?:%s)'
+        % '|'.join((_VIDEO_PATH_RE + r'(?:[/?&#]|$)', _PLAYLIST_PATH_RE)))
     IE_NAME = 'ign.com'
     _PAGE_TYPE = 'video'
@@ -42,7 +121,13 @@ class IGNIE(IGNBaseIE):
             'timestamp': 1370440800,
             'upload_date': '20130605',
             'tags': 'count:9',
-        }
+            'display_id': 'the-last-of-us-review',
+            'thumbnail': 'https://assets1.ignimgs.com/vid/thumbnails/user/2014/03/26/lastofusreviewmimig2.jpg',
+            'duration': 440,
+        },
+        'params': {
+            'nocheckcertificate': True,
+        },
     }, {
         'url': 'http://www.pcmag.com/videos/2015/01/06/010615-whats-new-now-is-gogo-snooping-on-your-data',
         'md5': 'f1581a6fe8c5121be5b807684aeac3f6',
@@ -54,84 +139,48 @@ class IGNIE(IGNBaseIE):
             'timestamp': 1420571160,
             'upload_date': '20150106',
             'tags': 'count:4',
-        }
+        },
+        'skip': '404 Not Found',
     }, {
         'url': 'https://www.ign.com/videos/is-a-resident-evil-4-remake-on-the-way-ign-daily-fix',
         'only_matching': True,
     }]
 
+    @classmethod
+    def _extract_embed_urls(cls, url, webpage):
+        grids = re.findall(
+            r'''(?s)<section\b[^>]+\bclass\s*=\s*['"](?:[\w-]+\s+)*?content-feed-grid(?!\B|-)[^>]+>(.+?)</section[^>]*>''',
+            webpage)
+        return filter(None,
+                      (urljoin(url, m.group('path')) for m in re.finditer(
+                          r'''<a\b[^>]+\bhref\s*=\s*('|")(?P<path>/videos%s)\1'''
+                          % cls._VIDEO_PATH_RE, grids[0] if grids else '')))
+
     def _real_extract(self, url):
-        display_id = self._match_id(url)
-        video = self._call_api(display_id)
-        video_id = video['videoId']
-        metadata = video['metadata']
-        title = metadata.get('longTitle') or metadata.get('title') or metadata['name']
-
-        formats = []
-        refs = video.get('refs') or {}
-
-        m3u8_url = refs.get('m3uUrl')
-        if m3u8_url:
-            formats.extend(self._extract_m3u8_formats(
-                m3u8_url, video_id, 'mp4', 'm3u8_native',
-                m3u8_id='hls', fatal=False))
-
-        f4m_url = refs.get('f4mUrl')
-        if f4m_url:
-            formats.extend(self._extract_f4m_formats(
-                f4m_url, video_id, f4m_id='hds', fatal=False))
-
-        for asset in (video.get('assets') or []):
-            asset_url = asset.get('url')
-            if not asset_url:
-                continue
-            formats.append({
-                'url': asset_url,
-                'tbr': int_or_none(asset.get('bitrate'), 1000),
-                'fps': int_or_none(asset.get('frame_rate')),
-                'height': int_or_none(asset.get('height')),
-                'width': int_or_none(asset.get('width')),
-            })
-
-        mezzanine_url = try_get(video, lambda x: x['system']['mezzanineUrl'])
-        if mezzanine_url:
-            formats.append({
-                'ext': determine_ext(mezzanine_url, 'mp4'),
-                'format_id': 'mezzanine',
-                'quality': 1,
-                'url': mezzanine_url,
-            })
-
-        thumbnails = []
-        for thumbnail in (video.get('thumbnails') or []):
-            thumbnail_url = thumbnail.get('url')
-            if not thumbnail_url:
-                continue
-            thumbnails.append({
-                'url': thumbnail_url,
-            })
-
-        tags = []
-        for tag in (video.get('tags') or []):
-            display_name = tag.get('displayName')
-            if not display_name:
-                continue
-            tags.append(display_name)
-
-        return {
-            'id': video_id,
-            'title': title,
-            'description': strip_or_none(metadata.get('description')),
-            'timestamp': parse_iso8601(metadata.get('publishDate')),
-            'duration': int_or_none(metadata.get('duration')),
-            'display_id': display_id,
-            'thumbnails': thumbnails,
-            'formats': formats,
-            'tags': tags,
-        }
+        display_id, filt = self._match_valid_url(url).group('id', 'filt')
+        if display_id:
+            return self._extract_video(url, display_id)
+        return self._extract_playlist(url, filt or 'all')
+
+    def _extract_playlist(self, url, display_id):
+        webpage = self._download_webpage(url, display_id)
+
+        return self.playlist_result(
+            (self.url_result(u, self.ie_key())
+             for u in self._extract_embed_urls(url, webpage)),
+            playlist_id=display_id)
+
+    def _extract_video(self, url, display_id):
+        video = self._checked_call_api(display_id)
+
+        info = self._extract_video_info(video)
+
+        return merge_dicts({
+            'display_id': display_id,
+        }, info)
 
 
-class IGNVideoIE(InfoExtractor):
+class IGNVideoIE(IGNBaseIE):
     _VALID_URL = r'https?://.+?\.ign\.com/(?:[a-z]{2}/)?[^/]+/(?P<id>\d+)/(?:video|trailer)/'
     _TESTS = [{
         'url': 'http://me.ign.com/en/videos/112203/video/how-hitman-aims-to-be-different-than-every-other-s',
@ -143,7 +192,16 @@ class IGNVideoIE(InfoExtractor):
|
||||||
'description': 'Taking out assassination targets in Hitman has never been more stylish.',
|
'description': 'Taking out assassination targets in Hitman has never been more stylish.',
|
||||||
'timestamp': 1444665600,
|
'timestamp': 1444665600,
|
||||||
'upload_date': '20151012',
|
'upload_date': '20151012',
|
||||||
}
|
'display_id': '112203',
|
||||||
|
'thumbnail': 'https://sm.ign.com/ign_me/video/h/how-hitman/how-hitman-aims-to-be-different-than-every-other-s_8z14.jpg',
|
||||||
|
'duration': 298,
|
||||||
|
'tags': 'count:13',
|
||||||
|
'display_id': '112203',
|
||||||
|
'thumbnail': 'https://sm.ign.com/ign_me/video/h/how-hitman/how-hitman-aims-to-be-different-than-every-other-s_8z14.jpg',
|
||||||
|
'duration': 298,
|
||||||
|
'tags': 'count:13',
|
||||||
|
},
|
||||||
|
'expected_warnings': ['HTTP Error 400: Bad Request'],
|
||||||
}, {
|
}, {
|
||||||
'url': 'http://me.ign.com/ar/angry-birds-2/106533/video/lrd-ldyy-lwl-lfylm-angry-birds',
|
'url': 'http://me.ign.com/ar/angry-birds-2/106533/video/lrd-ldyy-lwl-lfylm-angry-birds',
|
||||||
'only_matching': True,
|
'only_matching': True,
|
||||||
|
@ -163,22 +221,38 @@ class IGNVideoIE(InfoExtractor):
|
||||||
|
|
||||||
def _real_extract(self, url):
|
def _real_extract(self, url):
|
||||||
video_id = self._match_id(url)
|
video_id = self._match_id(url)
|
||||||
req = HEADRequest(url.rsplit('/', 1)[0] + '/embed')
|
parsed_url = urllib.parse.urlparse(url)
|
||||||
url = self._request_webpage(req, video_id).geturl()
|
embed_url = urllib.parse.urlunparse(
|
||||||
|
parsed_url._replace(path=parsed_url.path.rsplit('/', 1)[0] + '/embed'))
|
||||||
|
|
||||||
|
webpage, urlh = self._download_webpage_handle(embed_url, video_id)
|
||||||
|
new_url = urlh.geturl()
|
||||||
ign_url = compat_parse_qs(
|
ign_url = compat_parse_qs(
|
||||||
compat_urllib_parse_urlparse(url).query).get('url', [None])[0]
|
urllib.parse.urlparse(new_url).query).get('url', [None])[-1]
|
||||||
if ign_url:
|
if ign_url:
|
||||||
return self.url_result(ign_url, IGNIE.ie_key())
|
return self.url_result(ign_url, IGNIE.ie_key())
|
||||||
return self.url_result(url)
|
video = self._search_regex(r'(<div\b[^>]+\bdata-video-id\s*=\s*[^>]+>)', webpage, 'video element', fatal=False)
|
||||||
|
if not video:
|
||||||
|
if new_url == url:
|
||||||
|
raise ExtractorError('Redirect loop: ' + url)
|
||||||
|
return self.url_result(new_url)
|
||||||
|
video = extract_attributes(video)
|
||||||
|
video_data = video.get('data-settings') or '{}'
|
||||||
|
video_data = self._parse_json(video_data, video_id)['video']
|
||||||
|
info = self._extract_video_info(video_data)
|
||||||
|
|
||||||
|
return merge_dicts({
|
||||||
|
'display_id': video_id,
|
||||||
|
}, info)
|
||||||
|
|
||||||
|
|
||||||
class IGNArticleIE(IGNBaseIE):
|
class IGNArticleIE(IGNBaseIE):
|
||||||
_VALID_URL = r'https?://.+?\.ign\.com/(?:articles(?:/\d{4}/\d{2}/\d{2})?|(?:[a-z]{2}/)?feature/\d+)/(?P<id>[^/?&#]+)'
|
_VALID_URL = r'https?://.+?\.ign\.com/(?:articles(?:/\d{4}/\d{2}/\d{2})?|(?:[a-z]{2}/)?(?:[\w-]+/)*?feature/\d+)/(?P<id>[^/?&#]+)'
|
||||||
_PAGE_TYPE = 'article'
|
_PAGE_TYPE = 'article'
|
||||||
_TESTS = [{
|
_TESTS = [{
|
||||||
'url': 'http://me.ign.com/en/feature/15775/100-little-things-in-gta-5-that-will-blow-your-mind',
|
'url': 'http://me.ign.com/en/feature/15775/100-little-things-in-gta-5-that-will-blow-your-mind',
|
||||||
'info_dict': {
|
'info_dict': {
|
||||||
'id': '524497489e4e8ff5848ece34',
|
'id': '72113',
|
||||||
'title': '100 Little Things in GTA 5 That Will Blow Your Mind',
|
'title': '100 Little Things in GTA 5 That Will Blow Your Mind',
|
||||||
},
|
},
|
||||||
'playlist': [
|
'playlist': [
|
||||||
|
@ -186,34 +260,43 @@ class IGNArticleIE(IGNBaseIE):
|
||||||
'info_dict': {
|
'info_dict': {
|
||||||
'id': '5ebbd138523268b93c9141af17bec937',
|
'id': '5ebbd138523268b93c9141af17bec937',
|
||||||
'ext': 'mp4',
|
'ext': 'mp4',
|
||||||
'title': 'GTA 5 Video Review',
|
'title': 'Grand Theft Auto V Video Review',
|
||||||
'description': 'Rockstar drops the mic on this generation of games. Watch our review of the masterly Grand Theft Auto V.',
|
'description': 'Rockstar drops the mic on this generation of games. Watch our review of the masterly Grand Theft Auto V.',
|
||||||
'timestamp': 1379339880,
|
'timestamp': 1379339880,
|
||||||
'upload_date': '20130916',
|
'upload_date': '20130916',
|
||||||
|
'tags': 'count:12',
|
||||||
|
'thumbnail': 'https://assets1.ignimgs.com/thumbs/userUploaded/2021/8/16/gta-v-heistsjpg-e94705-1629138553533.jpeg',
|
||||||
|
'display_id': 'grand-theft-auto-v-video-review',
|
||||||
|
'duration': 501,
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
'info_dict': {
|
'info_dict': {
|
||||||
'id': '638672ee848ae4ff108df2a296418ee2',
|
'id': '638672ee848ae4ff108df2a296418ee2',
|
||||||
'ext': 'mp4',
|
'ext': 'mp4',
|
||||||
'title': '26 Twisted Moments from GTA 5 in Slow Motion',
|
'title': 'GTA 5 In Slow Motion',
|
||||||
'description': 'The twisted beauty of GTA 5 in stunning slow motion.',
|
'description': 'The twisted beauty of GTA 5 in stunning slow motion.',
|
||||||
'timestamp': 1386878820,
|
'timestamp': 1386878820,
|
||||||
'upload_date': '20131212',
|
'upload_date': '20131212',
|
||||||
|
'duration': 202,
|
||||||
|
'tags': 'count:25',
|
||||||
|
'display_id': 'gta-5-in-slow-motion',
|
||||||
|
'thumbnail': 'https://assets1.ignimgs.com/vid/thumbnails/user/2013/11/03/GTA-SLO-MO-1.jpg',
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
],
|
],
|
||||||
'params': {
|
'params': {
|
||||||
'playlist_items': '2-3',
|
|
||||||
'skip_download': True,
|
'skip_download': True,
|
||||||
},
|
},
|
||||||
|
'expected_warnings': ['Backend fetch failed'],
|
||||||
}, {
|
}, {
|
||||||
'url': 'http://www.ign.com/articles/2014/08/15/rewind-theater-wild-trailer-gamescom-2014?watch',
|
'url': 'http://www.ign.com/articles/2014/08/15/rewind-theater-wild-trailer-gamescom-2014?watch',
|
||||||
'info_dict': {
|
'info_dict': {
|
||||||
'id': '53ee806780a81ec46e0790f8',
|
'id': '53ee806780a81ec46e0790f8',
|
||||||
'title': 'Rewind Theater - Wild Trailer Gamescom 2014',
|
'title': 'Rewind Theater - Wild Trailer Gamescom 2014',
|
||||||
},
|
},
|
||||||
'playlist_count': 2,
|
'playlist_count': 1,
|
||||||
|
'expected_warnings': ['Backend fetch failed'],
|
||||||
}, {
|
}, {
|
||||||
# videoId pattern
|
# videoId pattern
|
||||||
'url': 'http://www.ign.com/articles/2017/06/08/new-ducktales-short-donalds-birthday-doesnt-go-as-planned',
|
'url': 'http://www.ign.com/articles/2017/06/08/new-ducktales-short-donalds-birthday-doesnt-go-as-planned',
|
||||||
|
@ -236,18 +319,84 @@ class IGNArticleIE(IGNBaseIE):
|
||||||
'only_matching': True,
|
'only_matching': True,
|
||||||
}]
|
}]
|
||||||
|
|
||||||
|
def _checked_call_api(self, slug):
|
||||||
|
try:
|
||||||
|
return self._call_api(slug)
|
||||||
|
except ExtractorError as e:
|
||||||
|
if isinstance(e.cause, urllib.error.HTTPError):
|
||||||
|
e.cause.args = e.cause.args or [
|
||||||
|
e.cause.geturl(), e.cause.getcode(), e.cause.reason]
|
||||||
|
if e.cause.code == 404:
|
||||||
|
raise ExtractorError(
|
||||||
|
'Content not found: expired?', cause=e.cause,
|
||||||
|
expected=True)
|
||||||
|
elif e.cause.code == 503:
|
||||||
|
self.report_warning(error_to_compat_str(e.cause))
|
||||||
|
return
|
||||||
|
raise
|
||||||
|
|
||||||
def _real_extract(self, url):
|
def _real_extract(self, url):
|
||||||
display_id = self._match_id(url)
|
display_id = self._match_id(url)
|
||||||
article = self._call_api(display_id)
|
article = self._checked_call_api(display_id)
|
||||||
|
|
||||||
def entries():
|
if article:
|
||||||
media_url = try_get(article, lambda x: x['mediaRelations'][0]['media']['metadata']['url'])
|
# obsolete ?
|
||||||
if media_url:
|
def entries():
|
||||||
yield self.url_result(media_url, IGNIE.ie_key())
|
media_url = traverse_obj(
|
||||||
for content in (article.get('content') or []):
|
article, ('mediaRelations', 0, 'media', 'metadata', 'url'),
|
||||||
for video_url in re.findall(r'(?:\[(?:ignvideo\s+url|youtube\s+clip_id)|<iframe[^>]+src)="([^"]+)"', content):
|
expected_type=url_or_none)
|
||||||
yield self.url_result(video_url)
|
if media_url:
|
||||||
|
yield self.url_result(media_url, IGNIE.ie_key())
|
||||||
|
for content in (article.get('content') or []):
|
||||||
|
for video_url in re.findall(r'(?:\[(?:ignvideo\s+url|youtube\s+clip_id)|<iframe[^>]+src)="([^"]+)"', content):
|
||||||
|
if url_or_none(video_url):
|
||||||
|
yield self.url_result(video_url)
|
||||||
|
|
||||||
|
return self.playlist_result(
|
||||||
|
entries(), article.get('articleId'),
|
||||||
|
traverse_obj(
|
||||||
|
article, ('metadata', 'headline'),
|
||||||
|
expected_type=lambda x: x.strip() or None))
|
||||||
|
|
||||||
|
webpage = self._download_webpage(url, display_id)
|
||||||
|
|
||||||
|
playlist_id = self._html_search_meta('dable:item_id', webpage, default=None)
|
||||||
|
if playlist_id:
|
||||||
|
|
||||||
|
def entries():
|
||||||
|
for m in re.finditer(
|
||||||
|
r'''(?s)<object\b[^>]+\bclass\s*=\s*("|')ign-videoplayer\1[^>]*>(?P<params>.+?)</object''',
|
||||||
|
webpage):
|
||||||
|
flashvars = self._search_regex(
|
||||||
|
r'''(<param\b[^>]+\bname\s*=\s*("|')flashvars\2[^>]*>)''',
|
||||||
|
m.group('params'), 'flashvars', default='')
|
||||||
|
flashvars = compat_parse_qs(extract_attributes(flashvars).get('value') or '')
|
||||||
|
v_url = url_or_none((flashvars.get('url') or [None])[-1])
|
||||||
|
if v_url:
|
||||||
|
yield self.url_result(v_url)
|
||||||
|
else:
|
||||||
|
playlist_id = self._search_regex(
|
||||||
|
r'''\bdata-post-id\s*=\s*("|')(?P<id>[\da-f]+)\1''',
|
||||||
|
webpage, 'id', group='id', default=None)
|
||||||
|
|
||||||
|
nextjs_data = self._search_nextjs_data(webpage, display_id)
|
||||||
|
|
||||||
|
def entries():
|
||||||
|
for player in traverse_obj(
|
||||||
|
nextjs_data,
|
||||||
|
('props', 'apolloState', 'ROOT_QUERY', lambda k, _: k.startswith('videoPlayerProps('), '__ref')):
|
||||||
|
# skip promo links (which may not always be served, eg GH CI servers)
|
||||||
|
if traverse_obj(nextjs_data,
|
||||||
|
('props', 'apolloState', player.replace('PlayerProps', 'ModernContent')),
|
||||||
|
expected_type=dict):
|
||||||
|
continue
|
||||||
|
video = traverse_obj(nextjs_data, ('props', 'apolloState', player), expected_type=dict) or {}
|
||||||
|
info = self._extract_video_info(video, fatal=False)
|
||||||
|
if info:
|
||||||
|
yield merge_dicts({
|
||||||
|
'display_id': display_id,
|
||||||
|
}, info)
|
||||||
|
|
||||||
return self.playlist_result(
|
return self.playlist_result(
|
||||||
entries(), article.get('articleId'),
|
entries(), playlist_id or display_id,
|
||||||
strip_or_none(try_get(article, lambda x: x['metadata']['headline'])))
|
re.sub(r'\s+-\s+IGN\s*$', '', self._og_search_title(webpage, default='')) or None)
|
||||||
|
|
|
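Note: the embed-URL rewrite in IGNVideoIE above only swaps the final path segment for 'embed' while preserving scheme, host and query. A minimal stand-alone sketch (stdlib only; the input URL is an illustrative example, not a tested one):

import urllib.parse

def to_embed_url(url):
    parsed = urllib.parse.urlparse(url)
    # Drop the last path component, then append '/embed'
    return urllib.parse.urlunparse(
        parsed._replace(path=parsed.path.rsplit('/', 1)[0] + '/embed'))

print(to_embed_url('http://me.ign.com/en/videos/112203/video/how-hitman-aims'))
# -> http://me.ign.com/en/videos/112203/video/embed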
yt_dlp/extractor/iprima.py
@@ -7,7 +7,8 @@
     js_to_json,
     urlencode_postdata,
     ExtractorError,
-    parse_qs
+    parse_qs,
+    traverse_obj
 )


@@ -15,8 +16,7 @@ class IPrimaIE(InfoExtractor):
     _VALID_URL = r'https?://(?!cnn)(?:[^/]+)\.iprima\.cz/(?:[^/]+/)*(?P<id>[^/?#&]+)'
     _GEO_BYPASS = False
     _NETRC_MACHINE = 'iprima'
-    _LOGIN_URL = 'https://auth.iprima.cz/oauth2/login'
-    _TOKEN_URL = 'https://auth.iprima.cz/oauth2/token'
+    _AUTH_ROOT = 'https://auth.iprima.cz'
     access_token = None

     _TESTS = [{
@@ -67,7 +67,7 @@ def _perform_login(self, username, password):
             return

         login_page = self._download_webpage(
-            self._LOGIN_URL, None, note='Downloading login page',
+            f'{self._AUTH_ROOT}/oauth2/login', None, note='Downloading login page',
             errnote='Downloading login page failed')

         login_form = self._hidden_inputs(login_page)
@@ -76,11 +76,20 @@ def _perform_login(self, username, password):
             '_email': username,
             '_password': password})

-        _, login_handle = self._download_webpage_handle(
-            self._LOGIN_URL, None, data=urlencode_postdata(login_form),
+        profile_select_html, login_handle = self._download_webpage_handle(
+            f'{self._AUTH_ROOT}/oauth2/login', None, data=urlencode_postdata(login_form),
             note='Logging in')

-        code = parse_qs(login_handle.geturl()).get('code')[0]
+        # a profile may need to be selected first, even when there is only a single one
+        if '/profile-select' in login_handle.geturl():
+            profile_id = self._search_regex(
+                r'data-identifier\s*=\s*["\']?(\w+)', profile_select_html, 'profile id')
+
+            login_handle = self._request_webpage(
+                f'{self._AUTH_ROOT}/user/profile-select-perform/{profile_id}', None,
+                query={'continueUrl': '/user/login?redirect_uri=/user/'}, note='Selecting profile')
+
+        code = traverse_obj(login_handle.geturl(), ({parse_qs}, 'code', 0))
         if not code:
             raise ExtractorError('Login failed', expected=True)

@@ -89,10 +98,10 @@ def _perform_login(self, username, password):
             'client_id': 'prima_sso',
             'grant_type': 'authorization_code',
             'code': code,
-            'redirect_uri': 'https://auth.iprima.cz/sso/auth-check'}
+            'redirect_uri': f'{self._AUTH_ROOT}/sso/auth-check'}

         token_data = self._download_json(
-            self._TOKEN_URL, None,
+            f'{self._AUTH_ROOT}/oauth2/token', None,
             note='Downloading token', errnote='Downloading token failed',
             data=urlencode_postdata(token_request_data))

@@ -115,14 +124,22 @@ def _real_extract(self, url):

         webpage = self._download_webpage(url, video_id)

-        title = self._html_search_meta(
+        title = self._html_extract_title(webpage) or self._html_search_meta(
             ['og:title', 'twitter:title'],
             webpage, 'title', default=None)

         video_id = self._search_regex((
             r'productId\s*=\s*([\'"])(?P<id>p\d+)\1',
-            r'pproduct_id\s*=\s*([\'"])(?P<id>p\d+)\1'),
-            webpage, 'real id', group='id')
+            r'pproduct_id\s*=\s*([\'"])(?P<id>p\d+)\1',
+        ), webpage, 'real id', group='id', default=None)
+
+        if not video_id:
+            nuxt_data = self._search_nuxt_data(webpage, video_id, traverse='data')
+            video_id = traverse_obj(
+                nuxt_data, (..., 'content', 'additionals', 'videoPlayId', {str}), get_all=False)
+
+        if not video_id:
+            self.raise_no_formats('Unable to extract video ID from webpage')

         metadata = self._download_json(
             f'https://api.play-backend.iprima.cz/api/v1//products/id-{video_id}/play',
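Note: the login flow above ends by pulling the OAuth authorization code out of the redirect URL. What that final parse_qs step recovers, shown with the stdlib on a hypothetical redirect URL (the real code uses yt-dlp's parse_qs helper inside traverse_obj):

from urllib.parse import parse_qs, urlparse

redirect = 'https://auth.iprima.cz/sso/auth-check?code=abc123&state=xyz'  # example values
code = parse_qs(urlparse(redirect).query).get('code', [None])[0]
print(code)  # abc123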
yt_dlp/extractor/iq.py
@@ -440,12 +440,14 @@ class IqIE(InfoExtractor):
         '1': 'zh_CN',
         '2': 'zh_TW',
         '3': 'en',
-        '4': 'kor',
+        '4': 'ko',
+        '5': 'ja',
         '18': 'th',
         '21': 'my',
         '23': 'vi',
         '24': 'id',
         '26': 'es',
+        '27': 'pt',
         '28': 'ar',
     }

@@ -585,7 +587,7 @@ def _real_extract(self, url):
                 'langCode': self._get_cookie('lang', 'en_us'),
                 'deviceId': self._get_cookie('QC005', '')
             }, fatal=False)
-            ut_list = traverse_obj(vip_data, ('data', 'all_vip', ..., 'vipType'), expected_type=str_or_none, default=[])
+            ut_list = traverse_obj(vip_data, ('data', 'all_vip', ..., 'vipType'), expected_type=str_or_none)
         else:
             ut_list = ['0']

@@ -617,7 +619,7 @@ def _real_extract(self, url):
             self.report_warning('This preview video is limited%s' % format_field(preview_time, None, ' to %s seconds'))

         # TODO: Extract audio-only formats
-        for bid in set(traverse_obj(initial_format_data, ('program', 'video', ..., 'bid'), expected_type=str_or_none, default=[])):
+        for bid in set(traverse_obj(initial_format_data, ('program', 'video', ..., 'bid'), expected_type=str_or_none)):
             dash_path = dash_paths.get(bid)
             if not dash_path:
                 self.report_warning(f'Unknown format id: {bid}. It is currently not being extracted')
@@ -628,7 +630,7 @@ def _real_extract(self, url):
                 fatal=False), 'data', expected_type=dict)

             video_format = traverse_obj(format_data, ('program', 'video', lambda _, v: str(v['bid']) == bid),
-                                         expected_type=dict, default=[], get_all=False) or {}
+                                         expected_type=dict, get_all=False) or {}
             extracted_formats = []
             if video_format.get('m3u8Url'):
                 extracted_formats.extend(self._extract_m3u8_formats(
@@ -669,7 +671,7 @@ def _real_extract(self, url):
                 })
             formats.extend(extracted_formats)

-        for sub_format in traverse_obj(initial_format_data, ('program', 'stl', ...), expected_type=dict, default=[]):
+        for sub_format in traverse_obj(initial_format_data, ('program', 'stl', ...), expected_type=dict):
             lang = self._LID_TAGS.get(str_or_none(sub_format.get('lid')), sub_format.get('_name'))
             subtitles.setdefault(lang, []).extend([{
                 'ext': format_ext,
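Note: the removed default=[] arguments are redundant because a branching path (...) in yt-dlp's traverse_obj already returns a list, empty when nothing matches. A quick sketch, assuming a recent yt-dlp checkout:

from yt_dlp.utils import traverse_obj

data = {'program': {'video': [{'bid': '100'}, {'bid': '200'}]}}
print(traverse_obj(data, ('program', 'video', ..., 'bid')))  # ['100', '200']
print(traverse_obj({}, ('program', 'video', ..., 'bid')))    # []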
yt_dlp/extractor/ivi.py
@@ -2,11 +2,8 @@
 import re

 from .common import InfoExtractor
-from ..utils import (
-    ExtractorError,
-    int_or_none,
-    qualities,
-)
+from ..dependencies import Cryptodome
+from ..utils import ExtractorError, int_or_none, qualities


 class IviIE(InfoExtractor):
@@ -94,18 +91,8 @@ def _real_extract(self, url):
         for site in (353, 183):
             content_data = (data % site).encode()
             if site == 353:
-                try:
-                    from Cryptodome.Cipher import Blowfish
-                    from Cryptodome.Hash import CMAC
-                    pycryptodome_found = True
-                except ImportError:
-                    try:
-                        from Crypto.Cipher import Blowfish
-                        from Crypto.Hash import CMAC
-                        pycryptodome_found = True
-                    except ImportError:
-                        pycryptodome_found = False
-                        continue
+                if not Cryptodome.CMAC:
+                    continue

                 timestamp = (self._download_json(
                     self._LIGHT_URL, video_id,
@@ -118,7 +105,8 @@ def _real_extract(self, url):

                 query = {
                     'ts': timestamp,
-                    'sign': CMAC.new(self._LIGHT_KEY, timestamp.encode() + content_data, Blowfish).hexdigest(),
+                    'sign': Cryptodome.CMAC.new(self._LIGHT_KEY, timestamp.encode() + content_data,
+                                                Cryptodome.Blowfish).hexdigest(),
                 }
             else:
                 query = {}
@@ -138,7 +126,7 @@ def _real_extract(self, url):
                 extractor_msg = 'Video %s does not exist'
             elif site == 353:
                 continue
-            elif not pycryptodome_found:
+            elif not Cryptodome.CMAC:
                 raise ExtractorError('pycryptodomex not found. Please install', expected=True)
             elif message:
                 extractor_msg += ': ' + message
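Note: the 'sign' value above is a Blowfish-CMAC over the timestamp plus the request body. The same computation with pycryptodomex directly, using a placeholder key and payload (the real _LIGHT_KEY is defined elsewhere in ivi.py):

from Cryptodome.Cipher import Blowfish
from Cryptodome.Hash import CMAC

key = b'0123456789abcdef'                                  # placeholder, not the real key
payload = b'1680000000' + b'{"method":"da.content.get"}'   # timestamp + request data, example
print(CMAC.new(key, payload, ciphermod=Blowfish).hexdigest())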
yt_dlp/extractor/iwara.py
@@ -1,239 +1,184 @@
-import itertools
-import re
+import functools
 import urllib.parse
+import hashlib

 from .common import InfoExtractor
 from ..utils import (
+    OnDemandPagedList,
     int_or_none,
     mimetype2ext,
-    remove_end,
-    strip_or_none,
-    unified_strdate,
-    url_or_none,
-    urljoin,
+    traverse_obj,
+    unified_timestamp,
 )


-class IwaraBaseIE(InfoExtractor):
-    _BASE_REGEX = r'(?P<base_url>https?://(?:www\.|ecchi\.)?iwara\.tv)'
-
-    def _extract_playlist(self, base_url, webpage):
-        for path in re.findall(r'class="title">\s*<a[^<]+href="([^"]+)', webpage):
-            yield self.url_result(urljoin(base_url, path))
-
-
-class IwaraIE(IwaraBaseIE):
-    _VALID_URL = fr'{IwaraBaseIE._BASE_REGEX}/videos/(?P<id>[a-zA-Z0-9]+)'
+class IwaraIE(InfoExtractor):
+    IE_NAME = 'iwara'
+    _VALID_URL = r'https?://(?:www\.)?iwara\.tv/video/(?P<id>[a-zA-Z0-9]+)'
     _TESTS = [{
-        'url': 'http://iwara.tv/videos/amVwUl1EHpAD9RD',
-        # md5 is unstable
+        # this video cannot be played because of migration
+        'only_matching': True,
+        'url': 'https://www.iwara.tv/video/k2ayoueezfkx6gvq',
         'info_dict': {
-            'id': 'amVwUl1EHpAD9RD',
+            'id': 'k2ayoueezfkx6gvq',
             'ext': 'mp4',
-            'title': '【MMD R-18】ガールフレンド carry_me_off',
             'age_limit': 18,
-            'thumbnail': 'https://i.iwara.tv/sites/default/files/videos/thumbnails/7951/thumbnail-7951_0001.png',
-            'uploader': 'Reimu丨Action',
-            'upload_date': '20150828',
-            'description': 'md5:1d4905ce48c66c9299c617f08e106e0f',
+            'title': 'Defeat of Irybelda - アイリベルダの敗北',
+            'description': 'md5:70278abebe706647a8b4cb04cf23e0d3',
+            'uploader': 'Inwerwm',
+            'uploader_id': 'inwerwm',
+            'tags': 'count:1',
+            'like_count': 6133,
+            'view_count': 1050343,
+            'comment_count': 1,
+            'timestamp': 1677843869,
+            'modified_timestamp': 1679056362,
         },
     }, {
-        'url': 'http://ecchi.iwara.tv/videos/Vb4yf2yZspkzkBO',
-        'md5': '7e5f1f359cd51a027ba4a7b7710a50f0',
+        'url': 'https://iwara.tv/video/1ywe1sbkqwumpdxz5/',
+        'md5': '20691ce1473ec2766c0788e14c60ce66',
         'info_dict': {
-            'id': '0B1LvuHnL-sRFNXB1WHNqbGw4SXc',
-            'ext': 'mp4',
-            'title': '[3D Hentai] Kyonyu × Genkai × Emaki Shinobi Girls.mp4',
-            'age_limit': 18,
-        },
-        'add_ie': ['GoogleDrive'],
-    }, {
-        'url': 'http://www.iwara.tv/videos/nawkaumd6ilezzgq',
-        # md5 is unstable
-        'info_dict': {
-            'id': '6liAP9s2Ojc',
+            'id': '1ywe1sbkqwumpdxz5',
             'ext': 'mp4',
             'age_limit': 18,
-            'title': '[MMD] Do It Again Ver.2 [1080p 60FPS] (Motion,Camera,Wav+DL)',
-            'description': 'md5:590c12c0df1443d833fbebe05da8c47a',
-            'upload_date': '20160910',
-            'uploader': 'aMMDsork',
-            'uploader_id': 'UCVOFyOSCyFkXTYYHITtqB7A',
+            'title': 'Aponia 阿波尼亚SEX Party Tonight 手动脱衣 大奶 裸腿',
+            'description': 'md5:0c4c310f2e0592d68b9f771d348329ca',
+            'uploader': '龙也zZZ',
+            'uploader_id': 'user792540',
+            'tags': [
+                'uncategorized'
+            ],
+            'like_count': 1809,
+            'view_count': 25156,
+            'comment_count': 1,
+            'timestamp': 1678732213,
+            'modified_timestamp': 1679110271,
         },
-        'add_ie': ['Youtube'],
     }]

+    def _extract_formats(self, video_id, fileurl):
+        up = urllib.parse.urlparse(fileurl)
+        q = urllib.parse.parse_qs(up.query)
+        paths = up.path.rstrip('/').split('/')
+        # https://github.com/yt-dlp/yt-dlp/issues/6549#issuecomment-1473771047
+        x_version = hashlib.sha1('_'.join((paths[-1], q['expires'][0], '5nFp9kmbNnHdAFhaqMvt')).encode()).hexdigest()
+
+        files = self._download_json(fileurl, video_id, headers={'X-Version': x_version})
+        for fmt in files:
+            yield traverse_obj(fmt, {
+                'format_id': 'name',
+                'url': ('src', ('view', 'download'), {self._proto_relative_url}),
+                'ext': ('type', {mimetype2ext}),
+                'quality': ('name', {lambda x: int_or_none(x) or 1e4}),
+                'height': ('name', {int_or_none}),
+            }, get_all=False)
+
     def _real_extract(self, url):
         video_id = self._match_id(url)
-        webpage, urlh = self._download_webpage_handle(url, video_id)
-
-        hostname = urllib.parse.urlparse(urlh.geturl()).hostname
-        # ecchi is 'sexy' in Japanese
-        age_limit = 18 if hostname.split('.')[0] == 'ecchi' else 0
-
-        video_data = self._download_json('http://www.iwara.tv/api/video/%s' % video_id, video_id)
-
-        if not video_data:
-            iframe_url = self._html_search_regex(
-                r'<iframe[^>]+src=([\'"])(?P<url>[^\'"]+)\1',
-                webpage, 'iframe URL', group='url')
-            return {
-                '_type': 'url_transparent',
-                'url': iframe_url,
-                'age_limit': age_limit,
-            }
-
-        title = remove_end(self._html_extract_title(webpage), ' | Iwara')
-
-        thumbnail = self._html_search_regex(
-            r'poster=[\'"]([^\'"]+)', webpage, 'thumbnail', default=None)
-
-        uploader = self._html_search_regex(
-            r'class="username">([^<]+)', webpage, 'uploader', fatal=False)
-
-        upload_date = unified_strdate(self._html_search_regex(
-            r'作成日:([^\s]+)', webpage, 'upload_date', fatal=False))
-
-        description = strip_or_none(self._search_regex(
-            r'<p>(.+?(?=</div))', webpage, 'description', fatal=False,
-            flags=re.DOTALL))
-
-        formats = []
-        for a_format in video_data:
-            format_uri = url_or_none(a_format.get('uri'))
-            if not format_uri:
-                continue
-            format_id = a_format.get('resolution')
-            height = int_or_none(self._search_regex(
-                r'(\d+)p', format_id, 'height', default=None))
-            formats.append({
-                'url': self._proto_relative_url(format_uri, 'https:'),
-                'format_id': format_id,
-                'ext': mimetype2ext(a_format.get('mime')) or 'mp4',
-                'height': height,
-                'width': int_or_none(height / 9.0 * 16.0 if height else None),
-                'quality': 1 if format_id == 'Source' else 0,
-            })
+        video_data = self._download_json(f'http://api.iwara.tv/video/{video_id}', video_id)

         return {
             'id': video_id,
-            'title': title,
-            'age_limit': age_limit,
-            'formats': formats,
-            'thumbnail': self._proto_relative_url(thumbnail, 'https:'),
-            'uploader': uploader,
-            'upload_date': upload_date,
-            'description': description,
+            'age_limit': 18 if video_data.get('rating') == 'ecchi' else 0,  # ecchi is 'sexy' in Japanese
+            **traverse_obj(video_data, {
+                'title': 'title',
+                'description': 'body',
+                'uploader': ('user', 'name'),
+                'uploader_id': ('user', 'username'),
+                'tags': ('tags', ..., 'id'),
+                'like_count': 'numLikes',
+                'view_count': 'numViews',
+                'comment_count': 'numComments',
+                'timestamp': ('createdAt', {unified_timestamp}),
+                'modified_timestamp': ('updatedAt', {unified_timestamp}),
+                'thumbnail': ('file', 'id', {str}, {
+                    lambda x: f'https://files.iwara.tv/image/thumbnail/{x}/thumbnail-00.jpg'}),
+            }),
+            'formats': list(self._extract_formats(video_id, video_data.get('fileUrl'))),
         }


-class IwaraPlaylistIE(IwaraBaseIE):
-    _VALID_URL = fr'{IwaraBaseIE._BASE_REGEX}/playlist/(?P<id>[^/?#&]+)'
-    IE_NAME = 'iwara:playlist'
-
-    _TESTS = [{
-        'url': 'https://ecchi.iwara.tv/playlist/best-enf',
-        'info_dict': {
-            'title': 'Best enf',
-            'uploader': 'Jared98112',
-            'id': 'best-enf',
-        },
-        'playlist_mincount': 1097,
-    }, {
-        # urlencoded
-        'url': 'https://ecchi.iwara.tv/playlist/%E3%83%97%E3%83%AC%E3%82%A4%E3%83%AA%E3%82%B9%E3%83%88-2',
-        'info_dict': {
-            'id': 'プレイリスト-2',
-            'title': 'プレイリスト',
-            'uploader': 'mainyu',
-        },
-        'playlist_mincount': 91,
-    }]
-
-    def _real_extract(self, url):
-        playlist_id, base_url = self._match_valid_url(url).group('id', 'base_url')
-        playlist_id = urllib.parse.unquote(playlist_id)
-        webpage = self._download_webpage(url, playlist_id)
-
-        return {
-            '_type': 'playlist',
-            'id': playlist_id,
-            'title': self._html_search_regex(r'class="title"[^>]*>([^<]+)', webpage, 'title', fatal=False),
-            'uploader': self._html_search_regex(r'<h2>([^<]+)', webpage, 'uploader', fatal=False),
-            'entries': self._extract_playlist(base_url, webpage),
-        }
-
-
-class IwaraUserIE(IwaraBaseIE):
-    _VALID_URL = fr'{IwaraBaseIE._BASE_REGEX}/users/(?P<id>[^/?#&]+)'
+class IwaraUserIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?iwara\.tv/profile/(?P<id>[^/?#&]+)'
     IE_NAME = 'iwara:user'
+    _PER_PAGE = 32

     _TESTS = [{
-        'note': 'number of all videos page is just 1 page. less than 40 videos',
-        'url': 'https://ecchi.iwara.tv/users/infinityyukarip',
+        'url': 'https://iwara.tv/profile/user792540/videos',
         'info_dict': {
-            'title': 'Uploaded videos from Infinity_YukariP',
-            'id': 'infinityyukarip',
-            'uploader': 'Infinity_YukariP',
-            'uploader_id': 'infinityyukarip',
+            'id': 'user792540',
         },
-        'playlist_mincount': 39,
+        'playlist_mincount': 80,
     }, {
-        'note': 'no even all videos page. probably less than 10 videos',
-        'url': 'https://ecchi.iwara.tv/users/mmd-quintet',
+        'url': 'https://iwara.tv/profile/theblackbirdcalls/videos',
         'info_dict': {
-            'title': 'Uploaded videos from mmd quintet',
-            'id': 'mmd-quintet',
-            'uploader': 'mmd quintet',
-            'uploader_id': 'mmd-quintet',
-        },
-        'playlist_mincount': 6,
-    }, {
-        'note': 'has paging. more than 40 videos',
-        'url': 'https://ecchi.iwara.tv/users/theblackbirdcalls',
-        'info_dict': {
             'id': 'theblackbirdcalls',
-            'uploader': 'TheBlackbirdCalls',
-            'uploader_id': 'theblackbirdcalls',
         },
-        'playlist_mincount': 420,
+        'playlist_mincount': 723,
     }, {
-        'note': 'foreign chars in URL. there must be foreign characters in URL',
-        'url': 'https://ecchi.iwara.tv/users/ぶた丼',
-        'info_dict': {
-            'title': 'Uploaded videos from ぶた丼',
-            'id': 'ぶた丼',
-            'uploader': 'ぶた丼',
-            'uploader_id': 'ぶた丼',
-        },
-        'playlist_mincount': 170,
+        'url': 'https://iwara.tv/profile/user792540',
+        'only_matching': True,
+    }, {
+        'url': 'https://iwara.tv/profile/theblackbirdcalls',
+        'only_matching': True,
     }]

-    def _entries(self, playlist_id, base_url):
-        webpage = self._download_webpage(
-            f'{base_url}/users/{playlist_id}', playlist_id)
-        videos_url = self._search_regex(r'<a href="(/users/[^/]+/videos)(?:\?[^"]+)?">', webpage, 'all videos url', default=None)
-        if not videos_url:
-            yield from self._extract_playlist(base_url, webpage)
-            return
-
-        videos_url = urljoin(base_url, videos_url)
-
-        for n in itertools.count(1):
-            page = self._download_webpage(
-                videos_url, playlist_id, note=f'Downloading playlist page {n}',
-                query={'page': str(n - 1)} if n > 1 else {})
-            yield from self._extract_playlist(
-                base_url, page)
-
-            if f'page={n}' not in page:
-                break
+    def _entries(self, playlist_id, user_id, page):
+        videos = self._download_json(
+            'https://api.iwara.tv/videos', playlist_id,
+            note=f'Downloading page {page}',
+            query={
+                'page': page,
+                'sort': 'date',
+                'user': user_id,
+                'limit': self._PER_PAGE,
+            })
+        for x in traverse_obj(videos, ('results', ..., 'id')):
+            yield self.url_result(f'https://iwara.tv/video/{x}')

     def _real_extract(self, url):
-        playlist_id, base_url = self._match_valid_url(url).group('id', 'base_url')
-        playlist_id = urllib.parse.unquote(playlist_id)
+        playlist_id = self._match_id(url)
+        user_info = self._download_json(
+            f'https://api.iwara.tv/profile/{playlist_id}', playlist_id,
+            note='Requesting user info')
+        user_id = traverse_obj(user_info, ('user', 'id'))

         return self.playlist_result(
-            self._entries(playlist_id, base_url), playlist_id)
+            OnDemandPagedList(
+                functools.partial(self._entries, playlist_id, user_id),
+                self._PER_PAGE),
+            playlist_id, traverse_obj(user_info, ('user', 'name')))
+
+
+class IwaraPlaylistIE(InfoExtractor):
+    # the ID is an UUID but I don't think it's necessary to write concrete regex
+    _VALID_URL = r'https?://(?:www\.)?iwara\.tv/playlist/(?P<id>[0-9a-f-]+)'
+    IE_NAME = 'iwara:playlist'
+    _PER_PAGE = 32
+
+    _TESTS = [{
+        'url': 'https://iwara.tv/playlist/458e5486-36a4-4ac0-b233-7e9eef01025f',
+        'info_dict': {
+            'id': '458e5486-36a4-4ac0-b233-7e9eef01025f',
+        },
+        'playlist_mincount': 3,
+    }]
+
+    def _entries(self, playlist_id, first_page, page):
+        videos = self._download_json(
+            'https://api.iwara.tv/videos', playlist_id, f'Downloading page {page}',
+            query={'page': page, 'limit': self._PER_PAGE}) if page else first_page
+        for x in traverse_obj(videos, ('results', ..., 'id')):
+            yield self.url_result(f'https://iwara.tv/video/{x}')
+
+    def _real_extract(self, url):
+        playlist_id = self._match_id(url)
+        page_0 = self._download_json(
+            f'https://api.iwara.tv/playlist/{playlist_id}?page=0&limit={self._PER_PAGE}', playlist_id,
+            note='Requesting playlist info')
+
+        return self.playlist_result(
+            OnDemandPagedList(
+                functools.partial(self._entries, playlist_id, page_0),
+                self._PER_PAGE),
+            playlist_id, traverse_obj(page_0, ('title', 'name')))
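Note: the X-Version header in _extract_formats above is a SHA-1 over the last path segment of the file URL, its expires parameter, and a salt copied from the linked issue comment. The derivation in isolation (example URL; the salt may rotate server-side):

import hashlib
import urllib.parse

fileurl = 'https://files.iwara.tv/file/abcd1234?expires=1680000000'  # example
up = urllib.parse.urlparse(fileurl)
file_id = up.path.rstrip('/').split('/')[-1]
expires = urllib.parse.parse_qs(up.query)['expires'][0]
print(hashlib.sha1(
    '_'.join((file_id, expires, '5nFp9kmbNnHdAFhaqMvt')).encode()).hexdigest())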
yt_dlp/extractor/jwplatform.py
@@ -8,14 +8,16 @@ class JWPlatformIE(InfoExtractor):
     _VALID_URL = r'(?:https?://(?:content\.jwplatform|cdn\.jwplayer)\.com/(?:(?:feed|player|thumb|preview|manifest)s|jw6|v2/media)/|jwplatform:)(?P<id>[a-zA-Z0-9]{8})'
     _TESTS = [{
         'url': 'http://content.jwplatform.com/players/nPripu9l-ALJ3XQCI.js',
-        'md5': 'fa8899fa601eb7c83a64e9d568bdf325',
+        'md5': '3aa16e4f6860e6e78b7df5829519aed3',
         'info_dict': {
             'id': 'nPripu9l',
-            'ext': 'mov',
+            'ext': 'mp4',
             'title': 'Big Buck Bunny Trailer',
             'description': 'Big Buck Bunny is a short animated film by the Blender Institute. It is made using free and open source software.',
             'upload_date': '20081127',
             'timestamp': 1227796140,
+            'duration': 32.0,
+            'thumbnail': 'https://cdn.jwplayer.com/v2/media/nPripu9l/poster.jpg?width=720',
         }
     }, {
         'url': 'https://cdn.jwplayer.com/players/nPripu9l-ALJ3XQCI.js',
@@ -37,18 +39,31 @@ class JWPlatformIE(InfoExtractor):
         },
     }, {
         # Player url not surrounded by quotes
-        'url': 'https://www.deutsche-kinemathek.de/en/online/streaming/darling-berlin',
+        'url': 'https://www.deutsche-kinemathek.de/en/online/streaming/school-trip',
         'info_dict': {
-            'id': 'R10NQdhY',
-            'title': 'Playgirl',
+            'id': 'jUxh5uin',
+            'title': 'Klassenfahrt',
             'ext': 'mp4',
-            'upload_date': '20220624',
-            'thumbnail': 'https://cdn.jwplayer.com/v2/media/R10NQdhY/poster.jpg?width=720',
-            'timestamp': 1656064800,
-            'description': 'BRD 1966, Will Tremper',
-            'duration': 5146.0,
+            'upload_date': '20230109',
+            'thumbnail': 'https://cdn.jwplayer.com/v2/media/jUxh5uin/poster.jpg?width=720',
+            'timestamp': 1673270298,
+            'description': '',
+            'duration': 5193.0,
         },
         'params': {'allowed_extractors': ['generic', 'jwplatform']},
+    }, {
+        # iframe src attribute includes backslash before URL string
+        'url': 'https://www.elespectador.com/colombia/video-asi-se-evito-la-fuga-de-john-poulos-presunto-feminicida-de-valentina-trespalacios-explicacion',
+        'info_dict': {
+            'id': 'QD3gsexj',
+            'title': 'Así se evitó la fuga de John Poulos, presunto feminicida de Valentina Trespalacios',
+            'ext': 'mp4',
+            'upload_date': '20230127',
+            'thumbnail': 'https://cdn.jwplayer.com/v2/media/QD3gsexj/poster.jpg?width=720',
+            'timestamp': 1674862986,
+            'description': 'md5:128fd74591c4e1fc2da598c5cb6f5ce4',
+            'duration': 263.0,
+        },
     }]

     @classmethod
@@ -57,7 +72,7 @@ def _extract_embed_urls(cls, url, webpage):
         # <input value=URL> is used by hyland.com
         # if we find <iframe>, dont look for <input>
         ret = re.findall(
-            r'<%s[^>]+?%s=["\']?((?:https?:)?//(?:content\.jwplatform|cdn\.jwplayer)\.com/players/[a-zA-Z0-9]{8})' % (tag, key),
+            r'<%s[^>]+?%s=\\?["\']?((?:https?:)?//(?:content\.jwplatform|cdn\.jwplayer)\.com/players/[a-zA-Z0-9]{8})' % (tag, key),
             webpage)
         if ret:
             return ret
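Note: the added \\? in the embed regex tolerates a backslash-escaped quote around the iframe src, as seen on the elespectador.com page in the new test. A reduced demonstration with the tag/key placeholders instantiated (the markup is an example):

import re

webpage = r'<iframe src=\"https://cdn.jwplayer.com/players/QD3gsexj-abcdefgh\">'
pattern = r'<iframe[^>]+?src=\\?["\']?((?:https?:)?//(?:content\.jwplatform|cdn\.jwplayer)\.com/players/[a-zA-Z0-9]{8})'
print(re.findall(pattern, webpage))  # ['https://cdn.jwplayer.com/players/QD3gsexj']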
yt_dlp/extractor/kommunetv.py (new file, 31 lines)
@@ -0,0 +1,31 @@
+from .common import InfoExtractor
+from ..utils import update_url
+
+
+class KommunetvIE(InfoExtractor):
+    _VALID_URL = r'https://(\w+).kommunetv.no/archive/(?P<id>\w+)'
+    _TEST = {
+        'url': 'https://oslo.kommunetv.no/archive/921',
+        'md5': '5f102be308ee759be1e12b63d5da4bbc',
+        'info_dict': {
+            'id': '921',
+            'title': 'Bystyremøte',
+            'ext': 'mp4'
+        }
+    }
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        headers = {
+            'Accept': 'application/json'
+        }
+        data = self._download_json('https://oslo.kommunetv.no/api/streams?streamType=1&id=%s' % video_id, video_id, headers=headers)
+        title = data['stream']['title']
+        file = data['playlist'][0]['playlist'][0]['file']
+        url = update_url(file, query=None, fragment=None)
+        formats = self._extract_m3u8_formats(url, video_id, ext='mp4', entry_protocol='m3u8_native', m3u8_id='hls', fatal=False)
+        return {
+            'id': video_id,
+            'formats': formats,
+            'title': title
+        }
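Note: update_url(file, query=None, fragment=None) above strips the query string and fragment from the playlist URL before handing it to the m3u8 extractor. The equivalent with stdlib parts, on an example URL not taken from the API:

from urllib.parse import urlsplit, urlunsplit

file = 'https://cdn.example.no/921/index.m3u8?token=abc#t=30'
print(urlunsplit(urlsplit(file)._replace(query='', fragment='')))
# -> https://cdn.example.no/921/index.m3u8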
yt_dlp/extractor/lastfm.py
@@ -1,33 +1,24 @@
+import itertools
 import re

 from .common import InfoExtractor
-from ..utils import int_or_none, format_field
+from ..utils import int_or_none, parse_qs, traverse_obj


 class LastFMPlaylistBaseIE(InfoExtractor):
     def _entries(self, url, playlist_id):
-        webpage = self._download_webpage(url, playlist_id)
-        start_page_number = int_or_none(self._search_regex(
-            r'\bpage=(\d+)', url, 'page', default=None)) or 1
-        last_page_number = int_or_none(self._search_regex(
-            r'>(\d+)</a>[^<]*</li>[^<]*<li[^>]+class="pagination-next', webpage, 'last_page', default=None))
-
-        for page_number in range(start_page_number, (last_page_number or start_page_number) + 1):
+        single_page = traverse_obj(parse_qs(url), ('page', -1, {int_or_none}))
+        for page in itertools.count(single_page or 1):
             webpage = self._download_webpage(
-                url, playlist_id,
-                note='Downloading page %d%s' % (page_number, format_field(last_page_number, None, ' of %d')),
-                query={'page': page_number})
-            page_entries = [
-                self.url_result(player_url, 'Youtube')
-                for player_url in set(re.findall(r'data-youtube-url="([^"]+)"', webpage))
-            ]
-
-            for e in page_entries:
-                yield e
+                url, playlist_id, f'Downloading page {page}', query={'page': page})
+            videos = re.findall(r'data-youtube-url="([^"]+)"', webpage)
+            yield from videos
+            if single_page or not videos:
+                return

     def _real_extract(self, url):
         playlist_id = self._match_id(url)
-        return self.playlist_result(self._entries(url, playlist_id), playlist_id)
+        return self.playlist_from_matches(self._entries(url, playlist_id), playlist_id, ie='Youtube')


 class LastFMPlaylistIE(LastFMPlaylistBaseIE):
@@ -37,7 +28,7 @@ class LastFMPlaylistIE(LastFMPlaylistBaseIE):
         'info_dict': {
             'id': 'Oasis',
         },
-        'playlist_count': 11,
+        'playlist_mincount': 11,
     }, {
         'url': 'https://www.last.fm/music/Oasis',
         'only_matching': True,
@@ -73,6 +64,18 @@ class LastFMUserIE(LastFMPlaylistBaseIE):
             'id': '12319471',
         },
         'playlist_count': 30,
+    }, {
+        'url': 'https://www.last.fm/user/naamloos1/playlists/12543760',
+        'info_dict': {
+            'id': '12543760',
+        },
+        'playlist_mincount': 80,
+    }, {
+        'url': 'https://www.last.fm/user/naamloos1/playlists/12543760?page=3',
+        'info_dict': {
+            'id': '12543760',
+        },
+        'playlist_count': 32,
     }]
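Note: the rewritten _entries above replaces the "scrape the last page number" approach with open-ended pagination: start at the page pinned by ?page=N if present, otherwise walk pages until one comes back empty. The skeleton of that pattern, with fetch standing in for one webpage download plus re.findall:

import itertools

def paged(fetch, single_page=None):
    for page in itertools.count(single_page or 1):
        items = fetch(page)
        yield from items
        if single_page or not items:
            return

print(list(paged(lambda page: [page] if page < 4 else [])))  # [1, 2, 3]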
yt_dlp/extractor/lefigaro.py (new file, 135 lines)
@@ -0,0 +1,135 @@
+import json
+import math
+
+from .common import InfoExtractor
+from ..utils import (
+    InAdvancePagedList,
+    traverse_obj,
+)
+
+
+class LeFigaroVideoEmbedIE(InfoExtractor):
+    _VALID_URL = r'https?://video\.lefigaro\.fr/embed/[^?#]+/(?P<id>[\w-]+)'
+
+    _TESTS = [{
+        'url': 'https://video.lefigaro.fr/embed/figaro/video/les-francais-ne-veulent-ils-plus-travailler-suivez-en-direct-le-club-le-figaro-idees/',
+        'md5': 'e94de44cd80818084352fcf8de1ce82c',
+        'info_dict': {
+            'id': 'g9j7Eovo',
+            'title': 'Les Français ne veulent-ils plus travailler ? Retrouvez Le Club Le Figaro Idées',
+            'description': 'md5:862b8813148ba4bf10763a65a69dfe41',
+            'upload_date': '20230216',
+            'timestamp': 1676581615,
+            'duration': 3076,
+            'thumbnail': r're:^https?://[^?#]+\.(?:jpeg|jpg)',
+            'ext': 'mp4',
+        },
+    }, {
+        'url': 'https://video.lefigaro.fr/embed/figaro/video/intelligence-artificielle-faut-il-sen-mefier/',
+        'md5': '0b3f10332b812034b3a3eda1ef877c5f',
+        'info_dict': {
+            'id': 'LeAgybyc',
+            'title': 'Intelligence artificielle : faut-il s’en méfier ?',
+            'description': 'md5:249d136e3e5934a67c8cb704f8abf4d2',
+            'upload_date': '20230124',
+            'timestamp': 1674584477,
+            'duration': 860,
+            'thumbnail': r're:^https?://[^?#]+\.(?:jpeg|jpg)',
+            'ext': 'mp4',
+        },
+    }]
+
+    _WEBPAGE_TESTS = [{
+        'url': 'https://video.lefigaro.fr/figaro/video/suivez-en-direct-le-club-le-figaro-international-avec-philippe-gelie-9/',
+        'md5': '3972ddf2d5f8b98699f191687258e2f9',
+        'info_dict': {
+            'id': 'QChnbPYA',
+            'title': 'Où en est le couple franco-allemand ? Retrouvez Le Club Le Figaro International',
+            'description': 'md5:6f47235b7e7c93b366fd8ebfa10572ac',
+            'upload_date': '20230123',
+            'timestamp': 1674503575,
+            'duration': 3153,
+            'thumbnail': r're:^https?://[^?#]+\.(?:jpeg|jpg)',
+            'age_limit': 0,
+            'ext': 'mp4',
+        },
+    }, {
+        'url': 'https://video.lefigaro.fr/figaro/video/la-philosophe-nathalie-sarthou-lajus-est-linvitee-du-figaro-live/',
+        'md5': '3ac0a0769546ee6be41ab52caea5d9a9',
+        'info_dict': {
+            'id': 'QJzqoNbf',
+            'title': 'La philosophe Nathalie Sarthou-Lajus est l’invitée du Figaro Live',
+            'description': 'md5:c586793bb72e726c83aa257f99a8c8c4',
+            'upload_date': '20230217',
+            'timestamp': 1676661986,
+            'duration': 1558,
+            'thumbnail': r're:^https?://[^?#]+\.(?:jpeg|jpg)',
+            'age_limit': 0,
+            'ext': 'mp4',
+        },
+    }]
+
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+        webpage = self._download_webpage(url, display_id)
+
+        player_data = self._search_nextjs_data(webpage, display_id)['props']['pageProps']['pageData']['playerData']
+
+        return self.url_result(
+            f'jwplatform:{player_data["videoId"]}', title=player_data.get('title'),
+            description=player_data.get('description'), thumbnail=player_data.get('poster'))
+
+
+class LeFigaroVideoSectionIE(InfoExtractor):
+    _VALID_URL = r'https?://video\.lefigaro\.fr/figaro/(?P<id>[\w-]+)/?(?:[#?]|$)'
+
+    _TESTS = [{
+        'url': 'https://video.lefigaro.fr/figaro/le-club-le-figaro-idees/',
+        'info_dict': {
+            'id': 'le-club-le-figaro-idees',
+            'title': 'Le Club Le Figaro Idées',
+        },
+        'playlist_mincount': 14,
+    }, {
+        'url': 'https://video.lefigaro.fr/figaro/factu/',
+        'info_dict': {
+            'id': 'factu',
+            'title': 'Factu',
+        },
+        'playlist_mincount': 519,
+    }]
+
+    _PAGE_SIZE = 20
+
+    def _get_api_response(self, display_id, page_num, note=None):
+        return self._download_json(
+            'https://api-graphql.lefigaro.fr/graphql', display_id, note=note,
+            query={
+                'id': 'flive-website_UpdateListPage_1fb260f996bca2d78960805ac382544186b3225f5bedb43ad08b9b8abef79af6',
+                'variables': json.dumps({
+                    'slug': display_id,
+                    'videosLimit': self._PAGE_SIZE,
+                    'sort': 'DESC',
+                    'order': 'PUBLISHED_AT',
+                    'page': page_num,
+                }).encode(),
+            })
+
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+        initial_response = self._get_api_response(display_id, page_num=1)['data']['playlist']
+
+        def page_func(page_num):
+            api_response = self._get_api_response(display_id, page_num + 1, note=f'Downloading page {page_num + 1}')
+
+            return [self.url_result(
+                video['embedUrl'], LeFigaroVideoEmbedIE, **traverse_obj(video, {
+                    'title': 'name',
+                    'description': 'description',
+                    'thumbnail': 'thumbnailUrl',
+                })) for video in api_response['data']['playlist']['jsonLd'][0]['itemListElement']]
+
+        entries = InAdvancePagedList(
+            page_func, math.ceil(initial_response['videoCount'] / self._PAGE_SIZE), self._PAGE_SIZE)
+
+        return self.playlist_result(entries, playlist_id=display_id, playlist_title=initial_response.get('title'))
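Note: unlike the open-ended pagination in lastfm.py, InAdvancePagedList needs the page count up front, which the first API response supplies via videoCount. The arithmetic, using the videoCount from the 'factu' test above:

import math

print(math.ceil(519 / 20))  # 26 pages of at most 20 videos each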
yt_dlp/extractor/lumni.py (new file, 24 lines)
@@ -0,0 +1,24 @@
+from .common import InfoExtractor
+from .francetv import FranceTVIE
+
+
+class LumniIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?lumni\.fr/video/(?P<id>[\w-]+)'
+    _TESTS = [{
+        'url': 'https://www.lumni.fr/video/l-homme-et-son-environnement-dans-la-revolution-industrielle',
+        'md5': '960e8240c4f2c7a20854503a71e52f5e',
+        'info_dict': {
+            'id': 'd2b9a4e5-a526-495b-866c-ab72737e3645',
+            'ext': 'mp4',
+            'title': "L'homme et son environnement dans la révolution industrielle - L'ère de l'homme",
+            'thumbnail': 'https://assets.webservices.francetelevisions.fr/v1/assets/images/a7/17/9f/a7179f5f-63a5-4e11-8d4d-012ab942d905.jpg',
+            'duration': 230,
+        }
+    }]
+
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+        webpage = self._download_webpage(url, display_id)
+        video_id = self._html_search_regex(
+            r'<div[^>]+data-factoryid\s*=\s*["\']([^"\']+)', webpage, 'video id')
+        return self.url_result(f'francetv:{video_id}', FranceTVIE, video_id)
yt_dlp/extractor/medaltv.py
@@ -8,12 +8,12 @@
     float_or_none,
     int_or_none,
     str_or_none,
-    traverse_obj,
+    traverse_obj
 )


 class MedalTVIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?medal\.tv/(?P<path>games/[^/?#&]+/clips)/(?P<id>[^/?#&]+)'
+    _VALID_URL = r'https?://(?:www\.)?medal\.tv/games/[^/?#&]+/clips/(?P<id>[^/?#&]+)'
     _TESTS = [{
         'url': 'https://medal.tv/games/valorant/clips/jTBFnLKdLy15K',
         'md5': '6930f8972914b6b9fdc2bb3918098ba0',
@@ -80,25 +80,14 @@ class MedalTVIE(InfoExtractor):

     def _real_extract(self, url):
         video_id = self._match_id(url)
-        path = self._match_valid_url(url).group('path')

         webpage = self._download_webpage(url, video_id)

-        next_data = self._search_json(
-            '<script[^>]*__NEXT_DATA__[^>]*>', webpage,
+        hydration_data = self._search_json(
+            r'<script[^>]*>[^<]*\bhydrationData\s*=', webpage,
             'next data', video_id, end_pattern='</script>', fatal=False)

-        build_id = next_data.get('buildId')
-        if not build_id:
-            raise ExtractorError(
-                'Could not find build ID.', video_id=video_id)
-
-        locale = next_data.get('locale', 'en')
-
-        api_response = self._download_json(
-            f'https://medal.tv/_next/data/{build_id}/{locale}/{path}/{video_id}.json', video_id)
-
-        clip = traverse_obj(api_response, ('pageProps', 'clip')) or {}
+        clip = traverse_obj(hydration_data, ('clips', ...), get_all=False)
         if not clip:
             raise ExtractorError(
                 'Could not find video information.', video_id=video_id)
@@ -152,7 +141,7 @@ def add_item(container, item_url, height, id_key='format_id', item_id=None):

         # Necessary because the id of the author is not known in advance.
         # Won't raise an issue if no profile can be found as this is optional.
-        author = traverse_obj(api_response, ('pageProps', 'profile')) or {}
+        author = traverse_obj(hydration_data, ('profiles', ...), get_all=False) or {}
         author_id = str_or_none(author.get('userId'))
         author_url = format_field(author_id, None, 'https://medal.tv/users/%s')
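Note: the extractor now reads the clip straight from the hydrationData object embedded in the page instead of a second _next/data API request. A reduced stand-in for the _search_json call above, on example markup with a hypothetical clip id:

import json
import re

webpage = '<script>self.hydrationData = {"clips": {"c1": {"contentId": "jTBFnLKdLy15K"}}};</script>'
m = re.search(r'\bhydrationData\s*=\s*(\{.*\})\s*;', webpage)
clip = next(iter(json.loads(m.group(1))['clips'].values()))
print(clip['contentId'])  # jTBFnLKdLy15K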
@ -1,11 +1,45 @@
 import re
 
 from .common import InfoExtractor
-from ..utils import clean_html, get_element_html_by_class
+from ..utils import (
+    clean_html,
+    remove_end,
+    traverse_obj,
+    urljoin,
+)
 
 
-class MediaStreamIE(InfoExtractor):
-    _VALID_URL = r'https?://mdstrm.com/(?:embed|live-stream)/(?P<id>\w+)'
+class MediaStreamBaseIE(InfoExtractor):
+    _EMBED_BASE_URL = 'https://mdstrm.com/embed'
+    _BASE_URL_RE = r'https?://mdstrm\.com/(?:embed|live-stream)'
+
+    def _extract_mediastream_urls(self, webpage):
+        yield from traverse_obj(list(self._yield_json_ld(webpage, None)), (
+            lambda _, v: v['@type'] == 'VideoObject', ('embedUrl', 'contentUrl'),
+            {lambda x: x if re.match(rf'{self._BASE_URL_RE}/\w+', x) else None}))
+
+        for mobj in re.finditer(r'<script[^>]+>[^>]*playerMdStream\.mdstreamVideo\(\s*[\'"](?P<video_id>\w+)', webpage):
+            yield f'{self._EMBED_BASE_URL}/{mobj.group("video_id")}'
+
+        yield from re.findall(
+            rf'<iframe[^>]+\bsrc="({self._BASE_URL_RE}/\w+)', webpage)
+
+        for mobj in re.finditer(
+                r'''(?x)
+                    <(?:div|ps-mediastream)[^>]+
+                    (class="[^"]*MediaStreamVideoPlayer)[^"]*"[^>]+
+                    data-video-id="(?P<video_id>\w+)"
+                    (?:\s*data-video-type="(?P<video_type>[^"]+))?
+                    (?:[^>]*>\s*<div[^>]+\1[^"]*"[^>]+data-mediastream=["\'][^>]+
+                        https://mdstrm\.com/(?P<live>live-stream))?
+                ''', webpage):
+            video_type = 'live-stream' if mobj.group('video_type') == 'live' or mobj.group('live') else 'embed'
+            yield f'https://mdstrm.com/{video_type}/{mobj.group("video_id")}'
+
+
+class MediaStreamIE(MediaStreamBaseIE):
+    _VALID_URL = MediaStreamBaseIE._BASE_URL_RE + r'/(?P<id>\w+)'
 
     _TESTS = [{
         'url': 'https://mdstrm.com/embed/6318e3f1d1d316083ae48831',
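
The new base class tries four discovery strategies in turn: JSON-LD `VideoObject` URLs, the legacy `playerMdStream.mdstreamVideo(...)` call, plain iframes, and the `MediaStreamVideoPlayer` div / `ps-mediastream` markup. A standalone sketch of the two simplest strategies against hypothetical embed markup (the div pattern below is simplified; the real one also detects live streams):

    import re

    BASE_URL_RE = r'https?://mdstrm\.com/(?:embed|live-stream)'
    webpage = '''
    <iframe width="560" src="https://mdstrm.com/embed/6318e3f1d1d316083ae48831"></iframe>
    <div class="MediaStreamVideoPlayer" data-video-id="63731bab8ec9b308a2c9ed28">
    '''

    # iframe strategy
    print(re.findall(rf'<iframe[^>]+\bsrc="({BASE_URL_RE}/\w+)', webpage))
    # ['https://mdstrm.com/embed/6318e3f1d1d316083ae48831']

    # simplified div strategy
    for mobj in re.finditer(r'MediaStreamVideoPlayer[^>]+data-video-id="(?P<video_id>\w+)"', webpage):
        print(f'https://mdstrm.com/embed/{mobj.group("video_id")}')
    # https://mdstrm.com/embed/63731bab8ec9b308a2c9ed28
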
@@ -17,6 +51,7 @@ class MediaStreamIE(InfoExtractor):
             'thumbnail': r're:^https?://[^?#]+6318e3f1d1d316083ae48831',
             'ext': 'mp4',
         },
+        'params': {'skip_download': 'm3u8'},
     }]
 
     _WEBPAGE_TESTS = [{
@@ -29,9 +64,7 @@ class MediaStreamIE(InfoExtractor):
             'ext': 'mp4',
             'live_status': 'is_live',
         },
-        'params': {
-            'skip_download': 'Livestream'
-        },
+        'params': {'skip_download': 'Livestream'},
     }, {
         'url': 'https://www.multimedios.com/television/clases-de-llaves-y-castigos-quien-sabe-mas',
         'md5': 'de31f0b1ecc321fb35bf22d58734ea40',
@@ -42,6 +75,7 @@ class MediaStreamIE(InfoExtractor):
             'thumbnail': 're:^https?://[^?#]+63731bab8ec9b308a2c9ed28',
             'ext': 'mp4',
         },
+        'params': {'skip_download': 'm3u8'},
     }, {
         'url': 'https://www.americatv.com.pe/videos/esto-es-guerra/facundo-gonzalez-sufrio-fuerte-golpe-durante-competencia-frente-hugo-garcia-eeg-noticia-139120',
         'info_dict': {
@@ -51,6 +85,7 @@ class MediaStreamIE(InfoExtractor):
             'thumbnail': 're:^https?://[^?#]+63756df1c638b008a5659dec',
             'ext': 'mp4',
         },
+        'params': {'skip_download': 'm3u8'},
     }, {
         'url': 'https://www.americatv.com.pe/videos/al-fondo-hay-sitio/nuevas-lomas-town-bernardo-mata-se-enfrento-sujeto-luchar-amor-macarena-noticia-139083',
         'info_dict': {
@@ -60,26 +95,12 @@ class MediaStreamIE(InfoExtractor):
             'thumbnail': 're:^https?://[^?#]+637307669609130f74cd3a6e',
             'ext': 'mp4',
         },
+        'params': {'skip_download': 'm3u8'},
     }]
 
-    @classmethod
-    def _extract_embed_urls(cls, url, webpage):
-        for mobj in re.finditer(r'<script[^>]+>[^>]*playerMdStream.mdstreamVideo\(\s*[\'"](?P<video_id>\w+)', webpage):
-            yield f'https://mdstrm.com/embed/{mobj.group("video_id")}'
-
-        yield from re.findall(
-            r'<iframe[^>]src\s*=\s*"(https://mdstrm.com/[\w-]+/\w+)', webpage)
-
-        for mobj in re.finditer(
-                r'''(?x)
-                    <(?:div|ps-mediastream)[^>]+
-                    class\s*=\s*"[^"]*MediaStreamVideoPlayer[^"]*"[^>]+
-                    data-video-id\s*=\s*"(?P<video_id>\w+)\s*"
-                    (?:\s*data-video-type\s*=\s*"(?P<video_type>[^"]+))?
-                ''', webpage):
-            video_type = 'live-stream' if mobj.group('video_type') == 'live' else 'embed'
-            yield f'https://mdstrm.com/{video_type}/{mobj.group("video_id")}'
+    def _extract_from_webpage(self, url, webpage):
+        for embed_url in self._extract_mediastream_urls(webpage):
+            yield self.url_result(embed_url, MediaStreamIE, None)
 
     def _real_extract(self, url):
         video_id = self._match_id(url)
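
The `_extract_embed_urls` classmethod (yielding raw URLs) gives way to the `_extract_from_webpage` hook, which yields `url_result(...)` entries built from the shared `_extract_mediastream_urls` helper, so WinSportsVideoIE can reuse the same discovery code. The old iframe pattern it replaces also had a quantifier bug worth noting; a sketch against hypothetical markup:

    import re

    html = '<iframe width="560" src="https://mdstrm.com/embed/6318e3f1d1d316083ae48831">'

    # Old pattern: [^>] without a quantifier matches exactly ONE character
    # between '<iframe' and 'src', so ordinary attribute lists never match
    old = r'<iframe[^>]src\s*=\s*"(https://mdstrm.com/[\w-]+/\w+)'
    new = r'<iframe[^>]+\bsrc="(https?://mdstrm\.com/(?:embed|live-stream)/\w+)'

    print(re.findall(old, html))  # [] -- embed silently missed
    print(re.findall(new, html))  # ['https://mdstrm.com/embed/6318e3f1d1d316083ae48831']
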
@@ -88,7 +109,7 @@ def _real_extract(self, url):
         if 'Debido a tu ubicación no puedes ver el contenido' in webpage:
             self.raise_geo_restricted()
 
-        player_config = self._search_json(r'window.MDSTRM.OPTIONS\s*=', webpage, 'metadata', video_id)
+        player_config = self._search_json(r'window\.MDSTRM\.OPTIONS\s*=', webpage, 'metadata', video_id)
 
         formats, subtitles = [], {}
         for video_format in player_config['src']:
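
The one-character fix above escapes the dots: unescaped, `.` matches any character, so the anchor could fire on lookalike text. A contrived demonstration:

    import re

    lookalike = 'windowXMDSTRM-OPTIONS = {"not": "the real config"}'

    print(bool(re.search(r'window.MDSTRM.OPTIONS\s*=', lookalike)))   # True (false positive)
    print(bool(re.search(r'window\.MDSTRM\.OPTIONS\s*=', lookalike)))  # False
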
@@ -116,40 +137,72 @@ def _real_extract(self, url):
         }
 
 
-class WinSportsVideoIE(InfoExtractor):
-    _VALID_URL = r'https?://www\.winsports\.co/videos/(?P<display_id>[\w-]+)-(?P<id>\d+)'
+class WinSportsVideoIE(MediaStreamBaseIE):
+    _VALID_URL = r'https?://www\.winsports\.co/videos/(?P<id>[\w-]+)'
 
     _TESTS = [{
         'url': 'https://www.winsports.co/videos/siempre-castellanos-gran-atajada-del-portero-cardenal-para-evitar-la-caida-de-su-arco-60536',
         'info_dict': {
             'id': '62dc8357162c4b0821fcfb3c',
-            'display_id': 'siempre-castellanos-gran-atajada-del-portero-cardenal-para-evitar-la-caida-de-su-arco',
+            'display_id': 'siempre-castellanos-gran-atajada-del-portero-cardenal-para-evitar-la-caida-de-su-arco-60536',
             'title': '¡Siempre Castellanos! Gran atajada del portero \'cardenal\' para evitar la caída de su arco',
             'description': 'md5:eb811b2b2882bdc59431732c06b905f2',
             'thumbnail': r're:^https?://[^?#]+62dc8357162c4b0821fcfb3c',
             'ext': 'mp4',
         },
+        'params': {'skip_download': 'm3u8'},
     }, {
         'url': 'https://www.winsports.co/videos/observa-aqui-los-goles-del-empate-entre-tolima-y-nacional-60548',
         'info_dict': {
             'id': '62dcb875ef12a5526790b552',
-            'display_id': 'observa-aqui-los-goles-del-empate-entre-tolima-y-nacional',
+            'display_id': 'observa-aqui-los-goles-del-empate-entre-tolima-y-nacional-60548',
             'title': 'Observa aquí los goles del empate entre Tolima y Nacional',
             'description': 'md5:b19402ba6e46558b93fd24b873eea9c9',
             'thumbnail': r're:^https?://[^?#]+62dcb875ef12a5526790b552',
             'ext': 'mp4',
         },
+        'params': {'skip_download': 'm3u8'},
+    }, {
+        'url': 'https://www.winsports.co/videos/equidad-vuelve-defender-su-arco-de-remates-de-junior',
+        'info_dict': {
+            'id': '63fa7eca72f1741ad3a4d515',
+            'display_id': 'equidad-vuelve-defender-su-arco-de-remates-de-junior',
+            'title': '⚽ Equidad vuelve a defender su arco de remates de Junior',
+            'description': 'Remate de Sierra',
+            'thumbnail': r're:^https?://[^?#]+63fa7eca72f1741ad3a4d515',
+            'ext': 'mp4',
+        },
+        'params': {'skip_download': 'm3u8'},
+    }, {
+        'url': 'https://www.winsports.co/videos/bucaramanga-se-quedo-con-el-grito-de-gol-en-la-garganta',
+        'info_dict': {
+            'id': '6402adb62bbf3b18d454e1b0',
+            'display_id': 'bucaramanga-se-quedo-con-el-grito-de-gol-en-la-garganta',
+            'title': '⚽Bucaramanga se quedó con el grito de gol en la garganta',
+            'description': 'Gol anulado Bucaramanga',
+            'thumbnail': r're:^https?://[^?#]+6402adb62bbf3b18d454e1b0',
+            'ext': 'mp4',
+        },
+        'params': {'skip_download': 'm3u8'},
     }]
 
     def _real_extract(self, url):
-        display_id, video_id = self._match_valid_url(url).group('display_id', 'id')
+        display_id = self._match_id(url)
         webpage = self._download_webpage(url, display_id)
+        data = self._search_json(
+            r'<script\s*[^>]+data-drupal-selector="drupal-settings-json">', webpage, 'data', display_id)
 
-        media_setting_json = self._search_json(
-            r'<script\s*[^>]+data-drupal-selector="drupal-settings-json">', webpage, 'drupal-setting-json', display_id)
-
-        mediastream_id = media_setting_json['settings']['mediastream_formatter'][video_id]['mediastream_id']
+        mediastream_url = urljoin(f'{self._EMBED_BASE_URL}/', (
+            traverse_obj(data, (
+                (('settings', 'mediastream_formatter', ..., 'mediastream_id'), 'url'), {str}), get_all=False)
+            or next(self._extract_mediastream_urls(webpage), None)))
+
+        if not mediastream_url:
+            self.raise_no_formats('No MediaStream embed found in webpage')
+
+        title = clean_html(remove_end(
+            self._search_json_ld(webpage, display_id, expected_type='VideoObject', default={}).get('title')
+            or self._og_search_title(webpage), '| Win Sports'))
 
         return self.url_result(
-            f'https://mdstrm.com/embed/{mediastream_id}', MediaStreamIE, video_id, url_transparent=True,
-            display_id=display_id, video_title=clean_html(get_element_html_by_class('title-news', webpage)))
+            mediastream_url, MediaStreamIE, display_id, url_transparent=True, display_id=display_id, video_title=title)
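
WinSportsVideoIE now resolves the clip in two steps: the Drupal settings JSON (or any embed found by the shared helper) yields either a bare `mediastream_id` or a complete URL, and `urljoin` normalizes both; `remove_end` then strips the site suffix from the JSON-LD/og:title value. A sketch of those two `yt_dlp.utils` helpers as I understand them (the ids below are made up):

    from yt_dlp.utils import remove_end, urljoin

    EMBED_BASE_URL = 'https://mdstrm.com/embed'

    # A bare id is joined onto the embed base...
    print(urljoin(f'{EMBED_BASE_URL}/', '62dc8357162c4b0821fcfb3c'))
    # https://mdstrm.com/embed/62dc8357162c4b0821fcfb3c

    # ...an absolute URL (e.g. a live-stream embed) passes through unchanged...
    print(urljoin(f'{EMBED_BASE_URL}/', 'https://mdstrm.com/live-stream/62dc8357162c4b0821fcfb3c'))

    # ...and None stays None, which raise_no_formats() then reports
    print(urljoin(f'{EMBED_BASE_URL}/', None))

    # clean_html() later strips the whitespace remove_end leaves behind
    print(remove_end('Observa aquí los goles | Win Sports', '| Win Sports'))
    # Observa aquí los goles  (note the trailing space)
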
Some files were not shown because too many files have changed in this diff