Compare commits

...

86 Commits

Author SHA1 Message Date
Simon Sawicki
73bf102116
[test] traversal: Fix morsel tests for Python 3.14 (#13471)
Authored by: Grub4K
2025-06-17 09:45:19 +02:00
doe1080
1722c55400
[ie/hypergryph] Improve metadata extraction (#13415)
Closes #13384
Authored by: doe1080, eason1478

Co-authored-by: eason1478 <134664337+eason1478@users.noreply.github.com>
2025-06-12 23:25:08 +00:00
doe1080
e6bd4a3da2
[ie/brightcove:new] Improve metadata extraction (#13461)
Authored by: doe1080
2025-06-12 23:16:48 +00:00
bashonly
51887484e4
[ie] Add _search_nuxt_json helper (#13386)
* Adds InfoExtractor._search_nuxt_json for webpage extraction
* Adds InfoExtractor._resolve_nuxt_array for direct use with payload JSON
* Adds yt_dlp.utils.jslib module for Python solutions to common JavaScript libraries
* Adds devalue.parse and devalue.parse_iter to jslib utils

Ref:
* 9e503be0f2
* f3fd2aa93d/src/parse.js

Authored by: bashonly, Grub4K

Co-authored-by: Simon Sawicki <contact@grub4k.dev>
2025-06-12 22:15:01 +00:00
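A rough sketch of how an extractor might use the helpers named in the commit above; the extractor class and URL are made up for illustration, and only the call shape suggested by the commit message is assumed:

```python
# Hypothetical usage sketch of InfoExtractor._search_nuxt_json; the real
# helper in yt_dlp may accept additional keyword arguments.
from yt_dlp.extractor.common import InfoExtractor


class NuxtExampleIE(InfoExtractor):  # made-up extractor for illustration
    _VALID_URL = r'https?://(?:www\.)?example\.com/video/(?P<id>\d+)'

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)
        # Locates and parses the devalue-serialized Nuxt payload embedded in
        # the page (the devalue.parse helper from yt_dlp.utils.jslib handles
        # the decoding)
        nuxt_data = self._search_nuxt_json(webpage, video_id)
        return {
            'id': video_id,
            'title': nuxt_data.get('title'),
        }
```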
github-actions[bot]
ba090caeaa Release 2025.06.09
Created by: bashonly

:ci skip all
2025-06-09 23:41:52 +00:00
bashonly
339614a173
[cleanup] Misc (#13278)
Authored by: bashonly
2025-06-09 23:39:00 +00:00
nullpos
aa863ddab9
[ie/cu.ntv.co.jp] Fix extractor (#13302)
Closes #10976
Authored by: nullpos, doe1080

Co-authored-by: doe1080 <98906116+doe1080@users.noreply.github.com>
2025-06-08 00:45:32 +00:00
InvalidUsernameException
db162b76f6
[ie/zdf] Fix language extraction and format sorting (#13313)
Closes #13118
Authored by: InvalidUsernameException
2025-06-08 00:10:01 +00:00
doe1080
e3c605a61f
[ie/sr:mediathek] Improve metadata extraction (#13294)
Authored by: doe1080
2025-06-08 00:06:57 +00:00
doe1080
97ddfefeb4
[ie/nobelprize] Fix extractor (#13205)
Authored by: doe1080
2025-06-08 00:04:32 +00:00
doe1080
a8bf0011bd
[ie/startrek] Fix extractor (#13188)
Authored by: doe1080
2025-06-07 23:16:31 +00:00
c-basalt
13e5516271
[ie/BiliBiliBangumi] Fix extractor (#13416)
Closes #13121
Authored by: c-basalt
2025-06-07 23:14:57 +00:00
bashonly
03dba2012d
[ie/telecinco] Fix extractor (#13379)
Closes #13378
Authored by: bashonly
2025-06-06 22:02:26 +00:00
bashonly
5d96527be8
[ie/stacommu] Avoid partial stream formats (#13412)
Authored by: bashonly
2025-06-06 21:53:30 +00:00
gamer191
1fd0e88b67
[ie/youtube] Add tv_simply player client (#13389)
Authored by: gamer191
2025-06-06 21:50:36 +00:00
gamer191
231349786e
[ie/youtube] Extract srt subtitles (#13411)
Closes #1734
Authored by: gamer191
2025-06-06 19:32:03 +00:00
Sipherdrakon
f37d599a69
[ie/aenetworks] Fix playlist extractors (#13408)
Fix 41952255d114163c43caa2b07416210cbe7709b3

Authored by: Sipherdrakon
2025-06-06 09:50:21 +00:00
Simon Sawicki
9e38b273b7
[ie/youtube] Rework nsig function name extraction (#13403)
Closes #13401

Authored by: Grub4K
2025-06-05 23:50:58 +02:00
doe1080
4e7c1ea346
[ie/umg:de] Rework extractor (#13373)
Authored by: doe1080
2025-06-03 19:20:46 +00:00
barsnick
e1b6062f8c
[ie/svt:play] Fix extractor (#13329)
Closes #13312
Authored by: barsnick, bashonly

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
2025-06-03 02:29:03 +00:00
bashonly
c723c4e5e7
[ie/vimeo] Extract subtitles from player subdomain (#13350)
Closes #12198
Authored by: bashonly
2025-06-01 23:20:29 +00:00
bashonly
148a1eb4c5
[ie/odnoklassniki] Detect and raise when login is required (#13361)
Closes #13360
Authored by: bashonly
2025-06-01 23:18:24 +00:00
bashonly
85c8a405e3
[ie] Improve JSON LD thumbnails extraction (#13368)
Authored by: bashonly, doe1080

Co-authored-by: doe1080 <98906116+doe1080@users.noreply.github.com>
2025-06-01 23:09:47 +00:00
Sipherdrakon
943083edcd
[ie/adobepass] Fix Philo MSO authentication (#13335)
Closes #2603
Authored by: Sipherdrakon
2025-06-01 17:26:33 +00:00
bashonly
3fe72e9eea
[ie/weverse] Support login with oauth refresh tokens (#13284)
Closes #7806
Authored by: bashonly
2025-05-30 23:20:59 +00:00
bashonly
d30a49742c
[ie/youtube] Improve signature extraction debug output (#13327)
Authored by: bashonly
2025-05-30 23:16:47 +00:00
bashonly
6d265388c6
[ie/10play] Fix extractor (#13349)
Closes #12337
Authored by: bashonly
2025-05-30 22:51:25 +00:00
bashonly
a9b3700698
[test:postprocessors] Remove binary thumbnail test data (#13341)
Authored by: bashonly
2025-05-30 22:48:48 +00:00
bashonly
201812100f
[build] Fix macOS requirements caching (#13328)
Authored by: bashonly
2025-05-28 18:13:48 +00:00
bashonly
cc749a8a3b
[build] Exclude pkg_resources from being collected (#13320)
Closes #13311
Authored by: bashonly
2025-05-27 23:11:58 +00:00
bashonly
f7bbf5a617
[ie/youtube] nsig code improvements and cleanup (#13280)
Authored by: bashonly
2025-05-26 22:54:43 +00:00
Brian
b5be29fa58
[ie/youtube] Fix --mark-watched support (#13222)
Closes #11532
Authored by: iednod55, brian6932

Co-authored-by: iednod55 <210167282+iednod55@users.noreply.github.com>
2025-05-26 22:31:22 +00:00
bashonly
6121559e02 [ie/vice] Mark extractors as broken (#13131)
Authored by: bashonly
2025-05-26 15:57:19 -05:00
Max
2e5bf002da [ie/go] Fix provider-locked content extraction (#13131)
Closes #1770, Closes #8073
Authored by: maxbin123, bashonly

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
2025-05-26 15:57:19 -05:00
Max
6693d66033 [ie/aenetworks] Fix provider-locked content extraction (#13131)
Authored by: maxbin123
2025-05-26 15:57:19 -05:00
Max
b094747e93 [ie/WatchESPN] Fix provider-locked content extraction (#13131)
Closes #4662
Authored by: maxbin123
2025-05-26 15:57:19 -05:00
bashonly
98f8eec956 [ie/brightcove:new] Adapt to new AdobePass requirement (#13131)
Authored by: bashonly
2025-05-26 15:57:19 -05:00
bashonly
0daddc780d [ie/turner] Adapt extractors to new AdobePass flow (#13131)
Authored by: bashonly
2025-05-26 15:57:19 -05:00
bashonly
2d7949d564 [ie/nbc] Rework and adapt extractors to new AdobePass flow (#13131)
Closes #1032, Closes #10874, Closes #11148, Closes #12432
Authored by: bashonly
2025-05-26 15:57:19 -05:00
bashonly
ed108b3ea4 [ie/theplatform] Improve metadata extraction (#13131)
Authored by: bashonly
2025-05-26 15:57:19 -05:00
Max
eee90acc47 [ie/adobepass] Add Fubo MSO (#13131)
Closes #8287
Authored by: maxbin123
2025-05-26 15:57:19 -05:00
Max
711c5d5d09 [ie/adobepass] Rework to require software statement (#13131)
* Also removes broken cookie support

Closes #11811
Authored by: maxbin123, bashonly

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
2025-05-26 15:57:19 -05:00
bashonly
89c1b349ad [ie/adobepass] Validate login URL before sending credentials (#13131)
Authored by: bashonly
2025-05-26 15:57:19 -05:00
bashonly
0ee1102268 [ie/adobepass] Always add newer user-agent when required (#13131)
Fix dcfeea4dd5e5686821350baa6c7767a011944867

Closes #516
Authored by: bashonly
2025-05-26 15:57:19 -05:00
doe1080
7794374de8
[ie/twitter:broadcast] Support events URLs (#13248)
Closes #12989
Authored by: doe1080
2025-05-23 19:25:56 +00:00
bashonly
538eb30567
[ie/podchaser] Fix extractor (#13271)
Closes #13269
Authored by: bashonly
2025-05-23 17:42:24 +00:00
doe1080
f8051e3a61
[ie/toutiao] Add extractor (#13246)
Closes #12125
Authored by: doe1080
2025-05-23 17:29:55 +00:00
bashonly
52f9729c9a
[ie/twitcasting] Fix password-protected livestream support (#13097)
Closes #13096
Authored by: bashonly
2025-05-23 12:58:53 +00:00
bashonly
1a8a03ea8d
[ie/patreon] Fix referer header used for embeds (#13276)
Fix e0d6c0822930f6e63f574d46d946a58b73ecd10c

Closes #13263
Authored by: bashonly
2025-05-23 12:53:36 +00:00
bashonly
e0d6c08229
[ie/patreon] Fix m3u8 formats extraction (#13266)
Closes #13263
Authored by: bashonly
2025-05-22 22:42:42 +00:00
bashonly
53ea743a9c
[ie/youtube] Fix automatic captions for some client combinations (#13268)
Fix 32ed5f107c6c641958d1cd2752e130de4db55a13

Authored by: bashonly
2025-05-22 22:41:31 +00:00
github-actions[bot]
415b4c9f95 Release 2025.05.22
Created by: bashonly

:ci skip all
2025-05-22 09:49:11 +00:00
bashonly
7977b329ed
[cleanup] Misc (#13166)
Authored by: bashonly
2025-05-22 09:33:11 +00:00
Matt Broadway
e491fd4d09
[cookies] Fix Linux desktop environment detection (#13197)
Closes #12885
Authored by: mbway
2025-05-22 09:22:11 +00:00
bashonly
32ed5f107c
[ie/youtube] Add PO token support for subtitles (#13234)
Closes #13075
Authored by: bashonly, coletdjnz

Co-authored-by: coletdjnz <coletdjnz@protonmail.com>
2025-05-22 09:13:42 +00:00
sepro
167d7a9f0f
[jsinterp] Fix increment/decrement evaluation (#13238)
Closes #13241
Authored by: seproDev, bashonly

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
2025-05-21 22:27:07 +00:00
garret1317
83fabf3524
[ie/xinpianchang] Fix extractor (#13245)
Closes #12737
Authored by: garret1317
2025-05-21 18:29:35 +00:00
bashonly
00b1bec552
[ie/twitch] Support --live-from-start (#13202)
Closes #10520
Authored by: bashonly
2025-05-20 21:53:54 +00:00
Yasin Özel
c7e575e316
[ie/youtube] Fix geo-restriction error handling (#13217)
Authored by: yozel
2025-05-20 21:39:27 +00:00
Subrat Lima
31e090cb78
[ie/picarto:vod] Support /profile/ video URLs (#13227)
Closes #13192
Authored by: subrat-lima
2025-05-20 21:37:21 +00:00
bashonly
545c1a5b6f
[ie/vimeo:event] Add extractor (#13216)
Closes #1608
Authored by: bashonly
2025-05-20 18:28:34 +00:00
bashonly
f569be4602
[ie/niconico] Fix error handling (#13236)
Closes #11430
Authored by: bashonly
2025-05-20 18:25:27 +00:00
coletdjnz
2685654a37
[ie/youtube] Add a PO Token Provider Framework (#12840)
https://github.com/yt-dlp/yt-dlp/tree/master/yt_dlp/extractor/youtube/pot/README.md

Authored by: coletdjnz
2025-05-18 13:45:26 +12:00
Povilas Balzaravičius
abf58dcd6a
[ie/LRTRadio] Fix extractor (#13200)
Authored by: Pawka
2025-05-17 20:37:00 +00:00
Geoffrey Frogeye
20f288bdc2
[ie/nebula] Support --mark-watched (#13120)
Authored by: GeoffreyFrogeye
2025-05-16 23:24:30 +00:00
bashonly
f475e8b529
[ie/once] Remove extractor (#13164)
Authored by: bashonly
2025-05-16 23:16:58 +00:00
bashonly
41c0a1fb89
[ie/1tv] Fix extractor (#13168)
Closes #13167
Authored by: bashonly
2025-05-16 23:16:03 +00:00
Jan Baier
a7d9a5eb79
[ie/iprima] Fix login support (#12937)
Closes #12387
Authored by: baierjan
2025-05-16 23:04:24 +00:00
Subrat Lima
586b557b12 [ie/jiosaavn:artist] Add extractor (#12803)
Closes #10823

Authored by: subrat-lima
2025-05-11 03:01:13 -05:00
Subrat Lima
317f4b8006 [ie/jiosaavn:show:playlist] Add extractor (#12803)
Closes #12766

Authored by: subrat-lima
2025-05-11 03:01:13 -05:00
Subrat Lima
6839276496 [ie/jiosaavn:show] Add extractor (#12803)
Closes #12766

Authored by: subrat-lima
2025-05-11 03:01:13 -05:00
bashonly
cbcfe6378d
[ie/sprout] Remove extractor (#13149)
Authored by: bashonly
2025-05-10 23:22:53 +00:00
bashonly
7dbb47f84f
[ie/cartoonnetwork] Remove extractor (#13148)
Authored by: bashonly
2025-05-10 23:22:38 +00:00
bashonly
464c84fedf
[ie/amcnetworks] Fix extractor (#13147)
Authored by: bashonly
2025-05-10 23:15:12 +00:00
doe1080
7a7b85c901
[ie/niconico:live] Fix extractor (#13045)
Authored by: doe1080
2025-05-10 22:46:28 +00:00
v3DJG6GL
d880e06080
[ie/playsuisse] Improve metadata extraction (#12466)
Authored by: v3DJG6GL
2025-05-10 22:37:04 +00:00
bashonly
ded11ebc9a
[ie/youtube] Extract media_type for all videos (#13136)
Authored by: bashonly
2025-05-10 22:33:57 +00:00
diman8
ea8498ed53
[ie/SVTPage] Fix extractor (#12957)
Closes #13142
Authored by: diman8
2025-05-10 08:53:59 +00:00
bashonly
b26bc32579
[ie/nytimesarticle] Fix extraction (#13104)
Closes #13098
Authored by: bashonly
2025-05-06 20:32:41 +00:00
bashonly
f123cc83b3
[ie/wat.tv] Improve error handling (#13111)
Closes #8191
Authored by: bashonly
2025-05-05 15:03:07 +00:00
bashonly
0feec6dc13
[ie/youtube] Add web_embedded client for age-restricted videos (#13089)
Authored by: bashonly
2025-05-03 20:11:40 +00:00
bashonly
1d0f6539c4
[ie/bitchute] Fix extractor (#13081)
Closes #13080
Authored by: bashonly
2025-05-03 19:31:33 +00:00
bashonly
17cf9088d0
[build] Bump PyInstaller to v6.13.0 (#13082)
Ref: https://github.com/yt-dlp/yt-dlp/issues/10294

Authored by: bashonly
2025-05-03 17:10:31 +00:00
bashonly
9064d2482d
[build] Bump run-on-arch-action to v3 (#13088)
Authored by: bashonly
2025-05-03 17:08:24 +00:00
Abdulmohsen
8f303afb43
[ie/youtube] Fix --live-from-start support for premieres (#13079)
Closes #8543
Authored by: arabcoders
2025-05-03 15:23:28 +00:00
bashonly
5328eda882
[ie/weverse] Fix live extraction (#13084)
Closes #12883
Authored by: bashonly
2025-05-03 07:19:52 +00:00
118 changed files with 8127 additions and 2125 deletions

View File

@@ -192,7 +192,7 @@ jobs:
        with:
          path: ./repo
      - name: Virtualized Install, Prepare & Build
-       uses: yt-dlp/run-on-arch-action@v2
+       uses: yt-dlp/run-on-arch-action@v3
        with:
          # Ref: https://github.com/uraimo/run-on-arch-action/issues/55
          env: |
@@ -256,7 +256,7 @@ jobs:
        with:
          path: |
            ~/yt-dlp-build-venv
-         key: cache-reqs-${{ github.job }}
+         key: cache-reqs-${{ github.job }}-${{ github.ref }}
      - name: Install Requirements
        run: |
@@ -331,19 +331,16 @@ jobs:
        if: steps.restore-cache.outputs.cache-hit == 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-         cache_key: cache-reqs-${{ github.job }}
-         repository: ${{ github.repository }}
-         branch: ${{ github.ref }}
+         cache_key: cache-reqs-${{ github.job }}-${{ github.ref }}
        run: |
-         gh extension install actions/gh-actions-cache
-         gh actions-cache delete "${cache_key}" -R "${repository}" -B "${branch}" --confirm
+         gh cache delete "${cache_key}"
      - name: Cache requirements
        uses: actions/cache/save@v4
        with:
          path: |
            ~/yt-dlp-build-venv
-         key: cache-reqs-${{ github.job }}
+         key: cache-reqs-${{ github.job }}-${{ github.ref }}

  macos_legacy:
    needs: process
@@ -411,7 +408,7 @@ jobs:
        run: |  # Custom pyinstaller built with https://github.com/yt-dlp/pyinstaller-builds
          python devscripts/install_deps.py -o --include build
          python devscripts/install_deps.py --include curl-cffi
-         python -m pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-6.11.1-py3-none-any.whl"
+         python -m pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-6.13.0-py3-none-any.whl"
      - name: Prepare
        run: |
@@ -460,7 +457,7 @@ jobs:
        run: |
          python devscripts/install_deps.py -o --include build
          python devscripts/install_deps.py
-         python -m pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-6.11.1-py3-none-any.whl"
+         python -m pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-6.13.0-py3-none-any.whl"
      - name: Prepare
        run: |

.gitignore (vendored)
View File

@@ -105,6 +105,8 @@ README.txt
*.zsh
*.spec
test/testdata/sigs/player-*.js
+test/testdata/thumbnails/empty.webp
+test/testdata/thumbnails/foo\ %d\ bar/foo_%d.*

# Binary
/youtube-dl

View File

@@ -770,3 +770,12 @@ NeonMan
pj47x
troex
WouterGordts
+baierjan
+GeoffreyFrogeye
+Pawka
+v3DJG6GL
+yozel
+brian6932
+iednod55
+maxbin123
+nullpos

View File

@@ -4,6 +4,107 @@
# To create a release, dispatch the https://github.com/yt-dlp/yt-dlp/actions/workflows/release.yml workflow on master
-->
### 2025.06.09
#### Extractor changes
- [Improve JSON LD thumbnails extraction](https://github.com/yt-dlp/yt-dlp/commit/85c8a405e3651dc041b758f4744d4fb3c4c55e01) ([#13368](https://github.com/yt-dlp/yt-dlp/issues/13368)) by [bashonly](https://github.com/bashonly), [doe1080](https://github.com/doe1080)
- **10play**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/6d265388c6e943419ac99e9151cf75a3265f980f) ([#13349](https://github.com/yt-dlp/yt-dlp/issues/13349)) by [bashonly](https://github.com/bashonly)
- **adobepass**
- [Add Fubo MSO](https://github.com/yt-dlp/yt-dlp/commit/eee90acc47d7f8de24afaa8b0271ccaefdf6e88c) ([#13131](https://github.com/yt-dlp/yt-dlp/issues/13131)) by [maxbin123](https://github.com/maxbin123)
- [Always add newer user-agent when required](https://github.com/yt-dlp/yt-dlp/commit/0ee1102268cf31b07f8a8318a47424c66b2f7378) ([#13131](https://github.com/yt-dlp/yt-dlp/issues/13131)) by [bashonly](https://github.com/bashonly)
- [Fix Philo MSO authentication](https://github.com/yt-dlp/yt-dlp/commit/943083edcd3df45aaa597a6967bc6c95b720f54c) ([#13335](https://github.com/yt-dlp/yt-dlp/issues/13335)) by [Sipherdrakon](https://github.com/Sipherdrakon)
- [Rework to require software statement](https://github.com/yt-dlp/yt-dlp/commit/711c5d5d098fee2992a1a624b1c4b30364b91426) ([#13131](https://github.com/yt-dlp/yt-dlp/issues/13131)) by [bashonly](https://github.com/bashonly), [maxbin123](https://github.com/maxbin123)
- [Validate login URL before sending credentials](https://github.com/yt-dlp/yt-dlp/commit/89c1b349ad81318d9d3bea76c01c891696e58d38) ([#13131](https://github.com/yt-dlp/yt-dlp/issues/13131)) by [bashonly](https://github.com/bashonly)
- **aenetworks**
- [Fix playlist extractors](https://github.com/yt-dlp/yt-dlp/commit/f37d599a697e82fe68b423865897d55bae34f373) ([#13408](https://github.com/yt-dlp/yt-dlp/issues/13408)) by [Sipherdrakon](https://github.com/Sipherdrakon)
- [Fix provider-locked content extraction](https://github.com/yt-dlp/yt-dlp/commit/6693d6603358ae6beca834dbd822a7917498b813) ([#13131](https://github.com/yt-dlp/yt-dlp/issues/13131)) by [maxbin123](https://github.com/maxbin123)
- **bilibilibangumi**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/13e55162719528d42d2133e16b65ff59a667a6e4) ([#13416](https://github.com/yt-dlp/yt-dlp/issues/13416)) by [c-basalt](https://github.com/c-basalt)
- **brightcove**: new: [Adapt to new AdobePass requirement](https://github.com/yt-dlp/yt-dlp/commit/98f8eec956e3b16cb66a3d49cc71af3807db795e) ([#13131](https://github.com/yt-dlp/yt-dlp/issues/13131)) by [bashonly](https://github.com/bashonly)
- **cu.ntv.co.jp**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/aa863ddab9b1d104678e9cf39bb76f5b14fca660) ([#13302](https://github.com/yt-dlp/yt-dlp/issues/13302)) by [doe1080](https://github.com/doe1080), [nullpos](https://github.com/nullpos)
- **go**: [Fix provider-locked content extraction](https://github.com/yt-dlp/yt-dlp/commit/2e5bf002dad16f5ce35aa2023d392c9e518fcd8f) ([#13131](https://github.com/yt-dlp/yt-dlp/issues/13131)) by [bashonly](https://github.com/bashonly), [maxbin123](https://github.com/maxbin123)
- **nbc**: [Rework and adapt extractors to new AdobePass flow](https://github.com/yt-dlp/yt-dlp/commit/2d7949d5642bc37d1e71bf00c9a55260e5505d58) ([#13131](https://github.com/yt-dlp/yt-dlp/issues/13131)) by [bashonly](https://github.com/bashonly)
- **nobelprize**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/97ddfefeb4faba6e61cd80996c16952b8eab16f3) ([#13205](https://github.com/yt-dlp/yt-dlp/issues/13205)) by [doe1080](https://github.com/doe1080)
- **odnoklassniki**: [Detect and raise when login is required](https://github.com/yt-dlp/yt-dlp/commit/148a1eb4c59e127965396c7a6e6acf1979de459e) ([#13361](https://github.com/yt-dlp/yt-dlp/issues/13361)) by [bashonly](https://github.com/bashonly)
- **patreon**: [Fix m3u8 formats extraction](https://github.com/yt-dlp/yt-dlp/commit/e0d6c0822930f6e63f574d46d946a58b73ecd10c) ([#13266](https://github.com/yt-dlp/yt-dlp/issues/13266)) by [bashonly](https://github.com/bashonly) (With fixes in [1a8a03e](https://github.com/yt-dlp/yt-dlp/commit/1a8a03ea8d827107319a18076ee3505090667c5a))
- **podchaser**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/538eb305673c26bff6a2b12f1c96375fe02ce41a) ([#13271](https://github.com/yt-dlp/yt-dlp/issues/13271)) by [bashonly](https://github.com/bashonly)
- **sr**: mediathek: [Improve metadata extraction](https://github.com/yt-dlp/yt-dlp/commit/e3c605a61f4cc2de9059f37434fa108c3c20f58e) ([#13294](https://github.com/yt-dlp/yt-dlp/issues/13294)) by [doe1080](https://github.com/doe1080)
- **stacommu**: [Avoid partial stream formats](https://github.com/yt-dlp/yt-dlp/commit/5d96527be80dc1ed1702d9cd548ff86de570ad70) ([#13412](https://github.com/yt-dlp/yt-dlp/issues/13412)) by [bashonly](https://github.com/bashonly)
- **startrek**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/a8bf0011bde92b3f1324a98bfbd38932fd3ebe18) ([#13188](https://github.com/yt-dlp/yt-dlp/issues/13188)) by [doe1080](https://github.com/doe1080)
- **svt**: play: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/e1b6062f8c4a3fa33c65269d48d09ec78de765a2) ([#13329](https://github.com/yt-dlp/yt-dlp/issues/13329)) by [barsnick](https://github.com/barsnick), [bashonly](https://github.com/bashonly)
- **telecinco**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/03dba2012d9bd3f402fa8c2f122afba89bbd22a4) ([#13379](https://github.com/yt-dlp/yt-dlp/issues/13379)) by [bashonly](https://github.com/bashonly)
- **theplatform**: [Improve metadata extraction](https://github.com/yt-dlp/yt-dlp/commit/ed108b3ea481c6a4b5215a9302ba92d74baa2425) ([#13131](https://github.com/yt-dlp/yt-dlp/issues/13131)) by [bashonly](https://github.com/bashonly)
- **toutiao**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/f8051e3a61686c5db1de5f5746366ecfbc3ad20c) ([#13246](https://github.com/yt-dlp/yt-dlp/issues/13246)) by [doe1080](https://github.com/doe1080)
- **turner**: [Adapt extractors to new AdobePass flow](https://github.com/yt-dlp/yt-dlp/commit/0daddc780d3ac5bebc3a3ec5b884d9243cbc0745) ([#13131](https://github.com/yt-dlp/yt-dlp/issues/13131)) by [bashonly](https://github.com/bashonly)
- **twitcasting**: [Fix password-protected livestream support](https://github.com/yt-dlp/yt-dlp/commit/52f9729c9a92ad4656d746ff0b1acecb87b3e96d) ([#13097](https://github.com/yt-dlp/yt-dlp/issues/13097)) by [bashonly](https://github.com/bashonly)
- **twitter**: broadcast: [Support events URLs](https://github.com/yt-dlp/yt-dlp/commit/7794374de8afb20499b023107e2abfd4e6b93ee4) ([#13248](https://github.com/yt-dlp/yt-dlp/issues/13248)) by [doe1080](https://github.com/doe1080)
- **umg**: de: [Rework extractor](https://github.com/yt-dlp/yt-dlp/commit/4e7c1ea346b510280218b47e8653dbbca3a69870) ([#13373](https://github.com/yt-dlp/yt-dlp/issues/13373)) by [doe1080](https://github.com/doe1080)
- **vice**: [Mark extractors as broken](https://github.com/yt-dlp/yt-dlp/commit/6121559e027a04574690799c1776bc42bb51af31) ([#13131](https://github.com/yt-dlp/yt-dlp/issues/13131)) by [bashonly](https://github.com/bashonly)
- **vimeo**: [Extract subtitles from player subdomain](https://github.com/yt-dlp/yt-dlp/commit/c723c4e5e78263df178dbe69844a3d05f3ef9e35) ([#13350](https://github.com/yt-dlp/yt-dlp/issues/13350)) by [bashonly](https://github.com/bashonly)
- **watchespn**: [Fix provider-locked content extraction](https://github.com/yt-dlp/yt-dlp/commit/b094747e93cfb0a2c53007120e37d0d84d41f030) ([#13131](https://github.com/yt-dlp/yt-dlp/issues/13131)) by [maxbin123](https://github.com/maxbin123)
- **weverse**: [Support login with oauth refresh tokens](https://github.com/yt-dlp/yt-dlp/commit/3fe72e9eea38d9a58211cde42cfaa577ce020e2c) ([#13284](https://github.com/yt-dlp/yt-dlp/issues/13284)) by [bashonly](https://github.com/bashonly)
- **youtube**
- [Add `tv_simply` player client](https://github.com/yt-dlp/yt-dlp/commit/1fd0e88b67db53ad163393d6965f68e908fa70e3) ([#13389](https://github.com/yt-dlp/yt-dlp/issues/13389)) by [gamer191](https://github.com/gamer191)
- [Extract srt subtitles](https://github.com/yt-dlp/yt-dlp/commit/231349786e8c42089c2e079ec94c0ea866c37999) ([#13411](https://github.com/yt-dlp/yt-dlp/issues/13411)) by [gamer191](https://github.com/gamer191)
- [Fix `--mark-watched` support](https://github.com/yt-dlp/yt-dlp/commit/b5be29fa58ec98226e11621fd9c58585bcff6879) ([#13222](https://github.com/yt-dlp/yt-dlp/issues/13222)) by [brian6932](https://github.com/brian6932), [iednod55](https://github.com/iednod55)
- [Fix automatic captions for some client combinations](https://github.com/yt-dlp/yt-dlp/commit/53ea743a9c158f8ca2d75a09ca44ba68606042d8) ([#13268](https://github.com/yt-dlp/yt-dlp/issues/13268)) by [bashonly](https://github.com/bashonly)
- [Improve signature extraction debug output](https://github.com/yt-dlp/yt-dlp/commit/d30a49742cfa22e61c47df4ac0e7334d648fb85d) ([#13327](https://github.com/yt-dlp/yt-dlp/issues/13327)) by [bashonly](https://github.com/bashonly)
- [Rework nsig function name extraction](https://github.com/yt-dlp/yt-dlp/commit/9e38b273b7ac942e7e9fc05a651ed810ab7d30ba) ([#13403](https://github.com/yt-dlp/yt-dlp/issues/13403)) by [Grub4K](https://github.com/Grub4K)
- [nsig code improvements and cleanup](https://github.com/yt-dlp/yt-dlp/commit/f7bbf5a617f9ab54ef51eaef99be36e175b5e9c3) ([#13280](https://github.com/yt-dlp/yt-dlp/issues/13280)) by [bashonly](https://github.com/bashonly)
- **zdf**: [Fix language extraction and format sorting](https://github.com/yt-dlp/yt-dlp/commit/db162b76f6bdece50babe2e0cacfe56888c2e125) ([#13313](https://github.com/yt-dlp/yt-dlp/issues/13313)) by [InvalidUsernameException](https://github.com/InvalidUsernameException)
#### Misc. changes
- **build**
- [Exclude `pkg_resources` from being collected](https://github.com/yt-dlp/yt-dlp/commit/cc749a8a3b8b6e5c05318868c72a403f376a1b38) ([#13320](https://github.com/yt-dlp/yt-dlp/issues/13320)) by [bashonly](https://github.com/bashonly)
- [Fix macOS requirements caching](https://github.com/yt-dlp/yt-dlp/commit/201812100f315c6727a4418698d5b4e8a79863d4) ([#13328](https://github.com/yt-dlp/yt-dlp/issues/13328)) by [bashonly](https://github.com/bashonly)
- **cleanup**: Miscellaneous: [339614a](https://github.com/yt-dlp/yt-dlp/commit/339614a173c74b42d63e858c446a9cae262a13af) by [bashonly](https://github.com/bashonly)
- **test**: postprocessors: [Remove binary thumbnail test data](https://github.com/yt-dlp/yt-dlp/commit/a9b370069838e84d44ac7ad095d657003665885a) ([#13341](https://github.com/yt-dlp/yt-dlp/issues/13341)) by [bashonly](https://github.com/bashonly)
### 2025.05.22
#### Core changes
- **cookies**: [Fix Linux desktop environment detection](https://github.com/yt-dlp/yt-dlp/commit/e491fd4d090db3af52a82863fb0553dd5e17fb85) ([#13197](https://github.com/yt-dlp/yt-dlp/issues/13197)) by [mbway](https://github.com/mbway)
- **jsinterp**: [Fix increment/decrement evaluation](https://github.com/yt-dlp/yt-dlp/commit/167d7a9f0ffd1b4fe600193441bdb7358db2740b) ([#13238](https://github.com/yt-dlp/yt-dlp/issues/13238)) by [bashonly](https://github.com/bashonly), [seproDev](https://github.com/seproDev)
#### Extractor changes
- **1tv**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/41c0a1fb89628696f8bb88e2b9f3a68f355b8c26) ([#13168](https://github.com/yt-dlp/yt-dlp/issues/13168)) by [bashonly](https://github.com/bashonly)
- **amcnetworks**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/464c84fedf78eef822a431361155f108b5df96d7) ([#13147](https://github.com/yt-dlp/yt-dlp/issues/13147)) by [bashonly](https://github.com/bashonly)
- **bitchute**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/1d0f6539c47e5d5c68c3c47cdb7075339e2885ac) ([#13081](https://github.com/yt-dlp/yt-dlp/issues/13081)) by [bashonly](https://github.com/bashonly)
- **cartoonnetwork**: [Remove extractor](https://github.com/yt-dlp/yt-dlp/commit/7dbb47f84f0ee1266a3a01f58c9bc4c76d76794a) ([#13148](https://github.com/yt-dlp/yt-dlp/issues/13148)) by [bashonly](https://github.com/bashonly)
- **iprima**: [Fix login support](https://github.com/yt-dlp/yt-dlp/commit/a7d9a5eb79ceeecb851389f3f2c88597871ca3f2) ([#12937](https://github.com/yt-dlp/yt-dlp/issues/12937)) by [baierjan](https://github.com/baierjan)
- **jiosaavn**
- artist: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/586b557b124f954d3f625360ebe970989022ad97) ([#12803](https://github.com/yt-dlp/yt-dlp/issues/12803)) by [subrat-lima](https://github.com/subrat-lima)
- playlist, show: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/317f4b8006c2c0f0f64f095b1485163ad97c9053) ([#12803](https://github.com/yt-dlp/yt-dlp/issues/12803)) by [subrat-lima](https://github.com/subrat-lima)
- show: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/6839276496d8814cf16f58b637e45663467928e6) ([#12803](https://github.com/yt-dlp/yt-dlp/issues/12803)) by [subrat-lima](https://github.com/subrat-lima)
- **lrtradio**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/abf58dcd6a09e14eec4ea82ae12f79a0337cb383) ([#13200](https://github.com/yt-dlp/yt-dlp/issues/13200)) by [Pawka](https://github.com/Pawka)
- **nebula**: [Support `--mark-watched`](https://github.com/yt-dlp/yt-dlp/commit/20f288bdc2173c7cc58d709d25ca193c1f6001e7) ([#13120](https://github.com/yt-dlp/yt-dlp/issues/13120)) by [GeoffreyFrogeye](https://github.com/GeoffreyFrogeye)
- **niconico**
- [Fix error handling](https://github.com/yt-dlp/yt-dlp/commit/f569be4602c2a857087e495d5d7ed6060cd97abe) ([#13236](https://github.com/yt-dlp/yt-dlp/issues/13236)) by [bashonly](https://github.com/bashonly)
- live: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/7a7b85c9014d96421e18aa7ea5f4c1bee5ceece0) ([#13045](https://github.com/yt-dlp/yt-dlp/issues/13045)) by [doe1080](https://github.com/doe1080)
- **nytimesarticle**: [Fix extraction](https://github.com/yt-dlp/yt-dlp/commit/b26bc32579c00ef579d75a835807ccc87d20ee0a) ([#13104](https://github.com/yt-dlp/yt-dlp/issues/13104)) by [bashonly](https://github.com/bashonly)
- **once**: [Remove extractor](https://github.com/yt-dlp/yt-dlp/commit/f475e8b529d18efdad603ffda02a56e707fe0e2c) ([#13164](https://github.com/yt-dlp/yt-dlp/issues/13164)) by [bashonly](https://github.com/bashonly)
- **picarto**: vod: [Support `/profile/` video URLs](https://github.com/yt-dlp/yt-dlp/commit/31e090cb787f3504ec25485adff9a2a51d056734) ([#13227](https://github.com/yt-dlp/yt-dlp/issues/13227)) by [subrat-lima](https://github.com/subrat-lima)
- **playsuisse**: [Improve metadata extraction](https://github.com/yt-dlp/yt-dlp/commit/d880e060803ae8ed5a047e578cca01e1f0e630ce) ([#12466](https://github.com/yt-dlp/yt-dlp/issues/12466)) by [v3DJG6GL](https://github.com/v3DJG6GL)
- **sprout**: [Remove extractor](https://github.com/yt-dlp/yt-dlp/commit/cbcfe6378dde33a650e3852ab17ad4503b8e008d) ([#13149](https://github.com/yt-dlp/yt-dlp/issues/13149)) by [bashonly](https://github.com/bashonly)
- **svtpage**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/ea8498ed534642dd7e925961b97b934987142fd3) ([#12957](https://github.com/yt-dlp/yt-dlp/issues/12957)) by [diman8](https://github.com/diman8)
- **twitch**: [Support `--live-from-start`](https://github.com/yt-dlp/yt-dlp/commit/00b1bec55249cf2ad6271d36492c51b34b6459d1) ([#13202](https://github.com/yt-dlp/yt-dlp/issues/13202)) by [bashonly](https://github.com/bashonly)
- **vimeo**: event: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/545c1a5b6f2fe88722b41aef0e7485bf3be3f3f9) ([#13216](https://github.com/yt-dlp/yt-dlp/issues/13216)) by [bashonly](https://github.com/bashonly)
- **wat.tv**: [Improve error handling](https://github.com/yt-dlp/yt-dlp/commit/f123cc83b3aea45053f5fa1d9141048b01fc2774) ([#13111](https://github.com/yt-dlp/yt-dlp/issues/13111)) by [bashonly](https://github.com/bashonly)
- **weverse**: [Fix live extraction](https://github.com/yt-dlp/yt-dlp/commit/5328eda8820cc5f21dcf917684d23fbdca41831d) ([#13084](https://github.com/yt-dlp/yt-dlp/issues/13084)) by [bashonly](https://github.com/bashonly)
- **xinpianchang**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/83fabf352489d52843f67e6e9cc752db86d27e6e) ([#13245](https://github.com/yt-dlp/yt-dlp/issues/13245)) by [garret1317](https://github.com/garret1317)
- **youtube**
- [Add PO token support for subtitles](https://github.com/yt-dlp/yt-dlp/commit/32ed5f107c6c641958d1cd2752e130de4db55a13) ([#13234](https://github.com/yt-dlp/yt-dlp/issues/13234)) by [bashonly](https://github.com/bashonly), [coletdjnz](https://github.com/coletdjnz)
- [Add `web_embedded` client for age-restricted videos](https://github.com/yt-dlp/yt-dlp/commit/0feec6dc131f488428bf881519e7c69766fbb9ae) ([#13089](https://github.com/yt-dlp/yt-dlp/issues/13089)) by [bashonly](https://github.com/bashonly)
- [Add a PO Token Provider Framework](https://github.com/yt-dlp/yt-dlp/commit/2685654a37141cca63eda3a92da0e2706e23ccfd) ([#12840](https://github.com/yt-dlp/yt-dlp/issues/12840)) by [coletdjnz](https://github.com/coletdjnz)
- [Extract `media_type` for all videos](https://github.com/yt-dlp/yt-dlp/commit/ded11ebc9afba6ba33923375103e9be2d7c804e7) ([#13136](https://github.com/yt-dlp/yt-dlp/issues/13136)) by [bashonly](https://github.com/bashonly)
- [Fix `--live-from-start` support for premieres](https://github.com/yt-dlp/yt-dlp/commit/8f303afb43395be360cafd7ad4ce2b6e2eedfb8a) ([#13079](https://github.com/yt-dlp/yt-dlp/issues/13079)) by [arabcoders](https://github.com/arabcoders)
- [Fix geo-restriction error handling](https://github.com/yt-dlp/yt-dlp/commit/c7e575e31608c19c5b26c10a4229db89db5fc9a8) ([#13217](https://github.com/yt-dlp/yt-dlp/issues/13217)) by [yozel](https://github.com/yozel)
#### Misc. changes
- **build**
- [Bump PyInstaller to v6.13.0](https://github.com/yt-dlp/yt-dlp/commit/17cf9088d0d535e4a7feffbf02bd49cd9dae5ab9) ([#13082](https://github.com/yt-dlp/yt-dlp/issues/13082)) by [bashonly](https://github.com/bashonly)
- [Bump run-on-arch-action to v3](https://github.com/yt-dlp/yt-dlp/commit/9064d2482d1fe722bbb4a49731fe0711c410d1c8) ([#13088](https://github.com/yt-dlp/yt-dlp/issues/13088)) by [bashonly](https://github.com/bashonly)
- **cleanup**: Miscellaneous: [7977b32](https://github.com/yt-dlp/yt-dlp/commit/7977b329ed97b216e37bd402f4935f28c00eac9e) by [bashonly](https://github.com/bashonly)
### 2025.04.30
#### Important changes

View File

@@ -18,10 +18,11 @@ pypi-files: AUTHORS Changelog.md LICENSE README.md README.txt supportedsites \
tar pypi-files lazy-extractors install uninstall

clean-test:
-	rm -rf test/testdata/sigs/player-*.js tmp/ *.annotations.xml *.aria2 *.description *.dump *.frag \
+	rm -rf tmp/ *.annotations.xml *.aria2 *.description *.dump *.frag \
	*.frag.aria2 *.frag.urls *.info.json *.live_chat.json *.meta *.part* *.tmp *.temp *.unknown_video *.ytdl \
	*.3gp *.ape *.ass *.avi *.desktop *.f4v *.flac *.flv *.gif *.jpeg *.jpg *.lrc *.m4a *.m4v *.mhtml *.mkv *.mov *.mp3 *.mp4 \
-	*.mpg *.mpga *.oga *.ogg *.opus *.png *.sbv *.srt *.ssa *.swf *.tt *.ttml *.url *.vtt *.wav *.webloc *.webm *.webp
+	*.mpg *.mpga *.oga *.ogg *.opus *.png *.sbv *.srt *.ssa *.swf *.tt *.ttml *.url *.vtt *.wav *.webloc *.webm *.webp \
+	test/testdata/sigs/player-*.js test/testdata/thumbnails/empty.webp "test/testdata/thumbnails/foo %d bar/foo_%d."*

clean-dist:
	rm -rf yt-dlp.1.temp.md yt-dlp.1 README.txt MANIFEST build/ dist/ .coverage cover/ yt-dlp.tar.gz completions/ \
	yt_dlp/extractor/lazy_extractors.py *.spec CONTRIBUTING.md.tmp yt-dlp yt-dlp.exe yt_dlp.egg-info/ AUTHORS

View File

@@ -44,6 +44,7 @@ yt-dlp is a feature-rich command-line audio/video downloader with support for [thousands of sites](supportedsites.md)
* [Post-processing Options](#post-processing-options)
* [SponsorBlock Options](#sponsorblock-options)
* [Extractor Options](#extractor-options)
+* [Preset Aliases](#preset-aliases)
* [CONFIGURATION](#configuration)
* [Configuration file encoding](#configuration-file-encoding)
* [Authentication with netrc](#authentication-with-netrc)
@@ -348,8 +349,8 @@ If you fork the project on GitHub, you can run your fork's [build workflow](.github/workflows/build.yml)
    --no-flat-playlist              Fully extract the videos of a playlist
                                    (default)
    --live-from-start               Download livestreams from the start.
-                                   Currently only supported for YouTube
-                                   (Experimental)
+                                   Currently experimental and only supported
+                                   for YouTube and Twitch
    --no-live-from-start            Download livestreams from the current time
                                    (default)
    --wait-for-video MIN[-MAX]      Wait for scheduled streams to become
@@ -375,12 +376,12 @@ If you fork the project on GitHub, you can run your fork's [build workflow](.github/workflows/build.yml)
                                    an alias starts with a dash "-", it is
                                    prefixed with "--". Arguments are parsed
                                    according to the Python string formatting
-                                   mini-language. E.g. --alias get-audio,-X
-                                   "-S=aext:{0},abr -x --audio-format {0}"
-                                   creates options "--get-audio" and "-X" that
-                                   takes an argument (ARG0) and expands to
-                                   "-S=aext:ARG0,abr -x --audio-format ARG0".
-                                   All defined aliases are listed in the --help
+                                   mini-language. E.g. --alias get-audio,-X "-S
+                                   aext:{0},abr -x --audio-format {0}" creates
+                                   options "--get-audio" and "-X" that takes an
+                                   argument (ARG0) and expands to "-S
+                                   aext:ARG0,abr -x --audio-format ARG0". All
+                                   defined aliases are listed in the --help
                                    output. Alias options can trigger more
                                    aliases; so be careful to avoid defining
                                    recursive options. As a safety measure, each
@@ -1105,6 +1106,10 @@ Make chapter entries for, or remove various segments (sponsor,
                                    arguments for different extractors

## Preset Aliases:
+Predefined aliases for convenience and ease of use. Note that future
+versions of yt-dlp may add or adjust presets, but the existing preset
+names will not be changed or removed
+
    -t mp3                          -f 'ba[acodec^=mp3]/ba/b' -x --audio-format
                                    mp3
@@ -1790,11 +1795,12 @@ Note: In CLI, `ARG` can use `-` instead of `_`; e.g. `youtube:player-client"` becomes `youtube:player_client"`
The following extractors use this feature:

#### youtube
-* `lang`: Prefer translated metadata (`title`, `description` etc) of this language code (case-sensitive). By default, the video primary language metadata is preferred, with a fallback to `en` translated. See [youtube.py](https://github.com/yt-dlp/yt-dlp/blob/c26f9b991a0681fd3ea548d535919cec1fbbd430/yt_dlp/extractor/youtube.py#L381-L390) for list of supported content language codes
+* `lang`: Prefer translated metadata (`title`, `description` etc) of this language code (case-sensitive). By default, the video primary language metadata is preferred, with a fallback to `en` translated. See [youtube/_base.py](https://github.com/yt-dlp/yt-dlp/blob/415b4c9f955b1a0391204bd24a7132590e7b3bdb/yt_dlp/extractor/youtube/_base.py#L402-L409) for the list of supported content language codes
* `skip`: One or more of `hls`, `dash` or `translated_subs` to skip extraction of the m3u8 manifests, dash manifests and [auto-translated subtitles](https://github.com/yt-dlp/yt-dlp/issues/4090#issuecomment-1158102032) respectively
-* `player_client`: Clients to extract video data from. The currently available clients are `web`, `web_safari`, `web_embedded`, `web_music`, `web_creator`, `mweb`, `ios`, `android`, `android_vr`, `tv` and `tv_embedded`. By default, `tv,ios,web` is used, or `tv,web` is used when authenticating with cookies. The `web_music` client is added for `music.youtube.com` URLs when logged-in cookies are used. The `tv_embedded` and `web_creator` clients are added for age-restricted videos if account age-verification is required. Some clients, such as `web` and `web_music`, require a `po_token` for their formats to be downloadable. Some clients, such as `web_creator`, will only work with authentication. Not all clients support authentication via cookies. You can use `default` for the default clients, or you can use `all` for all clients (not recommended). You can prefix a client with `-` to exclude it, e.g. `youtube:player_client=default,-ios`
+* `player_client`: Clients to extract video data from. The currently available clients are `web`, `web_safari`, `web_embedded`, `web_music`, `web_creator`, `mweb`, `ios`, `android`, `android_vr`, `tv`, `tv_simply` and `tv_embedded`. By default, `tv,ios,web` is used, or `tv,web` is used when authenticating with cookies. The `web_music` client is added for `music.youtube.com` URLs when logged-in cookies are used. The `web_embedded` client is added for age-restricted videos but only works if the video is embeddable. The `tv_embedded` and `web_creator` clients are added for age-restricted videos if account age-verification is required. Some clients, such as `web` and `web_music`, require a `po_token` for their formats to be downloadable. Some clients, such as `web_creator`, will only work with authentication. Not all clients support authentication via cookies. You can use `default` for the default clients, or you can use `all` for all clients (not recommended). You can prefix a client with `-` to exclude it, e.g. `youtube:player_client=default,-ios`
* `player_skip`: Skip some network requests that are generally needed for robust extraction. One or more of `configs` (skip client configs), `webpage` (skip initial webpage), `js` (skip js player), `initial_data` (skip initial data/next ep request). While these options can help reduce the number of requests needed or avoid some rate-limiting, they could cause issues such as missing formats or metadata. See [#860](https://github.com/yt-dlp/yt-dlp/pull/860) and [#12826](https://github.com/yt-dlp/yt-dlp/issues/12826) for more details
* `player_params`: YouTube player parameters to use for player requests. Will overwrite any default ones set by yt-dlp.
+* `player_js_variant`: The player javascript variant to use for signature and nsig deciphering. The known variants are: `main`, `tce`, `tv`, `tv_es6`, `phone`, `tablet`. Only `main` is recommended as a possible workaround; the others are for debugging purposes. The default is to use what is prescribed by the site, and can be selected with `actual`
* `comment_sort`: `top` or `new` (default) - choose comment sorting mode (on YouTube's side)
* `max_comments`: Limit the amount of comments to gather. Comma-separated list of integers representing `max-comments,max-parents,max-replies,max-replies-per-thread`. Default is `all,all,all,all`
* E.g. `all,all,1000,10` will get a maximum of 1000 replies total, with up to 10 replies per thread. `1000,all,100` will get a maximum of 1000 comments, with a maximum of 100 replies total
@@ -1804,8 +1810,12 @@ The following extractors use this feature:
* `raise_incomplete_data`: `Incomplete Data Received` raises an error instead of reporting a warning
* `data_sync_id`: Overrides the account Data Sync ID used in Innertube API requests. This may be needed if you are using an account with `youtube:player_skip=webpage,configs` or `youtubetab:skip=webpage`
* `visitor_data`: Overrides the Visitor Data used in Innertube API requests. This should be used with `player_skip=webpage,configs` and without cookies. Note: this may have adverse effects if used improperly. If a session from a browser is wanted, you should pass cookies instead (which contain the Visitor ID)
-* `po_token`: Proof of Origin (PO) Token(s) to use. Comma seperated list of PO Tokens in the format `CLIENT.CONTEXT+PO_TOKEN`, e.g. `youtube:po_token=web.gvs+XXX,web.player=XXX,web_safari.gvs+YYY`. Context can be either `gvs` (Google Video Server URLs) or `player` (Innertube player request)
+* `po_token`: Proof of Origin (PO) Token(s) to use. Comma seperated list of PO Tokens in the format `CLIENT.CONTEXT+PO_TOKEN`, e.g. `youtube:po_token=web.gvs+XXX,web.player=XXX,web_safari.gvs+YYY`. Context can be any of `gvs` (Google Video Server URLs), `player` (Innertube player request) or `subs` (Subtitles)
-* `player_js_variant`: The player javascript variant to use for signature and nsig deciphering. The known variants are: `main`, `tce`, `tv`, `tv_es6`, `phone`, `tablet`. Only `main` is recommended as a possible workaround; the others are for debugging purposes. The default is to use what is prescribed by the site, and can be selected with `actual`
+* `pot_trace`: Enable debug logging for PO Token fetching. Either `true` or `false` (default)
+* `fetch_pot`: Policy to use for fetching a PO Token from providers. One of `always` (always try fetch a PO Token regardless if the client requires one for the given context), `never` (never fetch a PO Token), or `auto` (default; only fetch a PO Token if the client requires one for the given context)
+
+#### youtubepot-webpo
+* `bind_to_visitor_id`: Whether to use the Visitor ID instead of Visitor Data for caching WebPO tokens. Either `true` (default) or `false`

#### youtubetab (YouTube playlists, channels, feeds, etc.)
* `skip`: One or more of `webpage` (skip initial webpage download), `authcheck` (allow the download of playlists requiring authentication when no initial webpage is downloaded. This may cause unwanted behavior, see [#1122](https://github.com/yt-dlp/yt-dlp/pull/1122) for more details)
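For context, a minimal sketch of feeding the documented `po_token` values through yt-dlp's Python API; the token strings are placeholders and the video URL is arbitrary:

```python
import yt_dlp

# Roughly equivalent to:
#   yt-dlp --extractor-args "youtube:po_token=web.gvs+XXX,web.subs+YYY" URL
# XXX/YYY are placeholder tokens; `subs` is the newly documented context.
ydl_opts = {
    'extractor_args': {
        'youtube': {'po_token': ['web.gvs+XXX', 'web.subs+YYY']},
    },
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=dQw4w9WgXcQ'])
```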

View File

@@ -2,6 +2,7 @@
set -e
source ~/.local/share/pipx/venvs/pyinstaller/bin/activate
+python -m devscripts.install_deps -o --include build
python -m devscripts.install_deps --include secretstorage --include curl-cffi
python -m devscripts.make_lazy_extractors
python devscripts/update-version.py -c "${channel}" -r "${origin}" "${version}"

View File

@@ -36,6 +36,9 @@ def main():
        f'--name={name}',
        '--icon=devscripts/logo.ico',
        '--upx-exclude=vcruntime140.dll',
+       # Ref: https://github.com/yt-dlp/yt-dlp/issues/13311
+       # https://github.com/pyinstaller/pyinstaller/issues/9149
+       '--exclude-module=pkg_resources',
        '--noconfirm',
        '--additional-hooks-dir=yt_dlp/__pyinstaller',
        *opts,

View File

@@ -65,7 +65,7 @@ build = [
    "build",
    "hatchling",
    "pip",
-   "setuptools>=71.0.2",  # 71.0.0 broke pyinstaller
+   "setuptools>=71.0.2,<81",  # See https://github.com/pyinstaller/pyinstaller/issues/9149
    "wheel",
]
dev = [
@@ -82,7 +82,7 @@ test = [
    "pytest-rerunfailures~=14.0",
]
pyinstaller = [
-   "pyinstaller>=6.11.1",  # Windows temp cleanup fixed in 6.11.1
+   "pyinstaller>=6.13.0",  # Windows temp cleanup fixed in 6.13.0
]

[project.urls]

View File

@@ -5,6 +5,8 @@ If a site is not listed here, it might still be supported by yt-dlp's embed extractors.
Not all sites listed here are guaranteed to work; websites are constantly changing and sometimes this breaks yt-dlp's support for them.
The only reliable way to check if a site is supported is to try it.

+- **10play**: [*10play*](## "netrc machine")
+- **10play:season**
- **17live**
- **17live:clip**
- **17live:vod**
@@ -246,7 +248,6 @@ The only reliable way to check if a site is supported is to try it.
- **Canalplus**: mycanal.fr and piwiplus.fr
- **Canalsurmas**
- **CaracolTvPlay**: [*caracoltv-play*](## "netrc machine")
-- **CartoonNetwork**
- **cbc.ca**
- **cbc.ca:player**
- **cbc.ca:player:playlist**
@@ -296,7 +297,7 @@ The only reliable way to check if a site is supported is to try it.
- **CNNIndonesia**
- **ComedyCentral**
- **ComedyCentralTV**
-- **ConanClassic**
+- **ConanClassic**: (**Currently broken**)
- **CondeNast**: Condé Nast media group: Allure, Architectural Digest, Ars Technica, Bon Appétit, Brides, Condé Nast, Condé Nast Traveler, Details, Epicurious, GQ, Glamour, Golf Digest, SELF, Teen Vogue, The New Yorker, Vanity Fair, Vogue, W Magazine, WIRED
- **CONtv**
- **CookingChannel**
@@ -318,7 +319,7 @@ The only reliable way to check if a site is supported is to try it.
- **CtsNews**: 華視新聞
- **CTV**
- **CTVNews**
-- **cu.ntv.co.jp**: Nippon Television Network
+- **cu.ntv.co.jp**: 日テレ無料TADA!
- **CultureUnplugged**
- **curiositystream**: [*curiositystream*](## "netrc machine")
- **curiositystream:collections**: [*curiositystream*](## "netrc machine")
@@ -649,7 +650,10 @@ The only reliable way to check if a site is supported is to try it.
- **jiocinema**: [*jiocinema*](## "netrc machine")
- **jiocinema:series**: [*jiocinema*](## "netrc machine")
- **jiosaavn:album**
+- **jiosaavn:artist**
- **jiosaavn:playlist**
+- **jiosaavn:show**
+- **jiosaavn:show:playlist**
- **jiosaavn:song**
- **Joj**
- **JoqrAg**: 超!A&G+ 文化放送 (f.k.a. AGQR) Nippon Cultural Broadcasting, Inc. (JOQR)
@@ -880,19 +884,19 @@ The only reliable way to check if a site is supported is to try it.
- **Naver**
- **Naver:live**
- **navernow**
-- **nba**
-- **nba:channel**
-- **nba:embed**
-- **nba:watch**
-- **nba:watch:collection**
-- **nba:watch:embed**
+- **nba**: (**Currently broken**)
+- **nba:channel**: (**Currently broken**)
+- **nba:embed**: (**Currently broken**)
+- **nba:watch**: (**Currently broken**)
+- **nba:watch:collection**: (**Currently broken**)
+- **nba:watch:embed**: (**Currently broken**)
- **NBC**
- **NBCNews**
- **nbcolympics**
-- **nbcolympics:stream**
-- **NBCSports**
-- **NBCSportsStream**
-- **NBCSportsVPlayer**
+- **nbcolympics:stream**: (**Currently broken**)
+- **NBCSports**: (**Currently broken**)
+- **NBCSportsStream**: (**Currently broken**)
+- **NBCSportsVPlayer**: (**Currently broken**)
- **NBCStations**
- **ndr**: NDR.de - Norddeutscher Rundfunk
- **ndr:embed**
@@ -968,7 +972,7 @@ The only reliable way to check if a site is supported is to try it.
- **Nitter**
- **njoy**: N-JOY
- **njoy:embed**
-- **NobelPrize**: (**Currently broken**)
+- **NobelPrize**
- **NoicePodcast**
- **NonkTube**
- **NoodleMagazine**
@@ -1081,8 +1085,8 @@ The only reliable way to check if a site is supported is to try it.
- **Photobucket**
- **PiaLive**
- **Piapro**: [*piapro*](## "netrc machine")
-- **Picarto**
-- **PicartoVod**
+- **picarto**
+- **picarto:vod**
- **Piksel**
- **Pinkbike**
- **Pinterest**
@@ -1390,16 +1394,15 @@ The only reliable way to check if a site is supported is to try it.
- **Spreaker**
- **SpreakerShow**
- **SpringboardPlatform**
-- **Sprout**
- **SproutVideo**
-- **sr:mediathek**: Saarländischer Rundfunk (**Currently broken**)
+- **sr:mediathek**: Saarländischer Rundfunk
- **SRGSSR**
- **SRGSSRPlay**: srf.ch, rts.ch, rsi.ch, rtr.ch and swissinfo.ch play sites
- **StacommuLive**: [*stacommu*](## "netrc machine")
- **StacommuVOD**: [*stacommu*](## "netrc machine")
- **StagePlusVODConcert**: [*stageplus*](## "netrc machine")
- **stanfordoc**: Stanford Open ClassRoom
-- **StarTrek**: (**Currently broken**)
+- **startrek**: STAR TREK
- **startv**
- **Steam**
- **SteamCommunityBroadcast**
@@ -1422,12 +1425,11 @@ The only reliable way to check if a site is supported is to try it.
- **SunPorno**
- **sverigesradio:episode**
- **sverigesradio:publication**
-- **SVT**
-- **SVTPage**
-- **SVTPlay**: SVT Play and Öppet arkiv
-- **SVTSeries**
+- **svt:page**
+- **svt:play**: SVT Play and Öppet arkiv
+- **svt:play:series**
- **SwearnetEpisode**
-- **Syfy**: (**Currently broken**)
+- **Syfy**
- **SYVDK**
- **SztvHu**
- **t-online.de**: (**Currently broken**)
@ -1471,8 +1473,6 @@ The only reliable way to check if a site is supported is to try it.
- **Telewebion**: (**Currently broken**) - **Telewebion**: (**Currently broken**)
- **Tempo** - **Tempo**
- **TennisTV**: [*tennistv*](## "netrc machine") - **TennisTV**: [*tennistv*](## "netrc machine")
- **TenPlay**: [*10play*](## "netrc machine")
- **TenPlaySeason**
- **TF1** - **TF1**
- **TFO** - **TFO**
- **theatercomplextown:ppv**: [*theatercomplextown*](## "netrc machine") - **theatercomplextown:ppv**: [*theatercomplextown*](## "netrc machine")
@ -1510,6 +1510,7 @@ The only reliable way to check if a site is supported is to try it.
- **tokfm:podcast** - **tokfm:podcast**
- **ToonGoggles** - **ToonGoggles**
- **tou.tv**: [*toutv*](## "netrc machine") - **tou.tv**: [*toutv*](## "netrc machine")
- **toutiao**: 今日头条
- **Toypics**: Toypics video (**Currently broken**) - **Toypics**: Toypics video (**Currently broken**)
- **ToypicsUser**: Toypics user profile (**Currently broken**) - **ToypicsUser**: Toypics user profile (**Currently broken**)
- **TrailerAddict**: (**Currently broken**) - **TrailerAddict**: (**Currently broken**)
@ -1599,7 +1600,7 @@ The only reliable way to check if a site is supported is to try it.
- **UKTVPlay** - **UKTVPlay**
- **UlizaPlayer** - **UlizaPlayer**
- **UlizaPortal**: ulizaportal.jp - **UlizaPortal**: ulizaportal.jp
- **umg:de**: Universal Music Deutschland (**Currently broken**) - **umg:de**: Universal Music Deutschland
- **Unistra** - **Unistra**
- **Unity**: (**Currently broken**) - **Unity**: (**Currently broken**)
- **uol.com.br** - **uol.com.br**
@ -1622,9 +1623,9 @@ The only reliable way to check if a site is supported is to try it.
- **VGTV**: VGTV, BTTV, FTV, Aftenposten and Aftonbladet - **VGTV**: VGTV, BTTV, FTV, Aftenposten and Aftonbladet
- **vh1.com** - **vh1.com**
- **vhx:embed**: [*vimeo*](## "netrc machine") - **vhx:embed**: [*vimeo*](## "netrc machine")
- **vice** - **vice**: (**Currently broken**)
- **vice:article** - **vice:article**: (**Currently broken**)
- **vice:show** - **vice:show**: (**Currently broken**)
- **Viddler** - **Viddler**
- **Videa** - **Videa**
- **video.arnes.si**: Arnes Video - **video.arnes.si**: Arnes Video
@ -1656,6 +1657,7 @@ The only reliable way to check if a site is supported is to try it.
- **vimeo**: [*vimeo*](## "netrc machine") - **vimeo**: [*vimeo*](## "netrc machine")
- **vimeo:album**: [*vimeo*](## "netrc machine") - **vimeo:album**: [*vimeo*](## "netrc machine")
- **vimeo:channel**: [*vimeo*](## "netrc machine") - **vimeo:channel**: [*vimeo*](## "netrc machine")
- **vimeo:event**: [*vimeo*](## "netrc machine")
- **vimeo:group**: [*vimeo*](## "netrc machine") - **vimeo:group**: [*vimeo*](## "netrc machine")
- **vimeo:likes**: [*vimeo*](## "netrc machine") Vimeo user likes - **vimeo:likes**: [*vimeo*](## "netrc machine") Vimeo user likes
- **vimeo:ondemand**: [*vimeo*](## "netrc machine") - **vimeo:ondemand**: [*vimeo*](## "netrc machine")


@@ -314,6 +314,20 @@ class TestInfoExtractor(unittest.TestCase):
},
{},
),
(
# test thumbnail_url key without URL scheme
r'''
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "VideoObject",
"thumbnail_url": "//www.nobelprize.org/images/12693-landscape-medium-gallery.jpg"
}</script>''',
{
'thumbnails': [{'url': 'https://www.nobelprize.org/images/12693-landscape-medium-gallery.jpg'}],
},
{},
),
]
for html, expected_dict, search_json_ld_kwargs in _TESTS:
expect_dict(
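The added case pins down normalization of scheme-relative `thumbnail_url` values. A minimal sketch of the asserted behaviour — hypothetical harness code, with `ie` an InfoExtractor instance as in the test class:

```python
# Scheme-relative URLs ("//host/path") should come back from _search_json_ld
# with an explicit https scheme, mirroring the fixture above.
html = '''<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "VideoObject",
 "thumbnail_url": "//www.nobelprize.org/images/12693-landscape-medium-gallery.jpg"}
</script>'''
info = ie._search_json_ld(html, None, fatal=False)
assert info['thumbnails'][0]['url'] == 'https://www.nobelprize.org/images/12693-landscape-medium-gallery.jpg'
```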
@@ -1933,6 +1947,137 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
with self.assertWarns(DeprecationWarning):
self.assertEqual(self.ie._search_nextjs_data('', None, default='{}'), {})
def test_search_nuxt_json(self):
HTML_TMPL = '<script data-ssr="true" id="__NUXT_DATA__" type="application/json">[{}]</script>'
VALID_DATA = '''
["ShallowReactive",1],
{"data":2,"state":21,"once":25,"_errors":28,"_server_errors":30},
["ShallowReactive",3],
{"$abcdef123456":4},
{"podcast":5,"activeEpisodeData":7},
{"podcast":6,"seasons":14},
{"title":10,"id":11},
["Reactive",8],
{"episode":9,"creators":18,"empty_list":20},
{"title":12,"id":13,"refs":34,"empty_refs":35},
"Series Title",
"podcast-id-01",
"Episode Title",
"episode-id-99",
[15,16,17],
1,
2,
3,
[19],
"Podcast Creator",
[],
{"$ssite-config":22},
{"env":23,"name":24,"map":26,"numbers":14},
"production",
"podcast-website",
["Set"],
["Reactive",27],
["Map"],
["ShallowReactive",29],
{},
["NuxtError",31],
{"status":32,"message":33},
503,
"Service Unavailable",
[36,37],
[38,39],
["Ref",40],
["ShallowRef",41],
["EmptyRef",42],
["EmptyShallowRef",43],
"ref",
"shallow_ref",
"{\\"ref\\":1}",
"{\\"shallow_ref\\":2}"
'''
PAYLOAD = {
'data': {
'$abcdef123456': {
'podcast': {
'podcast': {
'title': 'Series Title',
'id': 'podcast-id-01',
},
'seasons': [1, 2, 3],
},
'activeEpisodeData': {
'episode': {
'title': 'Episode Title',
'id': 'episode-id-99',
'refs': ['ref', 'shallow_ref'],
'empty_refs': [{'ref': 1}, {'shallow_ref': 2}],
},
'creators': ['Podcast Creator'],
'empty_list': [],
},
},
},
'state': {
'$ssite-config': {
'env': 'production',
'name': 'podcast-website',
'map': [],
'numbers': [1, 2, 3],
},
},
'once': [],
'_errors': {},
'_server_errors': {
'status': 503,
'message': 'Service Unavailable',
},
}
PARTIALLY_INVALID = [(
'''
{"data":1},
{"invalid_raw_list":2},
[15,16,17]
''',
{'data': {'invalid_raw_list': [None, None, None]}},
), (
'''
{"data":1},
["EmptyRef",2],
"not valid JSON"
''',
{'data': None},
), (
'''
{"data":1},
["EmptyShallowRef",2],
"not valid JSON"
''',
{'data': None},
)]
INVALID = [
'''
[]
''',
'''
["unsupported",1],
{"data":2},
{}
''',
]
DEFAULT = object()
self.assertEqual(self.ie._search_nuxt_json(HTML_TMPL.format(VALID_DATA), None), PAYLOAD)
self.assertEqual(self.ie._search_nuxt_json('', None, fatal=False), {})
self.assertIs(self.ie._search_nuxt_json('', None, default=DEFAULT), DEFAULT)
for data, expected in PARTIALLY_INVALID:
self.assertEqual(
self.ie._search_nuxt_json(HTML_TMPL.format(data), None, fatal=False), expected)
for data in INVALID:
self.assertIs(
self.ie._search_nuxt_json(HTML_TMPL.format(data), None, default=DEFAULT), DEFAULT)
if __name__ == '__main__':
unittest.main()
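A hedged sketch of how an extractor might call the helper exercised above; `webpage` and `video_id` are assumed to be in scope, and the key names simply mirror the test fixture:

```python
# Hypothetical extractor snippet: resolve the flattened __NUXT_DATA__
# payload, then walk the plain dicts/lists it returns.
payload = self._search_nuxt_json(webpage, video_id, fatal=False)
episode = payload.get('data', {}).get('$abcdef123456', {}).get('activeEpisodeData', {}).get('episode', {})
title = episode.get('title')  # 'Episode Title' for the fixture above
```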


@@ -1435,6 +1435,27 @@ class TestYoutubeDL(unittest.TestCase):
FakeYDL().close()
assert all_plugins_loaded.value
def test_close_hooks(self):
# Should call all registered close hooks on close
close_hook_called = False
close_hook_two_called = False
def close_hook():
nonlocal close_hook_called
close_hook_called = True
def close_hook_two():
nonlocal close_hook_two_called
close_hook_two_called = True
ydl = FakeYDL()
ydl.add_close_hook(close_hook)
ydl.add_close_hook(close_hook_two)
ydl.close()
self.assertTrue(close_hook_called, 'Close hook was not called')
self.assertTrue(close_hook_two_called, 'Close hook two was not called')
if __name__ == '__main__':
unittest.main()


@@ -58,6 +58,14 @@ class TestCookies(unittest.TestCase):
({'DESKTOP_SESSION': 'kde'}, _LinuxDesktopEnvironment.KDE3),
({'DESKTOP_SESSION': 'xfce'}, _LinuxDesktopEnvironment.XFCE),
({'XDG_CURRENT_DESKTOP': 'my_custom_de', 'DESKTOP_SESSION': 'gnome'}, _LinuxDesktopEnvironment.GNOME),
({'XDG_CURRENT_DESKTOP': 'my_custom_de', 'DESKTOP_SESSION': 'mate'}, _LinuxDesktopEnvironment.GNOME),
({'XDG_CURRENT_DESKTOP': 'my_custom_de', 'DESKTOP_SESSION': 'kde4'}, _LinuxDesktopEnvironment.KDE4),
({'XDG_CURRENT_DESKTOP': 'my_custom_de', 'DESKTOP_SESSION': 'kde'}, _LinuxDesktopEnvironment.KDE3),
({'XDG_CURRENT_DESKTOP': 'my_custom_de', 'DESKTOP_SESSION': 'xfce'}, _LinuxDesktopEnvironment.XFCE),
({'XDG_CURRENT_DESKTOP': 'my_custom_de', 'DESKTOP_SESSION': 'my_custom_de', 'GNOME_DESKTOP_SESSION_ID': 1}, _LinuxDesktopEnvironment.GNOME),
({'GNOME_DESKTOP_SESSION_ID': 1}, _LinuxDesktopEnvironment.GNOME),
({'KDE_FULL_SESSION': 1}, _LinuxDesktopEnvironment.KDE3),
({'KDE_FULL_SESSION': 1, 'DESKTOP_SESSION': 'kde4'}, _LinuxDesktopEnvironment.KDE4),

test/test_devalue.py (new file, 235 lines)

@@ -0,0 +1,235 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import datetime as dt
import json
import math
import re
import unittest
from yt_dlp.utils.jslib import devalue
TEST_CASES_EQUALS = [{
'name': 'int',
'unparsed': [-42],
'parsed': -42,
}, {
'name': 'str',
'unparsed': ['woo!!!'],
'parsed': 'woo!!!',
}, {
'name': 'Number',
'unparsed': [['Object', 42]],
'parsed': 42,
}, {
'name': 'String',
'unparsed': [['Object', 'yar']],
'parsed': 'yar',
}, {
'name': 'Infinity',
'unparsed': -4,
'parsed': math.inf,
}, {
'name': 'negative Infinity',
'unparsed': -5,
'parsed': -math.inf,
}, {
'name': 'negative zero',
'unparsed': -6,
'parsed': -0.0,
}, {
'name': 'RegExp',
'unparsed': [['RegExp', 'regexp', 'gim']], # XXX: flags are ignored
'parsed': re.compile('regexp'),
}, {
'name': 'Date',
'unparsed': [['Date', '2001-09-09T01:46:40.000Z']],
'parsed': dt.datetime.fromtimestamp(1e9, tz=dt.timezone.utc),
}, {
'name': 'Array',
'unparsed': [[1, 2, 3], 'a', 'b', 'c'],
'parsed': ['a', 'b', 'c'],
}, {
'name': 'Array (empty)',
'unparsed': [[]],
'parsed': [],
}, {
'name': 'Array (sparse)',
'unparsed': [[-2, 1, -2], 'b'],
'parsed': [None, 'b', None],
}, {
'name': 'Object',
'unparsed': [{'foo': 1, 'x-y': 2}, 'bar', 'z'],
'parsed': {'foo': 'bar', 'x-y': 'z'},
}, {
'name': 'Set',
'unparsed': [['Set', 1, 2, 3], 1, 2, 3],
'parsed': [1, 2, 3],
}, {
'name': 'Map',
'unparsed': [['Map', 1, 2], 'a', 'b'],
'parsed': [['a', 'b']],
}, {
'name': 'BigInt',
'unparsed': [['BigInt', '1']],
'parsed': 1,
}, {
'name': 'Uint8Array',
'unparsed': [['Uint8Array', 'AQID']],
'parsed': [1, 2, 3],
}, {
'name': 'ArrayBuffer',
'unparsed': [['ArrayBuffer', 'AQID']],
'parsed': [1, 2, 3],
}, {
'name': 'str (repetition)',
'unparsed': [[1, 1], 'a string'],
'parsed': ['a string', 'a string'],
}, {
'name': 'None (repetition)',
'unparsed': [[1, 1], None],
'parsed': [None, None],
}, {
'name': 'dict (repetition)',
'unparsed': [[1, 1], {}],
'parsed': [{}, {}],
}, {
'name': 'Object without prototype',
'unparsed': [['null']],
'parsed': {},
}, {
'name': 'cross-realm POJO',
'unparsed': [{}],
'parsed': {},
}]
TEST_CASES_IS = [{
'name': 'bool',
'unparsed': [True],
'parsed': True,
}, {
'name': 'Boolean',
'unparsed': [['Object', False]],
'parsed': False,
}, {
'name': 'undefined',
'unparsed': -1,
'parsed': None,
}, {
'name': 'null',
'unparsed': [None],
'parsed': None,
}, {
'name': 'NaN',
'unparsed': -3,
'parsed': math.nan,
}]
TEST_CASES_INVALID = [{
'name': 'empty string',
'unparsed': '',
'error': ValueError,
'pattern': r'expected int or list as input',
}, {
'name': 'hole',
'unparsed': -2,
'error': ValueError,
'pattern': r'invalid integer input',
}, {
'name': 'string',
'unparsed': 'hello',
'error': ValueError,
'pattern': r'expected int or list as input',
}, {
'name': 'number',
'unparsed': 42,
'error': ValueError,
'pattern': r'invalid integer input',
}, {
'name': 'boolean',
'unparsed': True,
'error': ValueError,
'pattern': r'expected int or list as input',
}, {
'name': 'null',
'unparsed': None,
'error': ValueError,
'pattern': r'expected int or list as input',
}, {
'name': 'object',
'unparsed': {},
'error': ValueError,
'pattern': r'expected int or list as input',
}, {
'name': 'empty array',
'unparsed': [],
'error': ValueError,
'pattern': r'expected a non-empty list as input',
}, {
'name': 'Python negative indexing',
'unparsed': [[1, 2, 3, 4, 5, 6, 7, -7], 1, 2, 3, 4, 5, 6, 7],
'error': IndexError,
'pattern': r'invalid index: -7',
}]
class TestDevalue(unittest.TestCase):
def test_devalue_parse_equals(self):
for tc in TEST_CASES_EQUALS:
self.assertEqual(devalue.parse(tc['unparsed']), tc['parsed'], tc['name'])
def test_devalue_parse_is(self):
for tc in TEST_CASES_IS:
self.assertIs(devalue.parse(tc['unparsed']), tc['parsed'], tc['name'])
def test_devalue_parse_invalid(self):
for tc in TEST_CASES_INVALID:
with self.assertRaisesRegex(tc['error'], tc['pattern'], msg=tc['name']):
devalue.parse(tc['unparsed'])
def test_devalue_parse_cyclical(self):
name = 'Map (cyclical)'
result = devalue.parse([['Map', 1, 0], 'self'])
self.assertEqual(result[0][0], 'self', name)
self.assertIs(result, result[0][1], name)
name = 'Set (cyclical)'
result = devalue.parse([['Set', 0, 1], 42])
self.assertEqual(result[1], 42, name)
self.assertIs(result, result[0], name)
result = devalue.parse([[0]])
self.assertIs(result, result[0], 'Array (cyclical)')
name = 'Object (cyclical)'
result = devalue.parse([{'self': 0}])
self.assertIs(result, result['self'], name)
name = 'Object with null prototype (cyclical)'
result = devalue.parse([['null', 'self', 0]])
self.assertIs(result, result['self'], name)
name = 'Objects (cyclical)'
result = devalue.parse([[1, 2], {'second': 2}, {'first': 1}])
self.assertIs(result[0], result[1]['first'], name)
self.assertIs(result[1], result[0]['second'], name)
def test_devalue_parse_revivers(self):
self.assertEqual(
devalue.parse([['indirect', 1], {'a': 2}, 'b'], revivers={'indirect': lambda x: x}),
{'a': 'b'}, 'revivers (indirect)')
self.assertEqual(
devalue.parse([['parse', 1], '{"a":0}'], revivers={'parse': lambda x: json.loads(x)}),
{'a': 0}, 'revivers (parse)')
if __name__ == '__main__':
unittest.main()
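Distilled from the cases above, a small usage sketch of the new parser:

```python
import json

from yt_dlp.utils.jslib import devalue

# devalue's flat format: index 0 holds the root value; objects map keys to
# the indices of their values, so [{'foo': 1}, 'bar'] decodes to {'foo': 'bar'}
assert devalue.parse([{'foo': 1}, 'bar']) == {'foo': 'bar'}
# repeated indices decode to the same underlying value
assert devalue.parse([[1, 1], 'a string']) == ['a string', 'a string']
# non-JSON types are decoded via revivers, as in the last test above
assert devalue.parse([['parse', 1], '{"a":0}'], revivers={'parse': json.loads}) == {'a': 0}
```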


@@ -478,6 +478,14 @@ class TestJSInterpreter(unittest.TestCase):
func = jsi.extract_function('c', {'e': 10}, {'f': 100, 'g': 1000})
self.assertEqual(func([1]), 1111)
def test_increment_decrement(self):
self._test('function f() { var x = 1; return ++x; }', 2)
self._test('function f() { var x = 1; return x++; }', 1)
self._test('function f() { var x = 1; x--; return x }', 0)
self._test('function f() { var y; var x = 1; x++, --x, x--, x--, y="z", "abc", x++; return --x }', -1)
self._test('function f() { var a = "test--"; return a; }', 'test--')
self._test('function f() { var b = 1; var a = "b--"; return a; }', 'b--')
if __name__ == '__main__':
unittest.main()


@@ -20,7 +20,6 @@ from yt_dlp.networking._helper import (
add_accept_encoding_header,
get_redirect_method,
make_socks_proxy_opts,
select_proxy,
ssl_load_certs,
)
from yt_dlp.networking.exceptions import (
@@ -28,7 +27,7 @@ from yt_dlp.networking.exceptions import (
IncompleteRead,
)
from yt_dlp.socks import ProxyType
from yt_dlp.utils.networking import HTTPHeaderDict → from yt_dlp.utils.networking import HTTPHeaderDict, select_proxy
TEST_DIR = os.path.dirname(os.path.abspath(__file__))


@@ -8,6 +8,8 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import subprocess
from yt_dlp import YoutubeDL
from yt_dlp.utils import shell_quote
from yt_dlp.postprocessor import (
@@ -47,7 +49,18 @@ class TestConvertThumbnail(unittest.TestCase):
print('Skipping: ffmpeg not found')
return
file = 'test/testdata/thumbnails/foo %d bar/foo_%d.{}' → test_data_dir = 'test/testdata/thumbnails'
generated_file = f'{test_data_dir}/empty.webp'
subprocess.check_call([
pp.executable, '-y', '-f', 'lavfi', '-i', 'color=c=black:s=320x320',
'-c:v', 'libwebp', '-pix_fmt', 'yuv420p', '-vframes', '1', generated_file,
], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
file = test_data_dir + '/foo %d bar/foo_%d.{}'
initial_file = file.format('webp')
os.replace(generated_file, initial_file)
tests = (('webp', 'png'), ('png', 'jpg'))
for inp, out in tests:
@@ -55,11 +68,13 @@ class TestConvertThumbnail(unittest.TestCase):
if os.path.exists(out_file):
os.remove(out_file)
pp.convert_thumbnail(file.format(inp), out)
assert os.path.exists(out_file) → self.assertTrue(os.path.exists(out_file))
for _, out in tests:
os.remove(file.format(out))
os.remove(initial_file)
class TestExec(unittest.TestCase):
def test_parse_cmd(self):
@@ -610,3 +625,7 @@ outpoint 10.000000
self.assertEqual(
r"'special '\'' characters '\'' galore'\'\'\'",
self._pp._quote_for_ffmpeg("special ' characters ' galore'''"))
if __name__ == '__main__':
unittest.main()

test/test_pot/conftest.py (new file, 71 lines)

@@ -0,0 +1,71 @@
import collections
import pytest
from yt_dlp import YoutubeDL
from yt_dlp.cookies import YoutubeDLCookieJar
from yt_dlp.extractor.common import InfoExtractor
from yt_dlp.extractor.youtube.pot._provider import IEContentProviderLogger
from yt_dlp.extractor.youtube.pot.provider import PoTokenRequest, PoTokenContext
from yt_dlp.utils.networking import HTTPHeaderDict
class MockLogger(IEContentProviderLogger):
log_level = IEContentProviderLogger.LogLevel.TRACE
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.messages = collections.defaultdict(list)
def trace(self, message: str):
self.messages['trace'].append(message)
def debug(self, message: str):
self.messages['debug'].append(message)
def info(self, message: str):
self.messages['info'].append(message)
def warning(self, message: str, *, once=False):
self.messages['warning'].append(message)
def error(self, message: str):
self.messages['error'].append(message)
@pytest.fixture
def ie() -> InfoExtractor:
ydl = YoutubeDL()
return ydl.get_info_extractor('Youtube')
@pytest.fixture
def logger() -> MockLogger:
return MockLogger()
@pytest.fixture()
def pot_request() -> PoTokenRequest:
return PoTokenRequest(
context=PoTokenContext.GVS,
innertube_context={'client': {'clientName': 'WEB'}},
innertube_host='youtube.com',
session_index=None,
player_url=None,
is_authenticated=False,
video_webpage=None,
visitor_data='example-visitor-data',
data_sync_id='example-data-sync-id',
video_id='example-video-id',
request_cookiejar=YoutubeDLCookieJar(),
request_proxy=None,
request_headers=HTTPHeaderDict(),
request_timeout=None,
request_source_address=None,
request_verify_tls=True,
bypass_cache=False,
)


@@ -0,0 +1,117 @@
import threading
import time
from collections import OrderedDict
import pytest
from yt_dlp.extractor.youtube.pot._provider import IEContentProvider, BuiltinIEContentProvider
from yt_dlp.utils import bug_reports_message
from yt_dlp.extractor.youtube.pot._builtin.memory_cache import MemoryLRUPCP, memorylru_preference, initialize_global_cache
from yt_dlp.version import __version__
from yt_dlp.extractor.youtube.pot._registry import _pot_cache_providers, _pot_memory_cache
class TestMemoryLRUPCS:
def test_base_type(self):
assert issubclass(MemoryLRUPCP, IEContentProvider)
assert issubclass(MemoryLRUPCP, BuiltinIEContentProvider)
@pytest.fixture
def pcp(self, ie, logger) -> MemoryLRUPCP:
return MemoryLRUPCP(ie, logger, {}, initialize_cache=lambda max_size: (OrderedDict(), threading.Lock(), max_size))
def test_is_registered(self):
assert _pot_cache_providers.value.get('MemoryLRU') == MemoryLRUPCP
def test_initialization(self, pcp):
assert pcp.PROVIDER_NAME == 'memory'
assert pcp.PROVIDER_VERSION == __version__
assert pcp.BUG_REPORT_MESSAGE == bug_reports_message(before='')
assert pcp.is_available()
def test_store_and_get(self, pcp):
pcp.store('key1', 'value1', int(time.time()) + 60)
assert pcp.get('key1') == 'value1'
assert len(pcp.cache) == 1
def test_store_ignore_expired(self, pcp):
pcp.store('key1', 'value1', int(time.time()) - 1)
assert len(pcp.cache) == 0
assert pcp.get('key1') is None
assert len(pcp.cache) == 0
def test_store_override_existing_key(self, ie, logger):
MAX_SIZE = 2
pcp = MemoryLRUPCP(ie, logger, {}, initialize_cache=lambda max_size: (OrderedDict(), threading.Lock(), MAX_SIZE))
pcp.store('key1', 'value1', int(time.time()) + 60)
pcp.store('key2', 'value2', int(time.time()) + 60)
assert len(pcp.cache) == 2
pcp.store('key1', 'value2', int(time.time()) + 60)
# Ensure that the override key gets added to the end of the cache instead of in the same position
pcp.store('key3', 'value3', int(time.time()) + 60)
assert pcp.get('key1') == 'value2'
def test_store_ignore_expired_existing_key(self, pcp):
pcp.store('key1', 'value2', int(time.time()) + 60)
pcp.store('key1', 'value1', int(time.time()) - 1)
assert len(pcp.cache) == 1
assert pcp.get('key1') == 'value2'
assert len(pcp.cache) == 1
def test_get_key_expired(self, pcp):
pcp.store('key1', 'value1', int(time.time()) + 60)
assert pcp.get('key1') == 'value1'
assert len(pcp.cache) == 1
pcp.cache['key1'] = ('value1', int(time.time()) - 1)
assert pcp.get('key1') is None
assert len(pcp.cache) == 0
def test_lru_eviction(self, ie, logger):
MAX_SIZE = 2
provider = MemoryLRUPCP(ie, logger, {}, initialize_cache=lambda max_size: (OrderedDict(), threading.Lock(), MAX_SIZE))
provider.store('key1', 'value1', int(time.time()) + 5)
provider.store('key2', 'value2', int(time.time()) + 5)
assert len(provider.cache) == 2
assert provider.get('key1') == 'value1'
provider.store('key3', 'value3', int(time.time()) + 5)
assert len(provider.cache) == 2
assert provider.get('key2') is None
provider.store('key4', 'value4', int(time.time()) + 5)
assert len(provider.cache) == 2
assert provider.get('key1') is None
assert provider.get('key3') == 'value3'
assert provider.get('key4') == 'value4'
def test_delete(self, pcp):
pcp.store('key1', 'value1', int(time.time()) + 5)
assert len(pcp.cache) == 1
assert pcp.get('key1') == 'value1'
pcp.delete('key1')
assert len(pcp.cache) == 0
assert pcp.get('key1') is None
def test_use_global_cache_default(self, ie, logger):
pcp = MemoryLRUPCP(ie, logger, {})
assert pcp.max_size == _pot_memory_cache.value['max_size'] == 25
assert pcp.cache is _pot_memory_cache.value['cache']
assert pcp.lock is _pot_memory_cache.value['lock']
pcp2 = MemoryLRUPCP(ie, logger, {})
assert pcp.max_size == pcp2.max_size == _pot_memory_cache.value['max_size'] == 25
assert pcp.cache is pcp2.cache is _pot_memory_cache.value['cache']
assert pcp.lock is pcp2.lock is _pot_memory_cache.value['lock']
def test_fail_max_size_change_global(self, ie, logger):
pcp = MemoryLRUPCP(ie, logger, {})
assert pcp.max_size == _pot_memory_cache.value['max_size'] == 25
with pytest.raises(ValueError, match='Cannot change max_size of initialized global memory cache'):
initialize_global_cache(50)
assert pcp.max_size == _pot_memory_cache.value['max_size'] == 25
def test_memory_lru_preference(self, pcp, ie, pot_request):
assert memorylru_preference(pcp, pot_request) == 10000


@@ -0,0 +1,47 @@
import pytest
from yt_dlp.extractor.youtube.pot.provider import (
PoTokenContext,
)
from yt_dlp.extractor.youtube.pot.utils import get_webpo_content_binding, ContentBindingType
class TestGetWebPoContentBinding:
@pytest.mark.parametrize('client_name, context, is_authenticated, expected', [
*[(client, context, is_authenticated, expected) for client in [
'WEB', 'MWEB', 'TVHTML5', 'WEB_EMBEDDED_PLAYER', 'WEB_CREATOR', 'TVHTML5_SIMPLY_EMBEDDED_PLAYER', 'TVHTML5_SIMPLY']
for context, is_authenticated, expected in [
(PoTokenContext.GVS, False, ('example-visitor-data', ContentBindingType.VISITOR_DATA)),
(PoTokenContext.PLAYER, False, ('example-video-id', ContentBindingType.VIDEO_ID)),
(PoTokenContext.SUBS, False, ('example-video-id', ContentBindingType.VIDEO_ID)),
(PoTokenContext.GVS, True, ('example-data-sync-id', ContentBindingType.DATASYNC_ID)),
]],
('WEB_REMIX', PoTokenContext.GVS, False, ('example-visitor-data', ContentBindingType.VISITOR_DATA)),
('WEB_REMIX', PoTokenContext.PLAYER, False, ('example-visitor-data', ContentBindingType.VISITOR_DATA)),
('ANDROID', PoTokenContext.GVS, False, (None, None)),
('IOS', PoTokenContext.GVS, False, (None, None)),
])
def test_get_webpo_content_binding(self, pot_request, client_name, context, is_authenticated, expected):
pot_request.innertube_context['client']['clientName'] = client_name
pot_request.context = context
pot_request.is_authenticated = is_authenticated
assert get_webpo_content_binding(pot_request) == expected
def test_extract_visitor_id(self, pot_request):
pot_request.visitor_data = 'CgsxMjNhYmNYWVpfLSiA4s%2DqBg%3D%3D'
assert get_webpo_content_binding(pot_request, bind_to_visitor_id=True) == ('123abcXYZ_-', ContentBindingType.VISITOR_ID)
def test_invalid_visitor_id(self, pot_request):
# visitor id not alphanumeric (i.e. protobuf extraction failed)
pot_request.visitor_data = 'CggxMjM0NTY3OCiA4s-qBg%3D%3D'
assert get_webpo_content_binding(pot_request, bind_to_visitor_id=True) == (pot_request.visitor_data, ContentBindingType.VISITOR_DATA)
def test_no_visitor_id(self, pot_request):
pot_request.visitor_data = 'KIDiz6oG'
assert get_webpo_content_binding(pot_request, bind_to_visitor_id=True) == (pot_request.visitor_data, ContentBindingType.VISITOR_DATA)
def test_invalid_base64(self, pot_request):
pot_request.visitor_data = 'invalid-base64'
assert get_webpo_content_binding(pot_request, bind_to_visitor_id=True) == (pot_request.visitor_data, ContentBindingType.VISITOR_DATA)


@@ -0,0 +1,92 @@
import pytest
from yt_dlp.extractor.youtube.pot._provider import IEContentProvider, BuiltinIEContentProvider
from yt_dlp.extractor.youtube.pot.cache import CacheProviderWritePolicy
from yt_dlp.utils import bug_reports_message
from yt_dlp.extractor.youtube.pot.provider import (
PoTokenRequest,
PoTokenContext,
)
from yt_dlp.version import __version__
from yt_dlp.extractor.youtube.pot._builtin.webpo_cachespec import WebPoPCSP
from yt_dlp.extractor.youtube.pot._registry import _pot_pcs_providers
@pytest.fixture()
def pot_request(pot_request) -> PoTokenRequest:
pot_request.visitor_data = 'CgsxMjNhYmNYWVpfLSiA4s%2DqBg%3D%3D' # visitor_id=123abcXYZ_-
return pot_request
class TestWebPoPCSP:
def test_base_type(self):
assert issubclass(WebPoPCSP, IEContentProvider)
assert issubclass(WebPoPCSP, BuiltinIEContentProvider)
def test_init(self, ie, logger):
pcs = WebPoPCSP(ie=ie, logger=logger, settings={})
assert pcs.PROVIDER_NAME == 'webpo'
assert pcs.PROVIDER_VERSION == __version__
assert pcs.BUG_REPORT_MESSAGE == bug_reports_message(before='')
assert pcs.is_available()
def test_is_registered(self):
assert _pot_pcs_providers.value.get('WebPo') == WebPoPCSP
@pytest.mark.parametrize('client_name, context, is_authenticated', [
('ANDROID', PoTokenContext.GVS, False),
('IOS', PoTokenContext.GVS, False),
('IOS', PoTokenContext.PLAYER, False),
])
def test_not_supports(self, ie, logger, pot_request, client_name, context, is_authenticated):
pcs = WebPoPCSP(ie=ie, logger=logger, settings={})
pot_request.innertube_context['client']['clientName'] = client_name
pot_request.context = context
pot_request.is_authenticated = is_authenticated
assert pcs.generate_cache_spec(pot_request) is None
@pytest.mark.parametrize('client_name, context, is_authenticated, remote_host, source_address, request_proxy, expected', [
*[(client, context, is_authenticated, remote_host, source_address, request_proxy, expected) for client in [
'WEB', 'MWEB', 'TVHTML5', 'WEB_EMBEDDED_PLAYER', 'WEB_CREATOR', 'TVHTML5_SIMPLY_EMBEDDED_PLAYER', 'TVHTML5_SIMPLY']
for context, is_authenticated, remote_host, source_address, request_proxy, expected in [
(PoTokenContext.GVS, False, 'example-remote-host', 'example-source-address', 'example-request-proxy', {'t': 'webpo', 'ip': 'example-remote-host', 'sa': 'example-source-address', 'px': 'example-request-proxy', 'cb': '123abcXYZ_-', 'cbt': 'visitor_id'}),
(PoTokenContext.PLAYER, False, 'example-remote-host', 'example-source-address', 'example-request-proxy', {'t': 'webpo', 'ip': 'example-remote-host', 'sa': 'example-source-address', 'px': 'example-request-proxy', 'cb': '123abcXYZ_-', 'cbt': 'video_id'}),
(PoTokenContext.GVS, True, 'example-remote-host', 'example-source-address', 'example-request-proxy', {'t': 'webpo', 'ip': 'example-remote-host', 'sa': 'example-source-address', 'px': 'example-request-proxy', 'cb': 'example-data-sync-id', 'cbt': 'datasync_id'}),
]],
('WEB_REMIX', PoTokenContext.PLAYER, False, 'example-remote-host', 'example-source-address', 'example-request-proxy', {'t': 'webpo', 'ip': 'example-remote-host', 'sa': 'example-source-address', 'px': 'example-request-proxy', 'cb': '123abcXYZ_-', 'cbt': 'visitor_id'}),
('WEB', PoTokenContext.GVS, False, None, None, None, {'t': 'webpo', 'cb': '123abcXYZ_-', 'cbt': 'visitor_id', 'ip': None, 'sa': None, 'px': None}),
('TVHTML5', PoTokenContext.PLAYER, False, None, None, 'http://example.com', {'t': 'webpo', 'cb': '123abcXYZ_-', 'cbt': 'video_id', 'ip': None, 'sa': None, 'px': 'http://example.com'}),
])
def test_generate_key_bindings(self, ie, logger, pot_request, client_name, context, is_authenticated, remote_host, source_address, request_proxy, expected):
pcs = WebPoPCSP(ie=ie, logger=logger, settings={})
pot_request.innertube_context['client']['clientName'] = client_name
pot_request.context = context
pot_request.is_authenticated = is_authenticated
pot_request.innertube_context['client']['remoteHost'] = remote_host
pot_request.request_source_address = source_address
pot_request.request_proxy = request_proxy
pot_request.video_id = '123abcXYZ_-' # same as visitor id to test type
assert pcs.generate_cache_spec(pot_request).key_bindings == expected
def test_no_bind_visitor_id(self, ie, logger, pot_request):
# Should not bind to visitor id if setting is set to False
pcs = WebPoPCSP(ie=ie, logger=logger, settings={'bind_to_visitor_id': ['false']})
pot_request.innertube_context['client']['clientName'] = 'WEB'
pot_request.context = PoTokenContext.GVS
pot_request.is_authenticated = False
assert pcs.generate_cache_spec(pot_request).key_bindings == {'t': 'webpo', 'ip': None, 'sa': None, 'px': None, 'cb': 'CgsxMjNhYmNYWVpfLSiA4s%2DqBg%3D%3D', 'cbt': 'visitor_data'}
def test_default_ttl(self, ie, logger, pot_request):
pcs = WebPoPCSP(ie=ie, logger=logger, settings={})
assert pcs.generate_cache_spec(pot_request).default_ttl == 6 * 60 * 60 # should default to 6 hours
def test_write_policy(self, ie, logger, pot_request):
pcs = WebPoPCSP(ie=ie, logger=logger, settings={})
pot_request.context = PoTokenContext.GVS
assert pcs.generate_cache_spec(pot_request).write_policy == CacheProviderWritePolicy.WRITE_ALL
pot_request.context = PoTokenContext.PLAYER
assert pcs.generate_cache_spec(pot_request).write_policy == CacheProviderWritePolicy.WRITE_FIRST

File diff suppressed because it is too large.


@@ -0,0 +1,629 @@
import pytest
from yt_dlp.extractor.youtube.pot._provider import IEContentProvider
from yt_dlp.cookies import YoutubeDLCookieJar
from yt_dlp.utils.networking import HTTPHeaderDict
from yt_dlp.extractor.youtube.pot.provider import (
PoTokenRequest,
PoTokenContext,
ExternalRequestFeature,
)
from yt_dlp.extractor.youtube.pot.cache import (
PoTokenCacheProvider,
PoTokenCacheSpec,
PoTokenCacheSpecProvider,
CacheProviderWritePolicy,
)
import yt_dlp.extractor.youtube.pot.cache as cache
from yt_dlp.networking import Request
from yt_dlp.extractor.youtube.pot.provider import (
PoTokenResponse,
PoTokenProvider,
PoTokenProviderRejectedRequest,
provider_bug_report_message,
register_provider,
register_preference,
)
from yt_dlp.extractor.youtube.pot._registry import _pot_providers, _ptp_preferences, _pot_pcs_providers, _pot_cache_providers, _pot_cache_provider_preferences
class ExamplePTP(PoTokenProvider):
PROVIDER_NAME = 'example'
PROVIDER_VERSION = '0.0.1'
BUG_REPORT_LOCATION = 'https://example.com/issues'
_SUPPORTED_CLIENTS = ('WEB',)
_SUPPORTED_CONTEXTS = (PoTokenContext.GVS, )
_SUPPORTED_EXTERNAL_REQUEST_FEATURES = (
ExternalRequestFeature.PROXY_SCHEME_HTTP,
ExternalRequestFeature.PROXY_SCHEME_SOCKS5H,
)
def is_available(self) -> bool:
return True
def _real_request_pot(self, request: PoTokenRequest) -> PoTokenResponse:
return PoTokenResponse('example-token', expires_at=123)
class ExampleCacheProviderPCP(PoTokenCacheProvider):
PROVIDER_NAME = 'example'
PROVIDER_VERSION = '0.0.1'
BUG_REPORT_LOCATION = 'https://example.com/issues'
def is_available(self) -> bool:
return True
def get(self, key: str):
return 'example-cache'
def store(self, key: str, value: str, expires_at: int):
pass
def delete(self, key: str):
pass
class ExampleCacheSpecProviderPCSP(PoTokenCacheSpecProvider):
PROVIDER_NAME = 'example'
PROVIDER_VERSION = '0.0.1'
BUG_REPORT_LOCATION = 'https://example.com/issues'
def generate_cache_spec(self, request: PoTokenRequest):
return PoTokenCacheSpec(
key_bindings={'field': 'example-key'},
default_ttl=60,
write_policy=CacheProviderWritePolicy.WRITE_FIRST,
)
class TestPoTokenProvider:
def test_base_type(self):
assert issubclass(PoTokenProvider, IEContentProvider)
def test_create_provider_missing_fetch_method(self, ie, logger):
class MissingMethodsPTP(PoTokenProvider):
def is_available(self) -> bool:
return True
with pytest.raises(TypeError):
MissingMethodsPTP(ie=ie, logger=logger, settings={})
def test_create_provider_missing_available_method(self, ie, logger):
class MissingMethodsPTP(PoTokenProvider):
def _real_request_pot(self, request: PoTokenRequest) -> PoTokenResponse:
raise PoTokenProviderRejectedRequest('Not implemented')
with pytest.raises(TypeError):
MissingMethodsPTP(ie=ie, logger=logger, settings={})
def test_barebones_provider(self, ie, logger):
class BarebonesProviderPTP(PoTokenProvider):
def is_available(self) -> bool:
return True
def _real_request_pot(self, request: PoTokenRequest) -> PoTokenResponse:
raise PoTokenProviderRejectedRequest('Not implemented')
provider = BarebonesProviderPTP(ie=ie, logger=logger, settings={})
assert provider.PROVIDER_NAME == 'BarebonesProvider'
assert provider.PROVIDER_KEY == 'BarebonesProvider'
assert provider.PROVIDER_VERSION == '0.0.0'
assert provider.BUG_REPORT_MESSAGE == 'please report this issue to the provider developer at (developer has not provided a bug report location) .'
def test_example_provider_success(self, ie, logger, pot_request):
provider = ExamplePTP(ie=ie, logger=logger, settings={})
assert provider.PROVIDER_NAME == 'example'
assert provider.PROVIDER_KEY == 'Example'
assert provider.PROVIDER_VERSION == '0.0.1'
assert provider.BUG_REPORT_MESSAGE == 'please report this issue to the provider developer at https://example.com/issues .'
assert provider.is_available()
response = provider.request_pot(pot_request)
assert response.po_token == 'example-token'
assert response.expires_at == 123
def test_provider_unsupported_context(self, ie, logger, pot_request):
provider = ExamplePTP(ie=ie, logger=logger, settings={})
pot_request.context = PoTokenContext.PLAYER
with pytest.raises(PoTokenProviderRejectedRequest):
provider.request_pot(pot_request)
def test_provider_unsupported_client(self, ie, logger, pot_request):
provider = ExamplePTP(ie=ie, logger=logger, settings={})
pot_request.innertube_context['client']['clientName'] = 'ANDROID'
with pytest.raises(PoTokenProviderRejectedRequest):
provider.request_pot(pot_request)
def test_provider_unsupported_proxy_scheme(self, ie, logger, pot_request):
provider = ExamplePTP(ie=ie, logger=logger, settings={})
pot_request.request_proxy = 'socks4://example.com'
with pytest.raises(
PoTokenProviderRejectedRequest,
match='External requests by "example" provider do not support proxy scheme "socks4". Supported proxy '
'schemes: http, socks5h',
):
provider.request_pot(pot_request)
pot_request.request_proxy = 'http://example.com'
assert provider.request_pot(pot_request)
def test_provider_ignore_external_request_features(self, ie, logger, pot_request):
class InternalPTP(ExamplePTP):
_SUPPORTED_EXTERNAL_REQUEST_FEATURES = None
provider = InternalPTP(ie=ie, logger=logger, settings={})
pot_request.request_proxy = 'socks5://example.com'
assert provider.request_pot(pot_request)
pot_request.request_source_address = '0.0.0.0'
assert provider.request_pot(pot_request)
def test_provider_unsupported_external_request_source_address(self, ie, logger, pot_request):
class InternalPTP(ExamplePTP):
_SUPPORTED_EXTERNAL_REQUEST_FEATURES = tuple()
provider = InternalPTP(ie=ie, logger=logger, settings={})
pot_request.request_source_address = None
assert provider.request_pot(pot_request)
pot_request.request_source_address = '0.0.0.0'
with pytest.raises(
PoTokenProviderRejectedRequest,
match='External requests by "example" provider do not support setting source address',
):
provider.request_pot(pot_request)
def test_provider_supported_external_request_source_address(self, ie, logger, pot_request):
class InternalPTP(ExamplePTP):
_SUPPORTED_EXTERNAL_REQUEST_FEATURES = (
ExternalRequestFeature.SOURCE_ADDRESS,
)
provider = InternalPTP(ie=ie, logger=logger, settings={})
pot_request.request_source_address = None
assert provider.request_pot(pot_request)
pot_request.request_source_address = '0.0.0.0'
assert provider.request_pot(pot_request)
def test_provider_unsupported_external_request_tls_verification(self, ie, logger, pot_request):
class InternalPTP(ExamplePTP):
_SUPPORTED_EXTERNAL_REQUEST_FEATURES = tuple()
provider = InternalPTP(ie=ie, logger=logger, settings={})
pot_request.request_verify_tls = True
assert provider.request_pot(pot_request)
pot_request.request_verify_tls = False
with pytest.raises(
PoTokenProviderRejectedRequest,
match='External requests by "example" provider do not support ignoring TLS certificate failures',
):
provider.request_pot(pot_request)
def test_provider_supported_external_request_tls_verification(self, ie, logger, pot_request):
class InternalPTP(ExamplePTP):
_SUPPORTED_EXTERNAL_REQUEST_FEATURES = (
ExternalRequestFeature.DISABLE_TLS_VERIFICATION,
)
provider = InternalPTP(ie=ie, logger=logger, settings={})
pot_request.request_verify_tls = True
assert provider.request_pot(pot_request)
pot_request.request_verify_tls = False
assert provider.request_pot(pot_request)
def test_provider_request_webpage(self, ie, logger, pot_request):
provider = ExamplePTP(ie=ie, logger=logger, settings={})
cookiejar = YoutubeDLCookieJar()
pot_request.request_headers = HTTPHeaderDict({'User-Agent': 'example-user-agent'})
pot_request.request_proxy = 'socks5://example-proxy.com'
pot_request.request_cookiejar = cookiejar
def mock_urlopen(request):
return request
ie._downloader.urlopen = mock_urlopen
sent_request = provider._request_webpage(Request(
'https://example.com',
), pot_request=pot_request)
assert sent_request.url == 'https://example.com'
assert sent_request.headers['User-Agent'] == 'example-user-agent'
assert sent_request.proxies == {'all': 'socks5://example-proxy.com'}
assert sent_request.extensions['cookiejar'] is cookiejar
assert 'Requesting webpage' in logger.messages['info']
def test_provider_request_webpage_override(self, ie, logger, pot_request):
provider = ExamplePTP(ie=ie, logger=logger, settings={})
cookiejar_request = YoutubeDLCookieJar()
pot_request.request_headers = HTTPHeaderDict({'User-Agent': 'example-user-agent'})
pot_request.request_proxy = 'socks5://example-proxy.com'
pot_request.request_cookiejar = cookiejar_request
def mock_urlopen(request):
return request
ie._downloader.urlopen = mock_urlopen
sent_request = provider._request_webpage(Request(
'https://example.com',
headers={'User-Agent': 'override-user-agent-override'},
proxies={'http': 'http://example-proxy-override.com'},
extensions={'cookiejar': YoutubeDLCookieJar()},
), pot_request=pot_request, note='Custom requesting webpage')
assert sent_request.url == 'https://example.com'
assert sent_request.headers['User-Agent'] == 'override-user-agent-override'
assert sent_request.proxies == {'http': 'http://example-proxy-override.com'}
assert sent_request.extensions['cookiejar'] is not cookiejar_request
assert 'Custom requesting webpage' in logger.messages['info']
def test_provider_request_webpage_no_log(self, ie, logger, pot_request):
provider = ExamplePTP(ie=ie, logger=logger, settings={})
def mock_urlopen(request):
return request
ie._downloader.urlopen = mock_urlopen
sent_request = provider._request_webpage(Request(
'https://example.com',
), note=False)
assert sent_request.url == 'https://example.com'
assert 'info' not in logger.messages
def test_provider_request_webpage_no_pot_request(self, ie, logger):
provider = ExamplePTP(ie=ie, logger=logger, settings={})
def mock_urlopen(request):
return request
ie._downloader.urlopen = mock_urlopen
sent_request = provider._request_webpage(Request(
'https://example.com',
), pot_request=None)
assert sent_request.url == 'https://example.com'
def test_get_config_arg(self, ie, logger):
provider = ExamplePTP(ie=ie, logger=logger, settings={'abc': ['123D'], 'xyz': ['456a', '789B']})
assert provider._configuration_arg('abc') == ['123d']
assert provider._configuration_arg('abc', default=['default']) == ['123d']
assert provider._configuration_arg('ABC', default=['default']) == ['default']
assert provider._configuration_arg('abc', casesense=True) == ['123D']
assert provider._configuration_arg('xyz', casesense=False) == ['456a', '789b']
def test_require_class_end_with_suffix(self, ie, logger):
class InvalidSuffix(PoTokenProvider):
PROVIDER_NAME = 'invalid-suffix'
def _real_request_pot(self, request: PoTokenRequest) -> PoTokenResponse:
raise PoTokenProviderRejectedRequest('Not implemented')
def is_available(self) -> bool:
return True
provider = InvalidSuffix(ie=ie, logger=logger, settings={})
with pytest.raises(AssertionError):
provider.PROVIDER_KEY # noqa: B018
class TestPoTokenCacheProvider:
def test_base_type(self):
assert issubclass(PoTokenCacheProvider, IEContentProvider)
def test_create_provider_missing_get_method(self, ie, logger):
class MissingMethodsPCP(PoTokenCacheProvider):
def store(self, key: str, value: str, expires_at: int):
pass
def delete(self, key: str):
pass
def is_available(self) -> bool:
return True
with pytest.raises(TypeError):
MissingMethodsPCP(ie=ie, logger=logger, settings={})
def test_create_provider_missing_store_method(self, ie, logger):
class MissingMethodsPCP(PoTokenCacheProvider):
def get(self, key: str):
pass
def delete(self, key: str):
pass
def is_available(self) -> bool:
return True
with pytest.raises(TypeError):
MissingMethodsPCP(ie=ie, logger=logger, settings={})
def test_create_provider_missing_delete_method(self, ie, logger):
class MissingMethodsPCP(PoTokenCacheProvider):
def get(self, key: str):
pass
def store(self, key: str, value: str, expires_at: int):
pass
def is_available(self) -> bool:
return True
with pytest.raises(TypeError):
MissingMethodsPCP(ie=ie, logger=logger, settings={})
def test_create_provider_missing_is_available_method(self, ie, logger):
class MissingMethodsPCP(PoTokenCacheProvider):
def get(self, key: str):
pass
def store(self, key: str, value: str, expires_at: int):
pass
def delete(self, key: str):
pass
with pytest.raises(TypeError):
MissingMethodsPCP(ie=ie, logger=logger, settings={})
def test_barebones_provider(self, ie, logger):
class BarebonesProviderPCP(PoTokenCacheProvider):
def is_available(self) -> bool:
return True
def get(self, key: str):
return 'example-cache'
def store(self, key: str, value: str, expires_at: int):
pass
def delete(self, key: str):
pass
provider = BarebonesProviderPCP(ie=ie, logger=logger, settings={})
assert provider.PROVIDER_NAME == 'BarebonesProvider'
assert provider.PROVIDER_KEY == 'BarebonesProvider'
assert provider.PROVIDER_VERSION == '0.0.0'
assert provider.BUG_REPORT_MESSAGE == 'please report this issue to the provider developer at (developer has not provided a bug report location) .'
def test_create_provider_example(self, ie, logger):
provider = ExampleCacheProviderPCP(ie=ie, logger=logger, settings={})
assert provider.PROVIDER_NAME == 'example'
assert provider.PROVIDER_KEY == 'ExampleCacheProvider'
assert provider.PROVIDER_VERSION == '0.0.1'
assert provider.BUG_REPORT_MESSAGE == 'please report this issue to the provider developer at https://example.com/issues .'
assert provider.is_available()
def test_get_config_arg(self, ie, logger):
provider = ExampleCacheProviderPCP(ie=ie, logger=logger, settings={'abc': ['123D'], 'xyz': ['456a', '789B']})
assert provider._configuration_arg('abc') == ['123d']
assert provider._configuration_arg('abc', default=['default']) == ['123d']
assert provider._configuration_arg('ABC', default=['default']) == ['default']
assert provider._configuration_arg('abc', casesense=True) == ['123D']
assert provider._configuration_arg('xyz', casesense=False) == ['456a', '789b']
def test_require_class_end_with_suffix(self, ie, logger):
class InvalidSuffix(PoTokenCacheProvider):
def get(self, key: str):
return 'example-cache'
def store(self, key: str, value: str, expires_at: int):
pass
def delete(self, key: str):
pass
def is_available(self) -> bool:
return True
provider = InvalidSuffix(ie=ie, logger=logger, settings={})
with pytest.raises(AssertionError):
provider.PROVIDER_KEY # noqa: B018
class TestPoTokenCacheSpecProvider:
def test_base_type(self):
assert issubclass(PoTokenCacheSpecProvider, IEContentProvider)
def test_create_provider_missing_supports_method(self, ie, logger):
class MissingMethodsPCS(PoTokenCacheSpecProvider):
pass
with pytest.raises(TypeError):
MissingMethodsPCS(ie=ie, logger=logger, settings={})
def test_create_provider_barebones(self, ie, pot_request, logger):
class BarebonesProviderPCSP(PoTokenCacheSpecProvider):
def generate_cache_spec(self, request: PoTokenRequest):
return PoTokenCacheSpec(
default_ttl=100,
key_bindings={},
)
provider = BarebonesProviderPCSP(ie=ie, logger=logger, settings={})
assert provider.PROVIDER_NAME == 'BarebonesProvider'
assert provider.PROVIDER_KEY == 'BarebonesProvider'
assert provider.PROVIDER_VERSION == '0.0.0'
assert provider.BUG_REPORT_MESSAGE == 'please report this issue to the provider developer at (developer has not provided a bug report location) .'
assert provider.is_available()
assert provider.generate_cache_spec(request=pot_request).default_ttl == 100
assert provider.generate_cache_spec(request=pot_request).key_bindings == {}
assert provider.generate_cache_spec(request=pot_request).write_policy == CacheProviderWritePolicy.WRITE_ALL
def test_create_provider_example(self, ie, pot_request, logger):
provider = ExampleCacheSpecProviderPCSP(ie=ie, logger=logger, settings={})
assert provider.PROVIDER_NAME == 'example'
assert provider.PROVIDER_KEY == 'ExampleCacheSpecProvider'
assert provider.PROVIDER_VERSION == '0.0.1'
assert provider.BUG_REPORT_MESSAGE == 'please report this issue to the provider developer at https://example.com/issues .'
assert provider.is_available()
assert provider.generate_cache_spec(pot_request)
assert provider.generate_cache_spec(pot_request).key_bindings == {'field': 'example-key'}
assert provider.generate_cache_spec(pot_request).default_ttl == 60
assert provider.generate_cache_spec(pot_request).write_policy == CacheProviderWritePolicy.WRITE_FIRST
def test_get_config_arg(self, ie, logger):
provider = ExampleCacheSpecProviderPCSP(ie=ie, logger=logger, settings={'abc': ['123D'], 'xyz': ['456a', '789B']})
assert provider._configuration_arg('abc') == ['123d']
assert provider._configuration_arg('abc', default=['default']) == ['123d']
assert provider._configuration_arg('ABC', default=['default']) == ['default']
assert provider._configuration_arg('abc', casesense=True) == ['123D']
assert provider._configuration_arg('xyz', casesense=False) == ['456a', '789b']
def test_require_class_end_with_suffix(self, ie, logger):
class InvalidSuffix(PoTokenCacheSpecProvider):
def generate_cache_spec(self, request: PoTokenRequest):
return None
provider = InvalidSuffix(ie=ie, logger=logger, settings={})
with pytest.raises(AssertionError):
provider.PROVIDER_KEY # noqa: B018
class TestPoTokenRequest:
def test_copy_request(self, pot_request):
copied_request = pot_request.copy()
assert copied_request is not pot_request
assert copied_request.context == pot_request.context
assert copied_request.innertube_context == pot_request.innertube_context
assert copied_request.innertube_context is not pot_request.innertube_context
copied_request.innertube_context['client']['clientName'] = 'ANDROID'
assert pot_request.innertube_context['client']['clientName'] != 'ANDROID'
assert copied_request.innertube_host == pot_request.innertube_host
assert copied_request.session_index == pot_request.session_index
assert copied_request.player_url == pot_request.player_url
assert copied_request.is_authenticated == pot_request.is_authenticated
assert copied_request.visitor_data == pot_request.visitor_data
assert copied_request.data_sync_id == pot_request.data_sync_id
assert copied_request.video_id == pot_request.video_id
assert copied_request.request_cookiejar is pot_request.request_cookiejar
assert copied_request.request_proxy == pot_request.request_proxy
assert copied_request.request_headers == pot_request.request_headers
assert copied_request.request_headers is not pot_request.request_headers
assert copied_request.request_timeout == pot_request.request_timeout
assert copied_request.request_source_address == pot_request.request_source_address
assert copied_request.request_verify_tls == pot_request.request_verify_tls
assert copied_request.bypass_cache == pot_request.bypass_cache
def test_provider_bug_report_message(ie, logger):
provider = ExamplePTP(ie=ie, logger=logger, settings={})
assert provider.BUG_REPORT_MESSAGE == 'please report this issue to the provider developer at https://example.com/issues .'
message = provider_bug_report_message(provider)
assert message == '; please report this issue to the provider developer at https://example.com/issues .'
message_before = provider_bug_report_message(provider, before='custom message!')
assert message_before == 'custom message! Please report this issue to the provider developer at https://example.com/issues .'
def test_register_provider(ie):
@register_provider
class UnavailableProviderPTP(PoTokenProvider):
def is_available(self) -> bool:
return False
def _real_request_pot(self, request: PoTokenRequest) -> PoTokenResponse:
raise PoTokenProviderRejectedRequest('Not implemented')
assert _pot_providers.value.get('UnavailableProvider') == UnavailableProviderPTP
_pot_providers.value.pop('UnavailableProvider')
def test_register_pot_preference(ie):
before = len(_ptp_preferences.value)
@register_preference(ExamplePTP)
def unavailable_preference(provider: PoTokenProvider, request: PoTokenRequest):
return 1
assert len(_ptp_preferences.value) == before + 1
def test_register_cache_provider(ie):
@cache.register_provider
class UnavailableCacheProviderPCP(PoTokenCacheProvider):
def is_available(self) -> bool:
return False
def get(self, key: str):
return 'example-cache'
def store(self, key: str, value: str, expires_at: int):
pass
def delete(self, key: str):
pass
assert _pot_cache_providers.value.get('UnavailableCacheProvider') == UnavailableCacheProviderPCP
_pot_cache_providers.value.pop('UnavailableCacheProvider')
def test_register_cache_provider_spec(ie):
@cache.register_spec
class UnavailableCacheProviderPCSP(PoTokenCacheSpecProvider):
def is_available(self) -> bool:
return False
def generate_cache_spec(self, request: PoTokenRequest):
return None
assert _pot_pcs_providers.value.get('UnavailableCacheProvider') == UnavailableCacheProviderPCSP
_pot_pcs_providers.value.pop('UnavailableCacheProvider')
def test_register_cache_provider_preference(ie):
before = len(_pot_cache_provider_preferences.value)
@cache.register_preference(ExampleCacheProviderPCP)
def unavailable_preference(provider: PoTokenCacheProvider, request: PoTokenRequest):
return 1
assert len(_pot_cache_provider_preferences.value) == before + 1
def test_logger_log_level(logger):
assert logger.LogLevel('INFO') == logger.LogLevel.INFO
assert logger.LogLevel('debuG') == logger.LogLevel.DEBUG
assert logger.LogLevel(10) == logger.LogLevel.DEBUG
assert logger.LogLevel('UNKNOWN') == logger.LogLevel.INFO


@@ -416,18 +416,8 @@
'`any` should allow further branching'
def test_traversal_morsel(self):
values = {
'expires': 'a',
'path': 'b',
'comment': 'c',
'domain': 'd',
'max-age': 'e',
'secure': 'f',
'httponly': 'g',
'version': 'h',
'samesite': 'i',
}
morsel = http.cookies.Morsel()
values = dict(zip(morsel, 'abcdefghijklmnop'))
morsel.set('item_key', 'item_value', 'coded_value')
morsel.update(values)
values['key'] = 'item_key'
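The rewritten test derives the attribute list from the `Morsel` itself rather than hard-coding it, so it keeps passing when a Python release adds a reserved cookie attribute (the 3.14 fix from the commit above). Iterating a `Morsel` yields exactly those reserved keys:

```python
import http.cookies

# A fresh Morsel is a dict pre-populated with the reserved cookie attributes
# of the running interpreter; zip() in the test pairs each with a placeholder.
print(list(http.cookies.Morsel()))
# e.g. ['expires', 'path', 'comment', 'domain', 'max-age', 'secure',
#       'httponly', 'version', 'samesite'] (newer Pythons add more)
```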


@@ -316,6 +316,18 @@ _NSIG_TESTS = [
'https://www.youtube.com/s/player/8a8ac953/tv-player-es6.vflset/tv-player-es6.js',
'MiBYeXx_vRREbiCCmh', 'RtZYMVvmkE0JE',
),
(
'https://www.youtube.com/s/player/59b252b9/player_ias.vflset/en_US/base.js',
'D3XWVpYgwhLLKNK4AGX', 'aZrQ1qWJ5yv5h',
),
(
'https://www.youtube.com/s/player/fc2a56a5/player_ias.vflset/en_US/base.js',
'qTKWg_Il804jd2kAC', 'OtUAm2W6gyzJjB9u',
),
(
'https://www.youtube.com/s/player/fc2a56a5/tv-player-ias.vflset/tv-player-ias.js',
'qTKWg_Il804jd2kAC', 'OtUAm2W6gyzJjB9u',
),
]

Binary file not shown (thumbnail test asset removed; before: 3.8 KiB).

@@ -490,7 +490,7 @@ class YoutubeDL:
The template is mapped on a dictionary with keys 'progress' and 'info'
retry_sleep_functions: Dictionary of functions that takes the number of attempts
as argument and returns the time to sleep in seconds.
Allowed keys are 'http', 'fragment', 'file_access' → Allowed keys are 'http', 'fragment', 'file_access', 'extractor'
download_ranges: A callback function that gets called for every video with
the signature (info_dict, ydl) -> Iterable[Section].
Only the returned sections will be downloaded.
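With the docstring change above, 'extractor' joins the accepted keys. A minimal usage sketch (the option values here are illustrative, not from the diff):

```python
from yt_dlp import YoutubeDL

# Each function maps the attempt number to a sleep time in seconds
ydl = YoutubeDL({
    'retry_sleep_functions': {
        'http': lambda n: 2 * n,        # linear backoff for HTTP retries
        'extractor': lambda n: 2 ** n,  # exponential backoff for extractor retries
    },
})
```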
@@ -640,6 +640,7 @@
self._printed_messages = set()
self._first_webpage_request = True
self._post_hooks = []
self._close_hooks = []
self._progress_hooks = []
self._postprocessor_hooks = []
self._download_retcode = 0
@@ -908,6 +909,11 @@
"""Add the post hook"""
self._post_hooks.append(ph)
def add_close_hook(self, ch):
"""Add a close hook, called when YoutubeDL.close() is called"""
assert callable(ch), 'Close hook must be callable'
self._close_hooks.append(ch)
def add_progress_hook(self, ph):
"""Add the download progress hook"""
self._progress_hooks.append(ph)
@@ -1016,6 +1022,9 @@
self._request_director.close()
del self._request_director
for close_hook in self._close_hooks:
close_hook()
def trouble(self, message=None, tb=None, is_error=True):
"""Determine action to take when a download problem appears.


@@ -764,11 +764,11 @@ def _get_linux_desktop_environment(env, logger):
GetDesktopEnvironment
"""
xdg_current_desktop = env.get('XDG_CURRENT_DESKTOP', None)
desktop_session = env.get('DESKTOP_SESSION', None) → desktop_session = env.get('DESKTOP_SESSION', '')
if xdg_current_desktop is not None:
for part in map(str.strip, xdg_current_desktop.split(':')):
if part == 'Unity':
if desktop_session is not None and 'gnome-fallback' in desktop_session: → if 'gnome-fallback' in desktop_session:
return _LinuxDesktopEnvironment.GNOME
else:
return _LinuxDesktopEnvironment.UNITY
@@ -797,9 +797,8 @@
return _LinuxDesktopEnvironment.UKUI
elif part == 'LXQt':
return _LinuxDesktopEnvironment.LXQT
logger.info(f'XDG_CURRENT_DESKTOP is set to an unknown value: "{xdg_current_desktop}"') → logger.debug(f'XDG_CURRENT_DESKTOP is set to an unknown value: "{xdg_current_desktop}"')
elif desktop_session is not None:
if desktop_session == 'deepin':
return _LinuxDesktopEnvironment.DEEPIN
elif desktop_session in ('mate', 'gnome'):
@@ -816,9 +815,8 @@
elif desktop_session == 'ukui':
return _LinuxDesktopEnvironment.UKUI
else:
logger.info(f'DESKTOP_SESSION is set to an unknown value: "{desktop_session}"') → logger.debug(f'DESKTOP_SESSION is set to an unknown value: "{desktop_session}"')
else:
if 'GNOME_DESKTOP_SESSION_ID' in env:
return _LinuxDesktopEnvironment.GNOME
elif 'KDE_FULL_SESSION' in env:
@@ -826,6 +824,7 @@
return _LinuxDesktopEnvironment.KDE4
else:
return _LinuxDesktopEnvironment.KDE3
return _LinuxDesktopEnvironment.OTHER

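Defaulting DESKTOP_SESSION to '' (instead of None) is what lets the rewritten function drop its None guards and fall through the checks in order. A small illustration of the simplification:

env = {'XDG_CURRENT_DESKTOP': 'Unity'}  # no DESKTOP_SESSION set

desktop_session = env.get('DESKTOP_SESSION', '')
# Substring tests are now always safe; with None this would raise TypeError:
assert ('gnome-fallback' in desktop_session) is False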
View File

@@ -300,7 +300,6 @@ from .brainpop import (
     BrainPOPIlIE,
     BrainPOPJrIE,
 )
-from .bravotv import BravoTVIE
 from .breitbart import BreitBartIE
 from .brightcove import (
     BrightcoveLegacyIE,
@@ -338,7 +337,6 @@ from .canalc2 import Canalc2IE
 from .canalplus import CanalplusIE
 from .canalsurmas import CanalsurmasIE
 from .caracoltv import CaracolTvPlayIE
-from .cartoonnetwork import CartoonNetworkIE
 from .cbc import (
     CBCIE,
     CBCGemIE,
@@ -929,7 +927,10 @@ from .jiocinema import (
 )
 from .jiosaavn import (
     JioSaavnAlbumIE,
+    JioSaavnArtistIE,
     JioSaavnPlaylistIE,
+    JioSaavnShowIE,
+    JioSaavnShowPlaylistIE,
     JioSaavnSongIE,
 )
 from .joj import JojIE
@@ -1260,6 +1261,7 @@ from .nba import (
 )
 from .nbc import (
     NBCIE,
+    BravoTVIE,
     NBCNewsIE,
     NBCOlympicsIE,
     NBCOlympicsStreamIE,
@@ -1267,6 +1269,7 @@ from .nbc import (
     NBCSportsStreamIE,
     NBCSportsVPlayerIE,
     NBCStationsIE,
+    SyfyIE,
 )
 from .ndr import (
     NDRIE,
@@ -1964,7 +1967,6 @@ from .spreaker import (
     SpreakerShowIE,
 )
 from .springboardplatform import SpringboardPlatformIE
-from .sprout import SproutIE
 from .sproutvideo import (
     SproutVideoIE,
     VidsIoIE,
@@ -2015,13 +2017,11 @@ from .sverigesradio import (
     SverigesRadioPublicationIE,
 )
 from .svt import (
-    SVTIE,
     SVTPageIE,
     SVTPlayIE,
     SVTSeriesIE,
 )
 from .swearnet import SwearnetEpisodeIE
-from .syfy import SyfyIE
 from .syvdk import SYVDKIE
 from .sztvhu import SztvHuIE
 from .tagesschau import TagesschauIE
@@ -2146,6 +2146,7 @@ from .toggle import (
 from .toggo import ToggoIE
 from .tonline import TOnlineIE
 from .toongoggles import ToonGogglesIE
+from .toutiao import ToutiaoIE
 from .toutv import TouTvIE
 from .toypics import (
     ToypicsIE,
@@ -2368,6 +2369,7 @@ from .vimeo import (
     VHXEmbedIE,
     VimeoAlbumIE,
     VimeoChannelIE,
+    VimeoEventIE,
     VimeoGroupsIE,
     VimeoIE,
     VimeoLikesIE,

View File

@@ -3,6 +3,7 @@ import json
 import re
 import time
 import urllib.parse
+import uuid
 import xml.etree.ElementTree as etree
 
 from .common import InfoExtractor
@@ -10,6 +11,7 @@ from ..networking.exceptions import HTTPError
 from ..utils import (
     NO_DEFAULT,
     ExtractorError,
+    parse_qs,
     unescapeHTML,
     unified_timestamp,
     urlencode_postdata,
@@ -45,6 +47,8 @@ MSO_INFO = {
         'name': 'Comcast XFINITY',
         'username_field': 'user',
         'password_field': 'passwd',
+        'login_hostname': 'login.xfinity.com',
+        'needs_newer_ua': True,
     },
     'TWC': {
         'name': 'Time Warner Cable | Spectrum',
@@ -74,6 +78,12 @@ MSO_INFO = {
         'name': 'Verizon FiOS',
         'username_field': 'IDToken1',
         'password_field': 'IDToken2',
+        'login_hostname': 'ssoauth.verizon.com',
+    },
+    'Fubo': {
+        'name': 'Fubo',
+        'username_field': 'username',
+        'password_field': 'password',
     },
     'Cablevision': {
         'name': 'Optimum/Cablevision',
@@ -1338,6 +1348,7 @@ MSO_INFO = {
         'name': 'Sling TV',
         'username_field': 'username',
         'password_field': 'password',
+        'login_hostname': 'identity.sling.com',
     },
     'Suddenlink': {
         'name': 'Suddenlink',
@@ -1355,7 +1366,6 @@
 class AdobePassIE(InfoExtractor):  # XXX: Conventionally, base classes should end with BaseIE/InfoExtractor
     _SERVICE_PROVIDER_TEMPLATE = 'https://sp.auth.adobe.com/adobe-services/%s'
     _USER_AGENT = 'Mozilla/5.0 (X11; Linux i686; rv:47.0) Gecko/20100101 Firefox/47.0'
-    _MODERN_USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; rv:131.0) Gecko/20100101 Firefox/131.0'
     _MVPD_CACHE = 'ap-mvpd'
     _DOWNLOADING_LOGIN_PAGE = 'Downloading Provider Login Page'
@@ -1367,6 +1377,14 @@ class AdobePassIE(InfoExtractor):
         return super()._download_webpage_handle(
             *args, **kwargs)
 
+    @staticmethod
+    def _get_mso_headers(mso_info):
+        # yt-dlp's default user-agent is usually too old for some MSO's like Comcast_SSO
+        # See: https://github.com/yt-dlp/yt-dlp/issues/10848
+        return {
+            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; rv:131.0) Gecko/20100101 Firefox/131.0',
+        } if mso_info.get('needs_newer_ua') else {}
+
     @staticmethod
     def _get_mvpd_resource(provider_id, title, guid, rating):
         channel = etree.Element('channel')
@@ -1382,7 +1400,13 @@ class AdobePassIE(InfoExtractor):
             resource_rating.text = rating
         return '<rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/">' + etree.tostring(channel).decode() + '</rss>'
 
-    def _extract_mvpd_auth(self, url, video_id, requestor_id, resource):
+    def _extract_mvpd_auth(self, url, video_id, requestor_id, resource, software_statement):
+        mso_id = self.get_param('ap_mso')
+        if mso_id:
+            mso_info = MSO_INFO[mso_id]
+        else:
+            mso_info = {}
+
         def xml_text(xml_str, tag):
             return self._search_regex(
                 f'<{tag}>(.+?)</{tag}>', xml_str, tag)
@@ -1391,15 +1415,27 @@ class AdobePassIE(InfoExtractor):
             token_expires = unified_timestamp(re.sub(r'[_ ]GMT', '', xml_text(token, date_ele)))
             return token_expires and token_expires <= int(time.time())
 
-        def post_form(form_page_res, note, data={}):
+        def post_form(form_page_res, note, data={}, validate_url=False):
             form_page, urlh = form_page_res
             post_url = self._html_search_regex(r'<form[^>]+action=(["\'])(?P<url>.+?)\1', form_page, 'post url', group='url')
             if not re.match(r'https?://', post_url):
                 post_url = urllib.parse.urljoin(urlh.url, post_url)
+            if validate_url:
+                # This request is submitting credentials so we should validate it when possible
+                url_parsed = urllib.parse.urlparse(post_url)
+                expected_hostname = mso_info.get('login_hostname')
+                if expected_hostname and expected_hostname != url_parsed.hostname:
+                    raise ExtractorError(
+                        f'Unexpected login URL hostname; expected "{expected_hostname}" but got '
+                        f'"{url_parsed.hostname}". Aborting before submitting credentials')
+                if url_parsed.scheme != 'https':
+                    self.write_debug('Upgrading login URL scheme to https')
+                    post_url = urllib.parse.urlunparse(url_parsed._replace(scheme='https'))
             form_data = self._hidden_inputs(form_page)
             form_data.update(data)
             return self._download_webpage_handle(
                 post_url, video_id, note, data=urlencode_postdata(form_data), headers={
+                    **self._get_mso_headers(mso_info),
                     'Content-Type': 'application/x-www-form-urlencoded',
                 })
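The validate_url guard refuses to post credentials to an unexpected host and upgrades plain-http form targets. A standalone sketch of the same checks, using only the standard library (names here are illustrative, not part of the diff):

import urllib.parse

def validate_login_url(post_url, expected_hostname):
    # Refuse unexpected hosts before submitting credentials, then force https
    parsed = urllib.parse.urlparse(post_url)
    if expected_hostname and parsed.hostname != expected_hostname:
        raise ValueError(f'Unexpected login URL hostname: {parsed.hostname!r}')
    if parsed.scheme != 'https':
        post_url = urllib.parse.urlunparse(parsed._replace(scheme='https'))
    return post_url

validate_login_url('http://login.xfinity.com/form', 'login.xfinity.com')
# -> 'https://login.xfinity.com/form'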
@@ -1432,19 +1468,58 @@ class AdobePassIE(InfoExtractor):
         }
         guid = xml_text(resource, 'guid') if '<' in resource else resource
-        count = 0
-        while count < 2:
+        for _ in range(2):
             requestor_info = self.cache.load(self._MVPD_CACHE, requestor_id) or {}
             authn_token = requestor_info.get('authn_token')
             if authn_token and is_expired(authn_token, 'simpleTokenExpires'):
                 authn_token = None
             if not authn_token:
-                mso_id = self.get_param('ap_mso')
-                if mso_id:
+                if not mso_id:
+                    raise_mvpd_required()
                 username, password = self._get_login_info('ap_username', 'ap_password', mso_id)
                 if not username or not password:
                     raise_mvpd_required()
-                mso_info = MSO_INFO[mso_id]
+
+                device_info, urlh = self._download_json_handle(
+                    'https://sp.auth.adobe.com/indiv/devices',
+                    video_id, 'Registering device with Adobe',
+                    data=json.dumps({'fingerprint': uuid.uuid4().hex}).encode(),
+                    headers={'Content-Type': 'application/json; charset=UTF-8'})
+
+                device_id = device_info['deviceId']
+                mvpd_headers['pass_sfp'] = urlh.get_header('pass_sfp')
+                mvpd_headers['Ap_21'] = device_id
+
+                registration = self._download_json(
+                    'https://sp.auth.adobe.com/o/client/register',
+                    video_id, 'Registering client with Adobe',
+                    data=json.dumps({'software_statement': software_statement}).encode(),
+                    headers={'Content-Type': 'application/json; charset=UTF-8'})
+
+                access_token = self._download_json(
+                    'https://sp.auth.adobe.com/o/client/token', video_id,
+                    'Obtaining access token', data=urlencode_postdata({
+                        'grant_type': 'client_credentials',
+                        'client_id': registration['client_id'],
+                        'client_secret': registration['client_secret'],
+                    }),
+                    headers={
+                        'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
+                    })['access_token']
+                mvpd_headers['Authorization'] = f'Bearer {access_token}'
+
+                reg_code = self._download_json(
+                    f'https://sp.auth.adobe.com/reggie/v1/{requestor_id}/regcode',
+                    video_id, 'Obtaining registration code',
+                    data=urlencode_postdata({
+                        'requestor': requestor_id,
+                        'deviceId': device_id,
+                        'format': 'json',
+                    }),
+                    headers={
+                        'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
+                        'Authorization': f'Bearer {access_token}',
+                    })['code']
+
                 provider_redirect_page_res = self._download_webpage_handle(
                     self._SERVICE_PROVIDER_TEMPLATE % 'authenticate/saml', video_id,
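The new registration dance above is four calls against sp.auth.adobe.com: device fingerprint, software-statement exchange, client_credentials token, and a per-requestor registration code. A condensed sketch of the same sequence, using requests purely for illustration (software_statement and requestor_id are supplied by the site extractor; the real code uses yt-dlp's own networking stack):

import json
import uuid
import requests  # illustration only

def adobe_reg_code(software_statement, requestor_id):
    s = requests.Session()
    # 1) register a device fingerprint
    device = s.post('https://sp.auth.adobe.com/indiv/devices',
                    json={'fingerprint': uuid.uuid4().hex}).json()
    # 2) trade the software statement for client credentials
    client = s.post('https://sp.auth.adobe.com/o/client/register',
                    json={'software_statement': software_statement}).json()
    # 3) client_credentials grant -> bearer token
    token = s.post('https://sp.auth.adobe.com/o/client/token', data={
        'grant_type': 'client_credentials',
        'client_id': client['client_id'],
        'client_secret': client['client_secret'],
    }).json()['access_token']
    # 4) bearer token -> per-requestor registration code
    return s.post(f'https://sp.auth.adobe.com/reggie/v1/{requestor_id}/regcode',
                  data={'requestor': requestor_id, 'deviceId': device['deviceId'], 'format': 'json'},
                  headers={'Authorization': f'Bearer {token}'}).json()['code']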
@@ -1455,17 +1530,10 @@ class AdobePassIE(InfoExtractor):
                         'no_iframe': 'false',
                         'domain_name': 'adobe.com',
                         'redirect_url': url,
-                    }, headers={
-                        # yt-dlp's default user-agent is usually too old for Comcast_SSO
-                        # See: https://github.com/yt-dlp/yt-dlp/issues/10848
-                        'User-Agent': self._MODERN_USER_AGENT,
-                    } if mso_id == 'Comcast_SSO' else None)
-            elif not self._cookies_passed:
-                raise_mvpd_required()
+                        'reg_code': reg_code,
+                    }, headers=self._get_mso_headers(mso_info))
 
-            if not mso_id:
-                pass
-            elif mso_id == 'Comcast_SSO':
+            if mso_id == 'Comcast_SSO':
                 # Comcast page flow varies by video site and whether you
                 # are on Comcast's network.
                 provider_redirect_page, urlh = provider_redirect_page_res
@@ -1489,8 +1557,8 @@ class AdobePassIE(InfoExtractor):
                     oauth_redirect_url = extract_redirect_url(
                         provider_redirect_page, fatal=True)
                     provider_login_page_res = self._download_webpage_handle(
-                        oauth_redirect_url, video_id,
-                        self._DOWNLOADING_LOGIN_PAGE)
+                        oauth_redirect_url, video_id, self._DOWNLOADING_LOGIN_PAGE,
+                        headers=self._get_mso_headers(mso_info))
                 else:
                     provider_login_page_res = post_form(
                         provider_redirect_page_res,
@@ -1500,24 +1568,35 @@ class AdobePassIE(InfoExtractor):
                     provider_login_page_res, 'Logging in', {
                         mso_info['username_field']: username,
                         mso_info['password_field']: password,
-                    })
+                    }, validate_url=True)
                 mvpd_confirm_page, urlh = mvpd_confirm_page_res
                 if '<button class="submit" value="Resume">Resume</button>' in mvpd_confirm_page:
                     post_form(mvpd_confirm_page_res, 'Confirming Login')
             elif mso_id == 'Philo':
                 # Philo has very unique authentication method
-                self._download_webpage(
-                    'https://idp.philo.com/auth/init/login_code', video_id, 'Requesting auth code', data=urlencode_postdata({
+                self._request_webpage(
+                    'https://idp.philo.com/auth/init/login_code', video_id,
+                    'Requesting Philo auth code', data=json.dumps({
                         'ident': username,
                         'device': 'web',
                         'send_confirm_link': False,
                         'send_token': True,
-                    }))
+                        'device_ident': f'web-{uuid.uuid4().hex}',
+                        'include_login_link': True,
+                    }).encode(), headers={
+                        'Content-Type': 'application/json',
+                        'Accept': 'application/json',
+                    })
+
                 philo_code = getpass.getpass('Type auth code you have received [Return]: ')
-                self._download_webpage(
-                    'https://idp.philo.com/auth/update/login_code', video_id, 'Submitting token', data=urlencode_postdata({
-                        'token': philo_code,
-                    }))
+                self._request_webpage(
+                    'https://idp.philo.com/auth/update/login_code', video_id,
+                    'Submitting token', data=json.dumps({'token': philo_code}).encode(),
+                    headers={
+                        'Content-Type': 'application/json',
+                        'Accept': 'application/json',
+                    })
+
                 mvpd_confirm_page_res = self._download_webpage_handle('https://idp.philo.com/idp/submit', video_id, 'Confirming Philo Login')
                 post_form(mvpd_confirm_page_res, 'Confirming Login')
             elif mso_id == 'Verizon':
@@ -1539,7 +1618,7 @@ class AdobePassIE(InfoExtractor):
                     provider_redirect_page_res, 'Logging in', {
                         mso_info['username_field']: username,
                         mso_info['password_field']: password,
-                    })
+                    }, validate_url=True)
                 saml_login_page, urlh = saml_login_page_res
                 if 'Please try again.' in saml_login_page:
                     raise ExtractorError(
@@ -1560,7 +1639,7 @@ class AdobePassIE(InfoExtractor):
                     [saml_login_page, saml_redirect_url], 'Logging in', {
                         mso_info['username_field']: username,
                         mso_info['password_field']: password,
-                    })
+                    }, validate_url=True)
                 if 'Please try again.' in saml_login_page:
                     raise ExtractorError(
                         'Failed to login, incorrect User ID or Password.')
@@ -1631,7 +1710,7 @@ class AdobePassIE(InfoExtractor):
                     provider_login_page_res, 'Logging in', {
                         mso_info['username_field']: username,
                         mso_info['password_field']: password,
-                    })
+                    }, validate_url=True)
 
                 provider_refresh_redirect_url = extract_redirect_url(
                     provider_association_redirect, url=urlh.url)
@@ -1682,7 +1761,7 @@ class AdobePassIE(InfoExtractor):
                     provider_login_page_res, 'Logging in', {
                         mso_info['username_field']: username,
                         mso_info['password_field']: password,
-                    })
+                    }, validate_url=True)
 
                 provider_refresh_redirect_url = extract_redirect_url(
                     provider_association_redirect, url=urlh.url)
@@ -1699,6 +1778,27 @@ class AdobePassIE(InfoExtractor):
                     query=hidden_data)
                 post_form(mvpd_confirm_page_res, 'Confirming Login')
+            elif mso_id == 'Fubo':
+                _, urlh = provider_redirect_page_res
+
+                fubo_response = self._download_json(
+                    'https://api.fubo.tv/partners/tve/connect', video_id,
+                    'Authenticating with Fubo', 'Unable to authenticate with Fubo',
+                    query=parse_qs(urlh.url), data=json.dumps({
+                        'username': username,
+                        'password': password,
+                    }).encode(), headers={
+                        'Accept': 'application/json',
+                        'Content-Type': 'application/json',
+                    })
+
+                self._request_webpage(
+                    'https://sp.auth.adobe.com/adobe-services/oauth2', video_id,
+                    'Authenticating with Adobe', 'Failed to authenticate with Adobe',
+                    query={
+                        'code': fubo_response['code'],
+                        'state': fubo_response['state'],
+                    })
             else:
                 # Some providers (e.g. DIRECTV NOW) have another meta refresh
                 # based redirect that should be followed.
@@ -1717,7 +1817,8 @@ class AdobePassIE(InfoExtractor):
                 }
                 if mso_id in ('Cablevision', 'AlticeOne'):
                     form_data['_eventId_proceed'] = ''
-                mvpd_confirm_page_res = post_form(provider_login_page_res, 'Logging in', form_data)
+                mvpd_confirm_page_res = post_form(
+                    provider_login_page_res, 'Logging in', form_data, validate_url=True)
                 if mso_id != 'Rogers':
                     post_form(mvpd_confirm_page_res, 'Confirming Login')
@@ -1727,6 +1828,7 @@ class AdobePassIE(InfoExtractor):
                     'Retrieving Session', data=urlencode_postdata({
                         '_method': 'GET',
                         'requestor_id': requestor_id,
+                        'reg_code': reg_code,
                     }), headers=mvpd_headers)
             except ExtractorError as e:
                 if not mso_id and isinstance(e.cause, HTTPError) and e.cause.status == 401:
@@ -1734,7 +1836,6 @@ class AdobePassIE(InfoExtractor):
                 raise
             if '<pendingLogout' in session:
                 self.cache.store(self._MVPD_CACHE, requestor_id, {})
-                count += 1
                 continue
             authn_token = unescapeHTML(xml_text(session, 'authnToken'))
             requestor_info['authn_token'] = authn_token
@@ -1755,7 +1856,6 @@ class AdobePassIE(InfoExtractor):
             }), headers=mvpd_headers)
             if '<pendingLogout' in authorize:
                 self.cache.store(self._MVPD_CACHE, requestor_id, {})
-                count += 1
                 continue
             if '<error' in authorize:
                 raise ExtractorError(xml_text(authorize, 'details'), expected=True)
@@ -1778,6 +1878,5 @@ class AdobePassIE(InfoExtractor):
             }), headers=mvpd_headers)
             if '<pendingLogout' in short_authorize:
                 self.cache.store(self._MVPD_CACHE, requestor_id, {})
-                count += 1
                 continue
             return short_authorize

View File

@@ -84,6 +84,8 @@ class AdultSwimIE(TurnerBaseIE):
         'skip': '404 Not Found',
     }]
 
+    _SOFTWARE_STATEMENT = 'eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiIwNjg5ZmU2My00OTc5LTQxZmQtYWYxNC1hYjVlNmJjNWVkZWIiLCJuYmYiOjE1MzcxOTA2NzQsImlzcyI6ImF1dGguYWRvYmUuY29tIiwiaWF0IjoxNTM3MTkwNjc0fQ.Xl3AEduM0s1TxDQ6-XssdKIiLm261hhsEv1C1yo_nitIajZThSI9rXILqtIzO0aujoHhdzUnu_dUCq9ffiSBzEG632tTa1la-5tegHtce80cMhewBN4n2t8n9O5tiaPx8MPY8ALdm5wS7QzWE6DO_LTJKgE8Bl7Yv-CWJT4q4SywtNiQWLVOuhBRnDyfsRezxRwptw8qTn9dv5ZzUrVJaby5fDZ_nOncMKvegOgaKd5KEuCAGQ-mg-PSuValMjGuf6FwDguGaK7IyI5Y2oOrzXmD4Dj7q4WBg8w9QoZhtLeAU56mcsGILolku2R5FHlVLO9xhjResyt-pfmegOkpSw'
+
     def _real_extract(self, url):
         show_path, episode_path = self._match_valid_url(url).groups()
         display_id = episode_path or show_path
@@ -152,7 +154,7 @@ class AdultSwimIE(TurnerBaseIE):
                     # CDN_TOKEN_APP_ID from:
                    # https://d2gg02c3xr550i.cloudfront.net/assets/asvp.e9c8bef24322d060ef87.bundle.js
                    'appId': 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhcHBJZCI6ImFzLXR2ZS1kZXNrdG9wLXB0enQ2bSIsInByb2R1Y3QiOiJ0dmUiLCJuZXR3b3JrIjoiYXMiLCJwbGF0Zm9ybSI6ImRlc2t0b3AiLCJpYXQiOjE1MzI3MDIyNzl9.BzSCk-WYOZ2GMCIaeVb8zWnzhlgnXuJTCu0jGp_VaZE',
-                }, {
+                }, self._SOFTWARE_STATEMENT, {
                     'url': url,
                     'site_name': 'AdultSwim',
                     'auth_required': auth,

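The _SOFTWARE_STATEMENT above is a signed JWT issued by auth.adobe.com. A small standard-library sketch for inspecting its claims without verifying the signature:

import base64
import json

def jwt_claims(token):
    payload = token.split('.')[1]
    payload += '=' * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# jwt_claims(_SOFTWARE_STATEMENT)
# -> {'sub': '0689fe63-...', 'nbf': 1537190674, 'iss': 'auth.adobe.com', 'iat': 1537190674}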
View File

@@ -1,3 +1,5 @@
+import json
+
 from .theplatform import ThePlatformIE
 from ..utils import (
     ExtractorError,
@@ -6,7 +8,6 @@ from ..utils import (
     remove_start,
     traverse_obj,
     update_url_query,
-    urlencode_postdata,
 )
@@ -20,13 +21,13 @@ class AENetworksBaseIE(ThePlatformIE):  # XXX: Do not subclass from concrete IE
     _THEPLATFORM_KEY = '43jXaGRQud'
     _THEPLATFORM_SECRET = 'S10BPXHMlb'
     _DOMAIN_MAP = {
-        'history.com': ('HISTORY', 'history'),
-        'aetv.com': ('AETV', 'aetv'),
-        'mylifetime.com': ('LIFETIME', 'lifetime'),
-        'lifetimemovieclub.com': ('LIFETIMEMOVIECLUB', 'lmc'),
-        'fyi.tv': ('FYI', 'fyi'),
-        'historyvault.com': (None, 'historyvault'),
-        'biography.com': (None, 'biography'),
+        'history.com': ('HISTORY', 'history', 'eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiI1MzZlMTQ3ZS0zMzFhLTQxY2YtYTMwNC01MDA2NzNlOGYwYjYiLCJuYmYiOjE1Mzg2NjMzMDksImlzcyI6ImF1dGguYWRvYmUuY29tIiwiaWF0IjoxNTM4NjYzMzA5fQ.n24-FVHLGXJe2D4atIQZ700aiXKIajKh5PWFoHJ40Az4itjtwwSFHnvufnoal3T8lYkwNLxce7H-IEGxIykRkZEdwq09pMKMT-ft9ASzE4vQ8fAWbf5ZgDME86x4Jq_YaxkRc9Ne0eShGhl8fgTJHvk07sfWcol61HJ7kU7K8FzzcHR0ucFQgA5VNd8RyjoGWY7c6VxnXR214LOpXsywmit04-vGJC102b_WA2EQfqI93UzG6M6l0EeV4n0_ijP3s8_i8WMJZ_uwnTafCIY6G_731i01dKXDLSFzG1vYglAwDa8DTcdrAAuIFFDF6QNGItCCmwbhjufjmoeVb7R1Gg'),
+        'aetv.com': ('AETV', 'aetv', 'eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiI5Y2IwNjg2Yy03ODUxLTRiZDUtODcyMC00MjNlZTg1YTQ1NzMiLCJuYmYiOjE1Mzg2NjMyOTAsImlzcyI6ImF1dGguYWRvYmUuY29tIiwiaWF0IjoxNTM4NjYzMjkwfQ.T5Elf0X4TndO4NEgqBas1gDxNHGPVk_daO2Ha5FBzVO6xi3zM7eavdAKfYMCN7gpWYJx03iADaVPtczO_t_aGZczDjpwJHgTUzDgvcLZAVsVDqtDIAMy3S846rPgT6UDbVoxurA7B2VTPm9phjrSXhejvd0LBO8MQL4AZ3sy2VmiPJ2noT1ily5PuHCYlkrT1fheO064duR__Cd9DQ5VTMnKjzY3Cx345CEwKDkUk5gwgxhXM-aY0eblehrq8VD81_aRM_O3tvh7nbTydHOnUpV-k_iKVi49gqz7Sf8zb6Zh5z2Uftn3vYCfE5NQuesitoRMnsH17nW7o_D59hkRgg'),
+        'mylifetime.com': ('LIFETIME', 'lifetime', 'eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJmODg0MDM1ZC1mZGRmLTRmYjgtYmRkMC05MzRhZDdiYTAwYTciLCJuYmYiOjE1NDkzOTI2NDQsImlzcyI6ImF1dGguYWRvYmUuY29tIiwiaWF0IjoxNTQ5MzkyNjQ0fQ.vkTIaCpheKdKQd__2-3ec4qkcpbAhyCTvwe5iTl922ItSQfVhpEJG4wseVSNmBTrpBi0hvLedcw6Hj1_UuzBMVuVcCqLprU-pI8recEwL0u7G-eVkylsxe1OTUm1o3V6OykXQ9KlA-QQLL1neUhdhR1n5B1LZ4cmtBmiEpfgf4rFwXD1ScFylIcaWKLBqHoRBNUmxyTmoXXvn_A-GGSj9eCizFzY8W5uBwUcsoiw2Cr1skx7PbB2RSP1I5DsoIJKG-8XV1KS7MWl-fNLjE-hVAsI9znqfEEFcPBiv3LhCP4Nf4OIs7xAselMn0M0c8igRUZhURWX_hdygUAxkbKFtQ'),
+        'fyi.tv': ('FYI', 'fyi', 'eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiIxOGZiOWM3Ny1mYmMzLTQxYTktYmE1Yi1lMzM0ZmUzNzU4NjEiLCJuYmYiOjE1ODc1ODAzNzcsImlzcyI6ImF1dGguYWRvYmUuY29tIiwiaWF0IjoxNTg3NTgwMzc3fQ.AYDuipKswmIfLBfOjHRsfc5fMV5NmJUmiJnkpiep4VEw9QiXkygFj4bN06Si5tFc5Mee5TDrGzDpV6iuKbVpLT5kuqXhAn-Wozf5zKPsg_IpdEKO7gsiCq4calt72ct44KTqtKD_hVcoxQU24_HaJsRgXzu3B-6Ff6UrmsXkyvYifYVC9v2DSkdCuA02_IrlllzVT2kRuefUXgL4vQRtTFf77uYa0RKSTG7uVkiQ_AU41eXevKlO2qgtc14Hk5cZ7-ZNrDyMCXYA5ngdIHP7Gs9PWaFXT36PFHI_rC4EfxUABPzjQFxjpP75aX5qn8SH__HbM9q3hoPWgaEaf76qIQ'),
+        'lifetimemovieclub.com': ('LIFETIMEMOVIECLUB', 'lmc', None),
+        'historyvault.com': (None, 'historyvault', None),
+        'biography.com': (None, 'biography', None),
     }
 
     def _extract_aen_smil(self, smil_url, video_id, auth=None):
@@ -71,7 +72,7 @@ class AENetworksBaseIE(ThePlatformIE):
         }
 
     def _extract_aetn_info(self, domain, filter_key, filter_value, url):
-        requestor_id, brand = self._DOMAIN_MAP[domain]
+        requestor_id, brand, software_statement = self._DOMAIN_MAP[domain]
         result = self._download_json(
             f'https://feeds.video.aetnd.com/api/v2/{brand}/videos',
             filter_value, query={f'filter[{filter_key}]': filter_value})
@@ -95,7 +96,7 @@ class AENetworksBaseIE(ThePlatformIE):
                 theplatform_metadata.get('AETN$PPL_pplProgramId') or theplatform_metadata.get('AETN$PPL_pplProgramId_OLD'),
                 traverse_obj(theplatform_metadata, ('ratings', 0, 'rating')))
             auth = self._extract_mvpd_auth(
-                url, video_id, requestor_id, resource)
+                url, video_id, requestor_id, resource, software_statement)
         info.update(self._extract_aen_smil(media_url, video_id, auth))
         info.update({
             'title': title,
@@ -132,10 +133,11 @@ class AENetworksIE(AENetworksBaseIE):
             'tags': 'count:14',
             'categories': ['Mountain Men'],
             'episode_number': 1,
-            'episode': 'Episode 1',
+            'episode': 'Winter Is Coming',
             'season': 'Season 1',
             'season_number': 1,
             'series': 'Mountain Men',
+            'age_limit': 0,
         },
         'params': {
             # m3u8 download
@@ -157,18 +159,18 @@ class AENetworksIE(AENetworksBaseIE):
             'thumbnail': r're:^https?://.*\.jpe?g$',
             'chapters': 'count:4',
             'tags': 'count:23',
-            'episode': 'Episode 1',
+            'episode': 'Inlawful Entry',
             'episode_number': 1,
             'season': 'Season 9',
             'season_number': 9,
             'series': 'Duck Dynasty',
+            'age_limit': 0,
         },
         'params': {
             # m3u8 download
             'skip_download': True,
         },
         'add_ie': ['ThePlatform'],
+        'skip': 'This video is only available for users of participating TV providers.',
     }, {
         'url': 'http://www.fyi.tv/shows/tiny-house-nation/season-1/episode-8',
         'only_matching': True,
@@ -203,18 +205,19 @@
 class AENetworksListBaseIE(AENetworksBaseIE):
     def _call_api(self, resource, slug, brand, fields):
         return self._download_json(
-            'https://yoga.appsvcs.aetnd.com/graphql',
-            slug, query={'brand': brand}, data=urlencode_postdata({
+            'https://yoga.appsvcs.aetnd.com/graphql', slug,
+            query={'brand': brand}, headers={'Content-Type': 'application/json'},
+            data=json.dumps({
                 'query': '''{
   %s(slug: "%s") {
     %s
   }
 }''' % (resource, slug, fields),  # noqa: UP031
-            }))['data'][resource]
+            }).encode())['data'][resource]
 
     def _real_extract(self, url):
         domain, slug = self._match_valid_url(url).groups()
-        _, brand = self._DOMAIN_MAP[domain]
+        _, brand, _ = self._DOMAIN_MAP[domain]
         playlist = self._call_api(self._RESOURCE, slug, brand, self._FIELDS)
         base_url = f'http://watch.{domain}'

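_call_api now sends the GraphQL query as a JSON body instead of form data. A standalone sketch of the equivalent request, standard library only (function name is illustrative):

import json
import urllib.request

def yoga_graphql(resource, slug, brand, fields):
    # POST the query as JSON, mirroring the headers/body the extractor now sends
    query = '{ %s(slug: "%s") { %s } }' % (resource, slug, fields)
    req = urllib.request.Request(
        f'https://yoga.appsvcs.aetnd.com/graphql?brand={brand}',
        data=json.dumps({'query': query}).encode(),
        headers={'Content-Type': 'application/json'})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)['data'][resource]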
View File

@@ -1,32 +1,24 @@
-import re
-
-from .theplatform import ThePlatformIE
-from ..utils import (
-    int_or_none,
-    parse_age_limit,
-    try_get,
-    update_url_query,
-)
+from .brightcove import BrightcoveNewIE
+from .common import InfoExtractor
+from ..utils.traversal import traverse_obj
 
-class AMCNetworksIE(ThePlatformIE):  # XXX: Do not subclass from concrete IE
-    _VALID_URL = r'https?://(?:www\.)?(?P<site>amc|bbcamerica|ifc|(?:we|sundance)tv)\.com/(?P<id>(?:movies|shows(?:/[^/]+)+)/[^/?#&]+)'
+class AMCNetworksIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?(?:amc|bbcamerica|ifc|(?:we|sundance)tv)\.com/(?P<id>(?:movies|shows(?:/[^/?#]+)+)/[^/?#&]+)'
     _TESTS = [{
-        'url': 'https://www.bbcamerica.com/shows/the-graham-norton-show/videos/tina-feys-adorable-airline-themed-family-dinner--51631',
+        'url': 'https://www.amc.com/shows/dark-winds/videos/dark-winds-a-look-at-season-3--1072027',
         'info_dict': {
-            'id': '4Lq1dzOnZGt0',
+            'id': '6369261343112',
             'ext': 'mp4',
-            'title': "The Graham Norton Show - Season 28 - Tina Fey's Adorable Airline-Themed Family Dinner",
-            'description': "It turns out child stewardesses are very generous with the wine! All-new episodes of 'The Graham Norton Show' premiere Fridays at 11/10c on BBC America.",
-            'upload_date': '20201120',
-            'timestamp': 1605904350,
-            'uploader': 'AMCN',
+            'title': 'Dark Winds: A Look at Season 3',
+            'uploader_id': '6240731308001',
+            'duration': 176.427,
+            'thumbnail': r're:https://[^/]+\.boltdns\.net/.+/image\.jpg',
+            'tags': [],
+            'timestamp': 1740414792,
+            'upload_date': '20250224',
         },
-        'params': {
-            # m3u8 download
-            'skip_download': True,
-        },
-        'skip': '404 Not Found',
+        'params': {'skip_download': 'm3u8'},
     }, {
         'url': 'http://www.bbcamerica.com/shows/the-hunt/full-episodes/season-1/episode-01-the-hardest-challenge',
         'only_matching': True,
@@ -52,96 +44,18 @@ class AMCNetworksIE(InfoExtractor):
         'url': 'https://www.sundancetv.com/shows/riviera/full-episodes/season-1/episode-01-episode-1',
         'only_matching': True,
     }]
-    _REQUESTOR_ID_MAP = {
-        'amc': 'AMC',
-        'bbcamerica': 'BBCA',
-        'ifc': 'IFC',
-        'sundancetv': 'SUNDANCE',
-        'wetv': 'WETV',
-    }
 
     def _real_extract(self, url):
-        site, display_id = self._match_valid_url(url).groups()
-        requestor_id = self._REQUESTOR_ID_MAP[site]
-        page_data = self._download_json(
-            f'https://content-delivery-gw.svc.ds.amcn.com/api/v2/content/amcn/{requestor_id.lower()}/url/{display_id}',
-            display_id)['data']
-        properties = page_data.get('properties') or {}
-        query = {
-            'mbr': 'true',
-            'manifest': 'm3u',
-        }
-
-        video_player_count = 0
-        try:
-            for v in page_data['children']:
-                if v.get('type') == 'video-player':
-                    release_pid = v['properties']['currentVideo']['meta']['releasePid']
-                    tp_path = 'M_UwQC/' + release_pid
-                    media_url = 'https://link.theplatform.com/s/' + tp_path
-                    video_player_count += 1
-        except KeyError:
-            pass
-        if video_player_count > 1:
-            self.report_warning(
-                f'The JSON data has {video_player_count} video players. Only one will be extracted')
-
-        # Fall back to videoPid if releasePid not found.
-        # TODO: Fall back to videoPid if releasePid manifest uses DRM.
-        if not video_player_count:
-            tp_path = 'M_UwQC/media/' + properties['videoPid']
-            media_url = 'https://link.theplatform.com/s/' + tp_path
-
-        theplatform_metadata = self._download_theplatform_metadata(tp_path, display_id)
-        info = self._parse_theplatform_metadata(theplatform_metadata)
-        video_id = theplatform_metadata['pid']
-        title = theplatform_metadata['title']
-        rating = try_get(
-            theplatform_metadata, lambda x: x['ratings'][0]['rating'])
-        video_category = properties.get('videoCategory')
-        if video_category and video_category.endswith('-Auth'):
-            resource = self._get_mvpd_resource(
-                requestor_id, title, video_id, rating)
-            query['auth'] = self._extract_mvpd_auth(
-                url, video_id, requestor_id, resource)
-        media_url = update_url_query(media_url, query)
-        formats, subtitles = self._extract_theplatform_smil(
-            media_url, video_id)
-
-        thumbnails = []
-        thumbnail_urls = [properties.get('imageDesktop')]
-        if 'thumbnail' in info:
-            thumbnail_urls.append(info.pop('thumbnail'))
-        for thumbnail_url in thumbnail_urls:
-            if not thumbnail_url:
-                continue
-            mobj = re.search(r'(\d+)x(\d+)', thumbnail_url)
-            thumbnails.append({
-                'url': thumbnail_url,
-                'width': int(mobj.group(1)) if mobj else None,
-                'height': int(mobj.group(2)) if mobj else None,
-            })
-
-        info.update({
-            'age_limit': parse_age_limit(rating),
-            'formats': formats,
-            'id': video_id,
-            'subtitles': subtitles,
-            'thumbnails': thumbnails,
-        })
-        ns_keys = theplatform_metadata.get('$xmlns', {}).keys()
-        if ns_keys:
-            ns = next(iter(ns_keys))
-            episode = theplatform_metadata.get(ns + '$episodeTitle') or None
-            episode_number = int_or_none(
-                theplatform_metadata.get(ns + '$episode'))
-            season_number = int_or_none(
-                theplatform_metadata.get(ns + '$season'))
-            series = theplatform_metadata.get(ns + '$show') or None
-            info.update({
-                'episode': episode,
-                'episode_number': episode_number,
-                'season_number': season_number,
-                'series': series,
-            })
-        return info
+        display_id = self._match_id(url)
+        webpage = self._download_webpage(url, display_id)
+        initial_data = self._search_json(
+            r'window\.initialData\s*=\s*JSON\.parse\(String\.raw`', webpage, 'initial data', display_id)
+        video_id = traverse_obj(initial_data, ('initialData', 'properties', 'videoId', {str}))
+        if not video_id:  # All locked videos are now DRM-protected
+            self.report_drm(display_id)
+        account_id = initial_data['config']['brightcove']['accountId']
+        player_id = initial_data['config']['brightcove']['playerId']
+
+        return self.url_result(
+            f'https://players.brightcove.net/{account_id}/{player_id}_default/index.html?videoId={video_id}',
+            BrightcoveNewIE, video_id)

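The rewritten extractor reads the page's embedded state, which the site emits as window.initialData = JSON.parse(String.raw`{...}`). A rough standalone equivalent of that lookup (the extractor itself uses _search_json; this regex is an approximation):

import json
import re

def search_initial_data(webpage):
    # Grab the JSON literal inside the String.raw template and parse it
    m = re.search(
        r'window\.initialData\s*=\s*JSON\.parse\(String\.raw`\s*(\{.*?\})\s*`\)',
        webpage, re.DOTALL)
    return json.loads(m.group(1)) if m else None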
View File

@@ -816,6 +816,26 @@ class BiliBiliBangumiIE(BilibiliBaseIE):
             'upload_date': '20111104',
             'thumbnail': r're:^https?://.*\.(jpg|jpeg|png)$',
         },
+    }, {
+        'note': 'new playurlSSRData scheme',
+        'url': 'https://www.bilibili.com/bangumi/play/ep678060',
+        'info_dict': {
+            'id': '678060',
+            'ext': 'mp4',
+            'series': '去你家吃饭好吗',
+            'series_id': '6198',
+            'season': '第二季',
+            'season_id': '42542',
+            'season_number': 2,
+            'episode': '吴老二:你家大公鸡养不熟,能煮熟吗…',
+            'episode_id': '678060',
+            'episode_number': 61,
+            'title': '一只小九九丫 吴老二:你家大公鸡养不熟,能煮熟吗…',
+            'duration': 266.123,
+            'timestamp': 1663315904,
+            'upload_date': '20220916',
+            'thumbnail': r're:^https?://.*\.(jpg|jpeg|png)$',
+        },
     }, {
         'url': 'https://www.bilibili.com/bangumi/play/ep267851',
         'info_dict': {
@@ -879,13 +899,27 @@ class BiliBiliBangumiIE(BilibiliBaseIE):
             'Extracting episode', query={'fnval': 12240, 'ep_id': episode_id},
             headers=headers))
 
+        geo_blocked = traverse_obj(play_info, (
+            'raw', 'data', 'plugins', lambda _, v: v['name'] == 'AreaLimitPanel', 'config', 'is_block', {bool}, any))
         premium_only = play_info.get('code') == -10403
-        play_info = traverse_obj(play_info, ('result', 'video_info', {dict})) or {}
-        formats = self.extract_formats(play_info)
-        if not formats and (premium_only or '成为大会员抢先看' in webpage or '开通大会员观看' in webpage):
-            self.raise_login_required('This video is for premium members only')
+
+        video_info = traverse_obj(play_info, (('result', ('raw', 'data')), 'video_info', {dict}, any)) or {}
+        formats = self.extract_formats(video_info)
+
+        if not formats:
+            if geo_blocked:
+                self.raise_geo_restricted()
+            elif premium_only or '成为大会员抢先看' in webpage or '开通大会员观看' in webpage:
+                self.raise_login_required('This video is for premium members only')
+
+        if traverse_obj(play_info, ((
+            ('result', 'play_check', 'play_detail'),  # 'PLAY_PREVIEW' vs 'PLAY_WHOLE'
+            ('raw', 'data', 'play_video_type'),  # 'preview' vs 'whole'
+        ), any, {lambda x: x in ('PLAY_PREVIEW', 'preview')})):
+            self.report_warning(
+                'Only preview format is available, '
+                f'you have to become a premium member to access full video. {self._login_hint()}')
 
         bangumi_info = self._download_json(
             'https://api.bilibili.com/pgc/view/web/season', episode_id, 'Get episode details',
             query={'ep_id': episode_id}, headers=headers)['result']
@@ -922,7 +956,7 @@ class BiliBiliBangumiIE(BilibiliBaseIE):
             'season': str_or_none(season_title),
             'season_id': str_or_none(season_id),
             'season_number': season_number,
-            'duration': float_or_none(play_info.get('timelength'), scale=1000),
+            'duration': float_or_none(video_info.get('timelength'), scale=1000),
             'subtitles': self.extract_subtitles(episode_id, episode_info.get('cid'), aid=aid),
             '__post_extractor': self.extract_comments(aid),
             'http_headers': {'Referer': url},

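The (('result', ('raw', 'data')), 'video_info', {dict}, any) path above branches over both response layouts and keeps the first dict it finds. A tiny illustration with yt-dlp's own traversal helper, using a made-up payload in the new nesting:

from yt_dlp.utils.traversal import traverse_obj

play_info = {'raw': {'data': {'video_info': {'timelength': 266123}}}}
# Tries the legacy 'result' key and the new 'raw'/'data' nesting; `any` keeps
# the first branch that yields a dict
video_info = traverse_obj(
    play_info, (('result', ('raw', 'data')), 'video_info', {dict}, any)) or {}
assert video_info == {'timelength': 266123}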
View File

@@ -1,30 +1,32 @@
 import functools
+import json
 import re
 
 from .common import InfoExtractor
 from ..networking import HEADRequest
+from ..networking.exceptions import HTTPError
 from ..utils import (
     ExtractorError,
     OnDemandPagedList,
     clean_html,
-    extract_attributes,
+    determine_ext,
+    format_field,
     get_element_by_class,
-    get_element_by_id,
-    get_element_html_by_class,
     get_elements_html_by_class,
     int_or_none,
     orderedSet,
     parse_count,
     parse_duration,
-    traverse_obj,
-    unified_strdate,
+    parse_iso8601,
+    url_or_none,
     urlencode_postdata,
     urljoin,
 )
+from ..utils.traversal import traverse_obj
 
 class BitChuteIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:(?:www|old)\.)?bitchute\.com/(?:video|embed|torrent/[^/]+)/(?P<id>[^/?#&]+)'
+    _VALID_URL = r'https?://(?:(?:www|old)\.)?bitchute\.com/(?:video|embed|torrent/[^/?#]+)/(?P<id>[^/?#&]+)'
     _EMBED_REGEX = [rf'<(?:script|iframe)[^>]+\bsrc=(["\'])(?P<url>{_VALID_URL})']
     _TESTS = [{
         'url': 'https://www.bitchute.com/video/UGlrF9o9b-Q/',
@@ -34,12 +36,17 @@ class BitChuteIE(InfoExtractor):
             'ext': 'mp4',
             'title': 'This is the first video on #BitChute !',
             'description': 'md5:a0337e7b1fe39e32336974af8173a034',
-            'thumbnail': r're:^https?://.*\.jpg$',
+            'thumbnail': r're:https?://.+/.+\.jpg$',
             'uploader': 'BitChute',
             'upload_date': '20170103',
             'uploader_url': 'https://www.bitchute.com/profile/I5NgtHZn9vPj/',
             'channel': 'BitChute',
             'channel_url': 'https://www.bitchute.com/channel/bitchute/',
+            'uploader_id': 'I5NgtHZn9vPj',
+            'channel_id': '1VBwRfyNcKdX',
+            'view_count': int,
+            'duration': 16.0,
+            'timestamp': 1483425443,
         },
     }, {
         # test case: video with different channel and uploader
@@ -49,13 +56,18 @@ class BitChuteIE(InfoExtractor):
             'id': 'Yti_j9A-UZ4',
             'ext': 'mp4',
             'title': 'Israel at War | Full Measure',
-            'description': 'md5:38cf7bc6f42da1a877835539111c69ef',
-            'thumbnail': r're:^https?://.*\.jpg$',
+            'description': 'md5:e60198b89971966d6030d22b3268f08f',
+            'thumbnail': r're:https?://.+/.+\.jpg$',
             'uploader': 'sharylattkisson',
             'upload_date': '20231106',
             'uploader_url': 'https://www.bitchute.com/profile/9K0kUWA9zmd9/',
             'channel': 'Full Measure with Sharyl Attkisson',
             'channel_url': 'https://www.bitchute.com/channel/sharylattkisson/',
+            'uploader_id': '9K0kUWA9zmd9',
+            'channel_id': 'NpdxoCRv3ZLb',
+            'view_count': int,
+            'duration': 554.0,
+            'timestamp': 1699296106,
         },
     }, {
         # video not downloadable in browser, but we can recover it
@@ -66,25 +78,21 @@ class BitChuteIE(InfoExtractor):
             'ext': 'mp4',
             'filesize': 71537926,
             'title': 'STYXHEXENHAMMER666 - Election Fraud, Clinton 2020, EU Armies, and Gun Control',
-            'description': 'md5:228ee93bd840a24938f536aeac9cf749',
-            'thumbnail': r're:^https?://.*\.jpg$',
+            'description': 'md5:2029c7c212ccd4b040f52bb2d036ef4e',
+            'thumbnail': r're:https?://.+/.+\.jpg$',
             'uploader': 'BitChute',
             'upload_date': '20181113',
             'uploader_url': 'https://www.bitchute.com/profile/I5NgtHZn9vPj/',
             'channel': 'BitChute',
             'channel_url': 'https://www.bitchute.com/channel/bitchute/',
+            'uploader_id': 'I5NgtHZn9vPj',
+            'channel_id': '1VBwRfyNcKdX',
+            'view_count': int,
+            'duration': 1701.0,
+            'tags': ['bitchute'],
+            'timestamp': 1542130287,
         },
         'params': {'check_formats': None},
-    }, {
-        # restricted video
-        'url': 'https://www.bitchute.com/video/WEnQU7XGcTdl/',
-        'info_dict': {
-            'id': 'WEnQU7XGcTdl',
-            'ext': 'mp4',
-            'title': 'Impartial Truth - Ein Letzter Appell an die Vernunft',
-        },
-        'params': {'skip_download': True},
-        'skip': 'Georestricted in DE',
     }, {
         'url': 'https://www.bitchute.com/embed/lbb5G1hjPhw/',
         'only_matching': True,
@@ -96,11 +104,8 @@ class BitChuteIE(InfoExtractor):
         'only_matching': True,
     }]
     _GEO_BYPASS = False
-
-    _HEADERS = {
-        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.57 Safari/537.36',
-        'Referer': 'https://www.bitchute.com/',
-    }
+    _UPLOADER_URL_TMPL = 'https://www.bitchute.com/profile/%s/'
+    _CHANNEL_URL_TMPL = 'https://www.bitchute.com/channel/%s/'
 
     def _check_format(self, video_url, video_id):
         urls = orderedSet(
@@ -112,7 +117,7 @@ class BitChuteIE(InfoExtractor):
         for url in urls:
             try:
                 response = self._request_webpage(
-                    HEADRequest(url), video_id=video_id, note=f'Checking {url}', headers=self._HEADERS)
+                    HEADRequest(url), video_id=video_id, note=f'Checking {url}')
             except ExtractorError as e:
                 self.to_screen(f'{video_id}: URL is invalid, skipping: {e.cause}')
                 continue
@@ -121,54 +126,79 @@ class BitChuteIE(InfoExtractor):
                 'filesize': int_or_none(response.headers.get('Content-Length')),
             }
 
-    def _raise_if_restricted(self, webpage):
-        page_title = clean_html(get_element_by_class('page-title', webpage)) or ''
-        if re.fullmatch(r'(?:Channel|Video) Restricted', page_title):
-            reason = clean_html(get_element_by_id('page-detail', webpage)) or page_title
-            self.raise_geo_restricted(reason)
-
-    @staticmethod
-    def _make_url(html):
-        path = extract_attributes(get_element_html_by_class('spa', html) or '').get('href')
-        return urljoin('https://www.bitchute.com', path)
+    def _call_api(self, endpoint, data, display_id, fatal=True):
+        note = endpoint.rpartition('/')[2]
+        try:
+            return self._download_json(
+                f'https://api.bitchute.com/api/beta/{endpoint}', display_id,
+                f'Downloading {note} API JSON', f'Unable to download {note} API JSON',
+                data=json.dumps(data).encode(),
+                headers={
+                    'Accept': 'application/json',
+                    'Content-Type': 'application/json',
+                })
+        except ExtractorError as e:
+            if isinstance(e.cause, HTTPError) and e.cause.status == 403:
+                errors = '. '.join(traverse_obj(e.cause.response.read().decode(), (
+                    {json.loads}, 'errors', lambda _, v: v['context'] == 'reason', 'message', {str})))
+                if errors and 'location' in errors:
+                    # Can always be fatal since the video/media call will reach this code first
+                    self.raise_geo_restricted(errors)
+            if fatal:
+                raise
+            self.report_warning(e.msg)
 
     def _real_extract(self, url):
         video_id = self._match_id(url)
-        webpage = self._download_webpage(
-            f'https://old.bitchute.com/video/{video_id}', video_id, headers=self._HEADERS)
-        self._raise_if_restricted(webpage)
-        publish_date = clean_html(get_element_by_class('video-publish-date', webpage))
-        entries = self._parse_html5_media_entries(url, webpage, video_id)
+        data = {'video_id': video_id}
+        media_url = self._call_api('video/media', data, video_id)['media_url']
 
         formats = []
-        for format_ in traverse_obj(entries, (0, 'formats', ...)):
-            if self.get_param('check_formats') is not False:
-                format_.update(self._check_format(format_.pop('url'), video_id) or {})
-                if 'url' not in format_:
-                    continue
-            formats.append(format_)
+        if determine_ext(media_url) == 'm3u8':
+            formats.extend(
+                self._extract_m3u8_formats(media_url, video_id, 'mp4', m3u8_id='hls', live=True))
+        else:
+            if self.get_param('check_formats') is not False:
+                if fmt := self._check_format(media_url, video_id):
+                    formats.append(fmt)
+            else:
+                formats.append({'url': media_url})
 
         if not formats:
             self.raise_no_formats(
                 'Video is unavailable. Please make sure this video is playable in the browser '
                 'before reporting this issue.', expected=True, video_id=video_id)
 
-        details = get_element_by_class('details', webpage) or ''
-        uploader_html = get_element_html_by_class('creator', details) or ''
-        channel_html = get_element_html_by_class('name', details) or ''
+        video = self._call_api('video', data, video_id, fatal=False)
+        channel = None
+        if channel_id := traverse_obj(video, ('channel', 'channel_id', {str})):
+            channel = self._call_api('channel', {'channel_id': channel_id}, video_id, fatal=False)
 
         return {
+            **traverse_obj(video, {
+                'title': ('video_name', {str}),
+                'description': ('description', {str}),
+                'thumbnail': ('thumbnail_url', {url_or_none}),
+                'channel': ('channel', 'channel_name', {str}),
+                'channel_id': ('channel', 'channel_id', {str}),
+                'channel_url': ('channel', 'channel_url', {urljoin('https://www.bitchute.com/')}),
+                'uploader_id': ('profile_id', {str}),
+                'uploader_url': ('profile_id', {format_field(template=self._UPLOADER_URL_TMPL)}, filter),
+                'timestamp': ('date_published', {parse_iso8601}),
+                'duration': ('duration', {parse_duration}),
+                'tags': ('hashtags', ..., {str}, filter, all, filter),
+                'view_count': ('view_count', {int_or_none}),
+                'is_live': ('state_id', {lambda x: x == 'live'}),
+            }),
+            **traverse_obj(channel, {
+                'channel': ('channel_name', {str}),
+                'channel_id': ('channel_id', {str}),
+                'channel_url': ('url_slug', {format_field(template=self._CHANNEL_URL_TMPL)}, filter),
+                'uploader': ('profile_name', {str}),
+                'uploader_id': ('profile_id', {str}),
+                'uploader_url': ('profile_id', {format_field(template=self._UPLOADER_URL_TMPL)}, filter),
+            }),
             'id': video_id,
-            'title': self._html_extract_title(webpage) or self._og_search_title(webpage),
-            'description': self._og_search_description(webpage, default=None),
-            'thumbnail': self._og_search_thumbnail(webpage),
-            'uploader': clean_html(uploader_html),
-            'uploader_url': self._make_url(uploader_html),
-            'channel': clean_html(channel_html),
-            'channel_url': self._make_url(channel_html),
-            'upload_date': unified_strdate(self._search_regex(
-                r'at \d+:\d+ UTC on (.+?)\.', publish_date, 'upload date', fatal=False)),
             'formats': formats,
         }
@@ -190,7 +220,7 @@ class BitChuteChannelIE(InfoExtractor):
                 'ext': 'mp4',
                 'title': 'This is the first video on #BitChute !',
                 'description': 'md5:a0337e7b1fe39e32336974af8173a034',
-                'thumbnail': r're:^https?://.*\.jpg$',
+                'thumbnail': r're:https?://.+/.+\.jpg$',
                 'uploader': 'BitChute',
                 'upload_date': '20170103',
                 'uploader_url': 'https://www.bitchute.com/profile/I5NgtHZn9vPj/',
@@ -198,6 +228,9 @@ class BitChuteChannelIE(InfoExtractor):
                 'channel_url': 'https://www.bitchute.com/channel/bitchute/',
                 'duration': 16,
                 'view_count': int,
+                'uploader_id': 'I5NgtHZn9vPj',
+                'channel_id': '1VBwRfyNcKdX',
+                'timestamp': 1483425443,
             },
         },
     ],
@@ -213,6 +246,7 @@ class BitChuteChannelIE(InfoExtractor):
         'title': 'Bruce MacDonald and "The Light of Darkness"',
         'description': 'md5:747724ef404eebdfc04277714f81863e',
     },
+    'skip': '404 Not Found',
 }, {
     'url': 'https://old.bitchute.com/playlist/wV9Imujxasw9/',
     'only_matching': True,

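The BitChute rewrite above routes every lookup through one JSON POST endpoint. A standalone sketch of the call _call_api performs, standard library only (function name is illustrative):

import json
import urllib.request

def bitchute_api(endpoint, payload):
    # POST a JSON body to the beta API and decode the JSON response
    req = urllib.request.Request(
        f'https://api.bitchute.com/api/beta/{endpoint}',
        data=json.dumps(payload).encode(),
        headers={'Accept': 'application/json', 'Content-Type': 'application/json'})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# media_url = bitchute_api('video/media', {'video_id': 'UGlrF9o9b-Q'})['media_url']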
View File

@@ -1,188 +0,0 @@
-from .adobepass import AdobePassIE
-from ..networking import HEADRequest
-from ..utils import (
-    extract_attributes,
-    float_or_none,
-    get_element_html_by_class,
-    int_or_none,
-    merge_dicts,
-    parse_age_limit,
-    remove_end,
-    str_or_none,
-    traverse_obj,
-    unescapeHTML,
-    unified_timestamp,
-    update_url_query,
-    url_or_none,
-)
-
-
-class BravoTVIE(AdobePassIE):
-    _VALID_URL = r'https?://(?:www\.)?(?P<site>bravotv|oxygen)\.com/(?:[^/]+/)+(?P<id>[^/?#]+)'
-    _TESTS = [{
-        'url': 'https://www.bravotv.com/top-chef/season-16/episode-15/videos/the-top-chef-season-16-winner-is',
-        'info_dict': {
-            'id': '3923059',
-            'ext': 'mp4',
-            'title': 'The Top Chef Season 16 Winner Is...',
-            'description': 'Find out who takes the title of Top Chef!',
-            'upload_date': '20190314',
-            'timestamp': 1552591860,
-            'season_number': 16,
-            'episode_number': 15,
-            'series': 'Top Chef',
-            'episode': 'The Top Chef Season 16 Winner Is...',
-            'duration': 190.357,
-            'season': 'Season 16',
-            'thumbnail': r're:^https://.+\.jpg',
-        },
-        'params': {'skip_download': 'm3u8'},
-    }, {
-        'url': 'https://www.bravotv.com/top-chef/season-20/episode-1/london-calling',
-        'info_dict': {
-            'id': '9000234570',
-            'ext': 'mp4',
-            'title': 'London Calling',
-            'description': 'md5:5af95a8cbac1856bd10e7562f86bb759',
-            'upload_date': '20230310',
-            'timestamp': 1678410000,
-            'season_number': 20,
-            'episode_number': 1,
-            'series': 'Top Chef',
-            'episode': 'London Calling',
-            'duration': 3266.03,
-            'season': 'Season 20',
-            'chapters': 'count:7',
-            'thumbnail': r're:^https://.+\.jpg',
-            'age_limit': 14,
-        },
-        'params': {'skip_download': 'm3u8'},
-        'skip': 'This video requires AdobePass MSO credentials',
-    }, {
-        'url': 'https://www.oxygen.com/in-ice-cold-blood/season-1/closing-night',
-        'info_dict': {
-            'id': '3692045',
-            'ext': 'mp4',
-            'title': 'Closing Night',
-            'description': 'md5:3170065c5c2f19548d72a4cbc254af63',
-            'upload_date': '20180401',
-            'timestamp': 1522623600,
-            'season_number': 1,
-            'episode_number': 1,
-            'series': 'In Ice Cold Blood',
-            'episode': 'Closing Night',
-            'duration': 2629.051,
-            'season': 'Season 1',
-            'chapters': 'count:6',
-            'thumbnail': r're:^https://.+\.jpg',
-            'age_limit': 14,
-        },
-        'params': {'skip_download': 'm3u8'},
-        'skip': 'This video requires AdobePass MSO credentials',
-    }, {
-        'url': 'https://www.oxygen.com/in-ice-cold-blood/season-2/episode-16/videos/handling-the-horwitz-house-after-the-murder-season-2',
-        'info_dict': {
-            'id': '3974019',
-            'ext': 'mp4',
-            'title': '\'Handling The Horwitz House After The Murder (Season 2, Episode 16)',
-            'description': 'md5:f9d638dd6946a1c1c0533a9c6100eae5',
-            'upload_date': '20190617',
-            'timestamp': 1560790800,
-            'season_number': 2,
-            'episode_number': 16,
-            'series': 'In Ice Cold Blood',
-            'episode': '\'Handling The Horwitz House After The Murder (Season 2, Episode 16)',
-            'duration': 68.235,
-            'season': 'Season 2',
-            'thumbnail': r're:^https://.+\.jpg',
-            'age_limit': 14,
-        },
-        'params': {'skip_download': 'm3u8'},
-    }, {
-        'url': 'https://www.bravotv.com/below-deck/season-3/ep-14-reunion-part-1',
-        'only_matching': True,
-    }]
-
-    def _real_extract(self, url):
-        site, display_id = self._match_valid_url(url).group('site', 'id')
-        webpage = self._download_webpage(url, display_id)
-        settings = self._search_json(
-            r'<script[^>]+data-drupal-selector="drupal-settings-json"[^>]*>', webpage, 'settings', display_id)
-        tve = extract_attributes(get_element_html_by_class('tve-video-deck-app', webpage) or '')
-        query = {
-            'manifest': 'm3u',
-            'formats': 'm3u,mpeg4',
-        }
-
-        if tve:
-            account_pid = tve.get('data-mpx-media-account-pid') or 'HNK2IC'
-            account_id = tve['data-mpx-media-account-id']
-            metadata = self._parse_json(
-                tve.get('data-normalized-video', ''), display_id, fatal=False, transform_source=unescapeHTML)
-            video_id = tve.get('data-guid') or metadata['guid']
-            if tve.get('data-entitlement') == 'auth':
-                auth = traverse_obj(settings, ('tve_adobe_auth', {dict})) or {}
-                site = remove_end(site, 'tv')
-                release_pid = tve['data-release-pid']
-                resource = self._get_mvpd_resource(
-                    tve.get('data-adobe-pass-resource-id') or auth.get('adobePassResourceId') or site,
-                    tve['data-title'], release_pid, tve.get('data-rating'))
-                query.update({
-                    'switch': 'HLSServiceSecure',
-                    'auth': self._extract_mvpd_auth(
-                        url, release_pid, auth.get('adobePassRequestorId') or site, resource),
-                })
-        else:
-            ls_playlist = traverse_obj(settings, ('ls_playlist', ..., {dict}), get_all=False) or {}
-            account_pid = ls_playlist.get('mpxMediaAccountPid') or 'PHSl-B'
-            account_id = ls_playlist['mpxMediaAccountId']
-            video_id = ls_playlist['defaultGuid']
-            metadata = traverse_obj(
-                ls_playlist, ('videos', lambda _, v: v['guid'] == video_id, {dict}), get_all=False)
-
-        tp_url = f'https://link.theplatform.com/s/{account_pid}/media/guid/{account_id}/{video_id}'
-        tp_metadata = self._download_json(
-            update_url_query(tp_url, {'format': 'preview'}), video_id, fatal=False)
-
-        chapters = traverse_obj(tp_metadata, ('chapters', ..., {
-            'start_time': ('startTime', {float_or_none(scale=1000)}),
-            'end_time': ('endTime', {float_or_none(scale=1000)}),
-        }))
-        # prune pointless single chapters that span the entire duration from short videos
-        if len(chapters) == 1 and not traverse_obj(chapters, (0, 'end_time')):
-            chapters = None
-
-        m3u8_url = self._request_webpage(HEADRequest(
-            update_url_query(f'{tp_url}/stream.m3u8', query)), video_id, 'Checking m3u8 URL').url
-        if 'mpeg_cenc' in m3u8_url:
-            self.report_drm(video_id)
-        formats, subtitles = self._extract_m3u8_formats_and_subtitles(m3u8_url, video_id, 'mp4', m3u8_id='hls')
-
-        return {
-            'id': video_id,
-            'formats': formats,
-            'subtitles': subtitles,
'chapters': chapters,
**merge_dicts(traverse_obj(tp_metadata, {
'title': 'title',
'description': 'description',
'duration': ('duration', {float_or_none(scale=1000)}),
'timestamp': ('pubDate', {float_or_none(scale=1000)}),
'season_number': (('pl1$seasonNumber', 'nbcu$seasonNumber'), {int_or_none}),
'episode_number': (('pl1$episodeNumber', 'nbcu$episodeNumber'), {int_or_none}),
'series': (('pl1$show', 'nbcu$show'), (None, ...), {str}),
'episode': (('title', 'pl1$episodeNumber', 'nbcu$episodeNumber'), {str_or_none}),
'age_limit': ('ratings', ..., 'rating', {parse_age_limit}),
}, get_all=False), traverse_obj(metadata, {
'title': 'title',
'description': 'description',
'duration': ('durationInSeconds', {int_or_none}),
'timestamp': ('airDate', {unified_timestamp}),
'thumbnail': ('thumbnailUrl', {url_or_none}),
'season_number': ('seasonNumber', {int_or_none}),
'episode_number': ('episodeNumber', {int_or_none}),
'episode': 'episodeTitle',
'series': 'show',
})),
}


@@ -495,8 +495,6 @@ class BrightcoveLegacyIE(InfoExtractor):
class BrightcoveNewBaseIE(AdobePassIE):
def _parse_brightcove_metadata(self, json_data, video_id, headers={}):
-title = json_data['name'].strip()
formats, subtitles = [], {}
sources = json_data.get('sources') or []
for source in sources:
@@ -600,16 +598,18 @@ class BrightcoveNewBaseIE(AdobePassIE):
return {
'id': video_id,
-'title': title,
-'description': clean_html(json_data.get('description')),
'thumbnails': thumbnails,
'duration': duration,
-'timestamp': parse_iso8601(json_data.get('published_at')),
-'uploader_id': json_data.get('account_id'),
'formats': formats,
'subtitles': subtitles,
-'tags': json_data.get('tags', []),
'is_live': is_live,
+**traverse_obj(json_data, {
+'title': ('name', {clean_html}),
+'description': ('description', {clean_html}),
+'tags': ('tags', ..., {str}, filter, all, filter),
+'timestamp': ('published_at', {parse_iso8601}),
+'uploader_id': ('account_id', {str}),
+}),
}
@@ -645,10 +645,7 @@ class BrightcoveNewIE(BrightcoveNewBaseIE):
'uploader_id': '4036320279001',
'formats': 'mincount:39',
},
-'params': {
-# m3u8 download
-'skip_download': True,
-},
+'skip': '404 Not Found',
}, {
# playlist stream
'url': 'https://players.brightcove.net/1752604059001/S13cJdUBz_default/index.html?playlistId=5718313430001',
@@ -709,7 +706,6 @@ class BrightcoveNewIE(BrightcoveNewBaseIE):
'ext': 'mp4',
'title': 'TGD_01-032_5',
'thumbnail': r're:^https?://.*\.jpg$',
-'tags': [],
'timestamp': 1646078943,
'uploader_id': '1569565978001',
'upload_date': '20220228',
@@ -721,7 +717,6 @@ class BrightcoveNewIE(BrightcoveNewBaseIE):
'ext': 'mp4',
'title': 'TGD 01-087 (Airs 05.25.22)_Segment 5',
'thumbnail': r're:^https?://.*\.jpg$',
-'tags': [],
'timestamp': 1651604591,
'uploader_id': '1569565978001',
'upload_date': '20220503',
@@ -923,10 +918,18 @@ class BrightcoveNewIE(BrightcoveNewBaseIE):
errors = json_data.get('errors')
if errors and errors[0].get('error_subcode') == 'TVE_AUTH':
custom_fields = json_data['custom_fields']
+missing_fields = ', '.join(
+key for key in ('source_url', 'software_statement') if not smuggled_data.get(key))
+if missing_fields:
+raise ExtractorError(
+f'Missing fields in smuggled data: {missing_fields}. '
+f'This video can be only extracted from the webpage where it is embedded. '
+f'Pass the URL of the embedding webpage instead of the Brightcove URL', expected=True)
tve_token = self._extract_mvpd_auth(
smuggled_data['source_url'], video_id,
custom_fields['bcadobepassrequestorid'],
-custom_fields['bcadobepassresourceid'])
+custom_fields['bcadobepassresourceid'],
+smuggled_data['software_statement'])
json_data = self._download_json(
api_url, video_id, headers={
'Accept': f'application/json;pk={policy_key}',
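With this change, `_extract_mvpd_auth` consumes a fifth argument, and both `source_url` and `software_statement` must arrive via the smuggled data. A minimal sketch of how an embedding extractor could satisfy the new check, using yt-dlp's real `smuggle_url` helper but placeholder values (the token and URLs below are illustrative, not real):

from yt_dlp.utils import smuggle_url

bc_url = 'https://players.brightcove.net/1234567890/default_default/index.html?videoId=1111111111'
embedded = smuggle_url(bc_url, {
    'source_url': 'https://example.com/page-embedding-the-player',  # webpage the player is embedded in
    'software_statement': 'eyJhbGciOi...',  # placeholder JWT; real values are requestor-specific
})
# self.url_result(embedded, BrightcoveNewIE) then carries both fields the TVE_AUTH path requires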


@@ -1,59 +0,0 @@
from .turner import TurnerBaseIE
from ..utils import int_or_none
class CartoonNetworkIE(TurnerBaseIE):
_VALID_URL = r'https?://(?:www\.)?cartoonnetwork\.com/video/(?:[^/]+/)+(?P<id>[^/?#]+)-(?:clip|episode)\.html'
_TEST = {
'url': 'https://www.cartoonnetwork.com/video/ben-10/how-to-draw-upgrade-episode.html',
'info_dict': {
'id': '6e3375097f63874ebccec7ef677c1c3845fa850e',
'ext': 'mp4',
'title': 'How to Draw Upgrade',
'description': 'md5:2061d83776db7e8be4879684eefe8c0f',
},
'params': {
# m3u8 download
'skip_download': True,
},
}
def _real_extract(self, url):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
def find_field(global_re, name, content_re=None, value_re='[^"]+', fatal=False):
metadata_re = ''
if content_re:
metadata_re = r'|video_metadata\.content_' + content_re
return self._search_regex(
rf'(?:_cnglobal\.currentVideo\.{global_re}{metadata_re})\s*=\s*"({value_re})";',
webpage, name, fatal=fatal)
media_id = find_field('mediaId', 'media id', 'id', '[0-9a-f]{40}', True)
title = find_field('episodeTitle', 'title', '(?:episodeName|name)', fatal=True)
info = self._extract_ngtv_info(
media_id, {'networkId': 'cartoonnetwork'}, {
'url': url,
'site_name': 'CartoonNetwork',
'auth_required': find_field('authType', 'auth type') != 'unauth',
})
series = find_field(
'propertyName', 'series', 'showName') or self._html_search_meta('partOfSeries', webpage)
info.update({
'id': media_id,
'display_id': display_id,
'title': title,
'description': self._html_search_meta('description', webpage),
'series': series,
'episode': title,
})
for field in ('season', 'episode'):
field_name = field + 'Number'
info[field + '_number'] = int_or_none(find_field(
field_name, field + ' number', value_re=r'\d+') or self._html_search_meta(field_name, webpage))
return info


@@ -101,6 +101,7 @@ from ..utils import (
xpath_with_ns,
)
from ..utils._utils import _request_dump_filename
+from ..utils.jslib import devalue
class InfoExtractor:
@@ -1675,9 +1676,9 @@ class InfoExtractor:
'ext': mimetype2ext(e.get('encodingFormat')),
'title': unescapeHTML(e.get('name')),
'description': unescapeHTML(e.get('description')),
-'thumbnails': [{'url': unescapeHTML(url)}
-for url in variadic(traverse_obj(e, 'thumbnailUrl', 'thumbnailURL'))
-if url_or_none(url)],
+'thumbnails': traverse_obj(e, (('thumbnailUrl', 'thumbnailURL', 'thumbnail_url'), (None, ...), {
+'url': ({str}, {unescapeHTML}, {self._proto_relative_url}, {url_or_none}),
+})),
'duration': parse_duration(e.get('duration')),
'timestamp': unified_timestamp(e.get('uploadDate')),
# author can be an instance of 'Organization' or 'Person' types.
@@ -1795,6 +1796,63 @@ class InfoExtractor:
ret = self._parse_json(js, video_id, transform_source=functools.partial(js_to_json, vars=args), fatal=fatal)
return traverse_obj(ret, traverse) or {}
+def _resolve_nuxt_array(self, array, video_id, *, fatal=True, default=NO_DEFAULT):
+"""Resolves Nuxt rich JSON payload arrays"""
+# Ref: https://github.com/nuxt/nuxt/commit/9e503be0f2a24f4df72a3ccab2db4d3e63511f57
+# https://github.com/nuxt/nuxt/pull/19205
+if default is not NO_DEFAULT:
+fatal = False
+if not isinstance(array, list) or not array:
+error_msg = 'Unable to resolve Nuxt JSON data: invalid input'
+if fatal:
+raise ExtractorError(error_msg, video_id=video_id)
+elif default is NO_DEFAULT:
+self.report_warning(error_msg, video_id=video_id)
+return {} if default is NO_DEFAULT else default
+def indirect_reviver(data):
+return data
+def json_reviver(data):
+return json.loads(data)
+gen = devalue.parse_iter(array, revivers={
+'NuxtError': indirect_reviver,
+'EmptyShallowRef': json_reviver,
+'EmptyRef': json_reviver,
+'ShallowRef': indirect_reviver,
+'ShallowReactive': indirect_reviver,
+'Ref': indirect_reviver,
+'Reactive': indirect_reviver,
+})
+while True:
+try:
+error_msg = f'Error resolving Nuxt JSON: {gen.send(None)}'
+if fatal:
+raise ExtractorError(error_msg, video_id=video_id)
+elif default is NO_DEFAULT:
+self.report_warning(error_msg, video_id=video_id, only_once=True)
+else:
+self.write_debug(f'{video_id}: {error_msg}', only_once=True)
+except StopIteration as error:
+return error.value or ({} if default is NO_DEFAULT else default)
+def _search_nuxt_json(self, webpage, video_id, *, fatal=True, default=NO_DEFAULT):
+"""Parses metadata from Nuxt rich JSON payloads embedded in HTML"""
+passed_default = default is not NO_DEFAULT
+array = self._search_json(
+r'<script\b[^>]+\bid="__NUXT_DATA__"[^>]*>', webpage,
+'Nuxt JSON data', video_id, contains_pattern=r'\[(?s:.+)\]',
+fatal=fatal, default=NO_DEFAULT if not passed_default else None)
+if not array:
+return default if passed_default else {}
+return self._resolve_nuxt_array(array, video_id, fatal=fatal, default=default)
@staticmethod
def _hidden_inputs(html):
html = re.sub(r'<!--(?:(?!<!--).)*-->', '', html)
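Taken together, `_search_nuxt_json` locates the `__NUXT_DATA__` script tag and hands the devalue-encoded array to `_resolve_nuxt_array`. A minimal sketch of a call site, for a hypothetical extractor whose payload keys are invented for illustration:

# Inside a hypothetical extractor's _real_extract
webpage = self._download_webpage(url, video_id)
nuxt_data = self._search_nuxt_json(webpage, video_id, default={})
# The traversal path below is an assumption; real Nuxt payload layouts are site-specific
title = traverse_obj(nuxt_data, ('data', ..., 'video', 'title', {str}, any))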


@@ -206,7 +206,7 @@ class DouyuTVIE(DouyuBaseIE):
'is_live': True,
**traverse_obj(room, {
'display_id': ('url', {str}, {lambda i: i[1:]}),
-'title': ('room_name', {unescapeHTML}),
+'title': ('room_name', {str}, {unescapeHTML}),
'description': ('show_details', {str}),
'uploader': ('nickname', {str}),
'thumbnail': ('room_src', {url_or_none}),


@@ -64,7 +64,7 @@ class DreiSatIE(ZDFBaseIE):
'title': 'dein buch - Das Beste von der Leipziger Buchmesse 2025 - Teil 1',
'description': 'md5:bae51bfc22f15563ce3acbf97d2e8844',
'duration': 5399.0,
-'thumbnail': 'https://www.3sat.de/assets/buchmesse-kerkeling-100~original?cb=1743329640903',
+'thumbnail': 'https://www.3sat.de/assets/buchmesse-kerkeling-100~original?cb=1747256996338',
'chapters': 'count:24',
'episode': 'dein buch - Das Beste von der Leipziger Buchmesse 2025 - Teil 1',
'episode_id': 'POS_1ef236cc-b390-401e-acd0-4fb4b04315fb',


@@ -5,7 +5,6 @@ import urllib.parse
from .adobepass import AdobePassIE
from .common import InfoExtractor
-from .once import OnceIE
from ..utils import (
determine_ext,
dict_get,
@@ -16,7 +15,7 @@ from ..utils import (
)
-class ESPNIE(OnceIE):
+class ESPNIE(InfoExtractor):
_VALID_URL = r'''(?x)
https?://
(?:
@@ -131,9 +130,7 @@ class ESPNIE(OnceIE):
return
format_urls.add(source_url)
ext = determine_ext(source_url)
-if OnceIE.suitable(source_url):
-formats.extend(self._extract_once_formats(source_url))
-elif ext == 'smil':
+if ext == 'smil':
formats.extend(self._extract_smil_formats(
source_url, video_id, fatal=False))
elif ext == 'f4m':
@@ -332,6 +329,7 @@ class WatchESPNIE(AdobePassIE):
}]
_API_KEY = 'ZXNwbiZicm93c2VyJjEuMC4w.ptUt7QxsteaRruuPmGZFaJByOoqKvDP2a5YkInHrc7c'
+_SOFTWARE_STATEMENT = 'eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiIyZGJmZWM4My03OWE1LTQyNzEtYTVmZC04NTZjYTMxMjRjNjMiLCJuYmYiOjE1NDAyMTI3NjEsImlzcyI6ImF1dGguYWRvYmUuY29tIiwiaWF0IjoxNTQwMjEyNzYxfQ.yaK3r4AI2uLVvsyN1GLzqzgzRlxMPtasSaiYYBV0wIstqih5tvjTmeoLmi8Xy9Kp_U7Md-bOffwiyK3srHkpUkhhwXLH2x6RPjmS1tPmhaG7-3LBcHTf2ySPvXhVf7cN4ngldawK4tdtLtsw6rF_JoZE2yaC6XbS2F51nXSFEDDnOQWIHEQRG3aYAj-38P2CLGf7g-Yfhbp5cKXeksHHQ90u3eOO4WH0EAjc9oO47h33U8KMEXxJbvjV5J8Va2G2fQSgLDZ013NBI3kQnE313qgqQh2feQILkyCENpB7g-TVBreAjOaH1fU471htSoGGYepcAXv-UDtpgitDiLy7CQ'
def _call_bamgrid_api(self, path, video_id, payload=None, headers={}):
if 'Authorization' not in headers:
@@ -408,8 +406,8 @@
# TV Provider required
else:
-resource = self._get_mvpd_resource('ESPN', video_data['name'], video_id, None)
-auth = self._extract_mvpd_auth(url, video_id, 'ESPN', resource).encode()
+resource = self._get_mvpd_resource('espn1', video_data['name'], video_id, None)
+auth = self._extract_mvpd_auth(url, video_id, 'ESPN', resource, self._SOFTWARE_STATEMENT).encode()
asset = self._download_json(
f'https://watch.auth.api.espn.com/video/auth/media/{video_id}/asset?apikey=uiqlbgzdwuru14v627vdusswb',


@@ -2,11 +2,15 @@ import urllib.parse
from .common import InfoExtractor
from ..utils import (
+determine_ext,
int_or_none,
-qualities,
+join_nonempty,
+mimetype2ext,
+parse_qs,
unified_strdate,
url_or_none,
)
+from ..utils.traversal import traverse_obj
class FirstTVIE(InfoExtractor):
@@ -15,40 +19,51 @@ class FirstTVIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?(?:sport)?1tv\.ru/(?:[^/?#]+/)+(?P<id>[^/?#]+)'
_TESTS = [{
-# single format
-'url': 'http://www.1tv.ru/shows/naedine-so-vsemi/vypuski/gost-lyudmila-senchina-naedine-so-vsemi-vypusk-ot-12-02-2015',
-'md5': 'a1b6b60d530ebcf8daacf4565762bbaf',
+# single format; has item.id
+'url': 'https://www.1tv.ru/shows/naedine-so-vsemi/vypuski/gost-lyudmila-senchina-naedine-so-vsemi-vypusk-ot-12-02-2015',
+'md5': '8011ae8e88ff4150107ab9c5a8f5b659',
'info_dict': {
'id': '40049',
'ext': 'mp4',
'title': 'Гость Людмила Сенчина. Наедине со всеми. Выпуск от 12.02.2015',
-'thumbnail': r're:^https?://.*\.(?:jpg|JPG)$',
+'thumbnail': r're:https?://.+/.+\.jpg',
'upload_date': '20150212',
'duration': 2694,
},
+'params': {'skip_download': 'm3u8'},
}, {
-# multiple formats
-'url': 'http://www.1tv.ru/shows/dobroe-utro/pro-zdorove/vesennyaya-allergiya-dobroe-utro-fragment-vypuska-ot-07042016',
+# multiple formats; has item.id
+'url': 'https://www.1tv.ru/shows/dobroe-utro/pro-zdorove/vesennyaya-allergiya-dobroe-utro-fragment-vypuska-ot-07042016',
'info_dict': {
'id': '364746',
'ext': 'mp4',
'title': 'Весенняя аллергия. Доброе утро. Фрагмент выпуска от 07.04.2016',
-'thumbnail': r're:^https?://.*\.(?:jpg|JPG)$',
+'thumbnail': r're:https?://.+/.+\.jpg',
'upload_date': '20160407',
'duration': 179,
'formats': 'mincount:3',
},
-'params': {
-'skip_download': True,
-},
+'params': {'skip_download': 'm3u8'},
}, {
-'url': 'http://www.1tv.ru/news/issue/2016-12-01/14:00',
+'url': 'https://www.1tv.ru/news/issue/2016-12-01/14:00',
'info_dict': {
'id': '14:00',
-'title': 'Выпуск новостей в 14:00 1 декабря 2016 года. Новости. Первый канал',
-'description': 'md5:2e921b948f8c1ff93901da78ebdb1dfd',
+'title': 'Выпуск программы «Время» в 20:00 1 декабря 2016 года. Новости. Первый канал',
+'thumbnail': 'https://static.1tv.ru/uploads/photo/image/8/big/338448_big_8fc7eb236f.jpg',
},
'playlist_count': 13,
+}, {
+# has timestamp; has item.uid but not item.id
+'url': 'https://www.1tv.ru/shows/segodnya-vecherom/vypuski/avtory-odnogo-hita-segodnya-vecherom-vypusk-ot-03-05-2025',
+'info_dict': {
+'id': '270411',
+'ext': 'mp4',
+'title': 'Авторы одного хита. Сегодня вечером. Выпуск от 03.05.2025',
+'thumbnail': r're:https?://.+/.+\.jpg',
+'timestamp': 1746286020,
+'upload_date': '20250503',
+},
+'params': {'skip_download': 'm3u8'},
}, {
'url': 'http://www.1tv.ru/shows/tochvtoch-supersezon/vystupleniya/evgeniy-dyatlov-vladimir-vysockiy-koni-priveredlivye-toch-v-toch-supersezon-fragment-vypuska-ot-06-11-2016',
'only_matching': True,
@@ -57,96 +72,60 @@ class FirstTVIE(InfoExtractor):
'only_matching': True,
}]
+def _entries(self, items):
+for item in items:
+video_id = str(item.get('id') or item['uid'])
+formats, subtitles = [], {}
+for f in traverse_obj(item, ('sources', lambda _, v: url_or_none(v['src']))):
+src = f['src']
+ext = mimetype2ext(f.get('type'), default=determine_ext(src))
+if ext == 'm3u8':
+fmts, subs = self._extract_m3u8_formats_and_subtitles(
+src, video_id, 'mp4', m3u8_id='hls', fatal=False)
+elif ext == 'mpd':
+fmts, subs = self._extract_mpd_formats_and_subtitles(
+src, video_id, mpd_id='dash', fatal=False)
+else:
+tbr = self._search_regex(fr'_(\d{{3,}})\.{ext}', src, 'tbr', default=None)
+formats.append({
+'url': src,
+'ext': ext,
+'format_id': join_nonempty('http', ext, tbr),
+'tbr': int_or_none(tbr),
+# quality metadata of http formats may be incorrect
+'quality': -10,
+})
+continue
+formats.extend(fmts)
+self._merge_subtitles(subs, target=subtitles)
+yield {
+**traverse_obj(item, {
+'title': ('title', {str}),
+'thumbnail': ('poster', {url_or_none}),
+'timestamp': ('dvr_begin_at', {int_or_none}),
+'upload_date': ('date_air', {unified_strdate}),
+'duration': ('duration', {int_or_none}),
+}),
+'id': video_id,
+'formats': formats,
+'subtitles': subtitles,
+}
def _real_extract(self, url):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
-playlist_url = urllib.parse.urljoin(url, self._search_regex(
+playlist_url = urllib.parse.urljoin(url, self._html_search_regex(
r'data-playlist-url=(["\'])(?P<url>(?:(?!\1).)+)\1',
webpage, 'playlist url', group='url'))
-parsed_url = urllib.parse.urlparse(playlist_url)
-qs = urllib.parse.parse_qs(parsed_url.query)
-item_ids = qs.get('videos_ids[]') or qs.get('news_ids[]')
-items = self._download_json(playlist_url, display_id)
-if item_ids:
-items = [
-item for item in items
-if item.get('uid') and str(item['uid']) in item_ids]
-else:
-items = [items[0]]
-entries = []
-QUALITIES = ('ld', 'sd', 'hd')
-for item in items:
-title = item['title']
-quality = qualities(QUALITIES)
-formats = []
-path = None
-for f in item.get('mbr', []):
-src = url_or_none(f.get('src'))
-if not src:
-continue
-tbr = int_or_none(self._search_regex(
-r'_(\d{3,})\.mp4', src, 'tbr', default=None))
-if not path:
-path = self._search_regex(
-r'//[^/]+/(.+?)_\d+\.mp4', src,
-'m3u8 path', default=None)
-formats.append({
-'url': src,
-'format_id': f.get('name'),
-'tbr': tbr,
-'source_preference': quality(f.get('name')),
-# quality metadata of http formats may be incorrect
-'preference': -10,
-})
-# m3u8 URL format is reverse engineered from [1] (search for
-# master.m3u8). dashEdges (that is currently balancer-vod.1tv.ru)
-# is taken from [2].
-# 1. http://static.1tv.ru/player/eump1tv-current/eump-1tv.all.min.js?rnd=9097422834:formatted
-# 2. http://static.1tv.ru/player/eump1tv-config/config-main.js?rnd=9097422834
-if not path and len(formats) == 1:
-path = self._search_regex(
-r'//[^/]+/(.+?$)', formats[0]['url'],
-'m3u8 path', default=None)
-if path:
-if len(formats) == 1:
-m3u8_path = ','
-else:
-tbrs = [str(t) for t in sorted(f['tbr'] for f in formats)]
-m3u8_path = '_,{},{}'.format(','.join(tbrs), '.mp4')
-formats.extend(self._extract_m3u8_formats(
-f'http://balancer-vod.1tv.ru/{path}{m3u8_path}.urlset/master.m3u8',
-display_id, 'mp4',
-entry_protocol='m3u8_native', m3u8_id='hls', fatal=False))
-thumbnail = item.get('poster') or self._og_search_thumbnail(webpage)
-duration = int_or_none(item.get('duration') or self._html_search_meta(
-'video:duration', webpage, 'video duration', fatal=False))
-upload_date = unified_strdate(self._html_search_meta(
-'ya:ovs:upload_date', webpage, 'upload date', default=None))
-entries.append({
-'id': str(item.get('id') or item['uid']),
-'thumbnail': thumbnail,
-'title': title,
-'upload_date': upload_date,
-'duration': int_or_none(duration),
-'formats': formats,
-})
-title = self._html_search_regex(
-(r'<div class="tv_translation">\s*<h1><a href="[^"]+">([^<]*)</a>',
-r"'title'\s*:\s*'([^']+)'"),
-webpage, 'title', default=None) or self._og_search_title(
-webpage, default=None)
-description = self._html_search_regex(
-r'<div class="descr">\s*<div>&nbsp;</div>\s*<p>([^<]*)</p></div>',
-webpage, 'description', default=None) or self._html_search_meta(
-'description', webpage, 'description', default=None)
-return self.playlist_result(entries, display_id, title, description)
+item_ids = traverse_obj(parse_qs(playlist_url), 'video_id', 'videos_ids[]', 'news_ids[]')
+items = traverse_obj(
+self._download_json(playlist_url, display_id),
+lambda _, v: v['uid'] and (str(v['uid']) in item_ids if item_ids else True))
+return self.playlist_result(
+self._entries(items), display_id, self._og_search_title(webpage, default=None),
+thumbnail=self._og_search_thumbnail(webpage, default=None))
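The rewritten `_real_extract` now takes the item IDs straight from the playlist URL's query string. For illustration, assuming a hypothetical playlist URL (the parameter names come from the code above):

from yt_dlp.utils import parse_qs
from yt_dlp.utils.traversal import traverse_obj

playlist_url = 'https://www.1tv.ru/playlist?videos_ids[]=40049&videos_ids[]=364746'  # hypothetical
item_ids = traverse_obj(parse_qs(playlist_url), 'video_id', 'videos_ids[]', 'news_ids[]')
# item_ids == ['40049', '364746']; items whose str(uid) is not in this list are dropped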


@@ -1,9 +1,9 @@
import urllib.parse
-from .once import OnceIE
+from .common import InfoExtractor
-class GameSpotIE(OnceIE):
+class GameSpotIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?gamespot\.com/(?:video|article|review)s/(?:[^/]+/\d+-|embed/)(?P<id>\d+)'
_TESTS = [{
'url': 'http://www.gamespot.com/videos/arma-3-community-guide-sitrep-i/2300-6410818/',


@@ -7,161 +7,157 @@ from ..utils import (
int_or_none,
join_nonempty,
parse_age_limit,
-remove_end,
-remove_start,
-traverse_obj,
-try_get,
unified_timestamp,
urlencode_postdata,
)
+from ..utils.traversal import traverse_obj
class GoIE(AdobePassIE):
_SITE_INFO = {
'abc': {
'brand': '001',
-'requestor_id': 'ABC',
+'requestor_id': 'dtci',
+'provider_id': 'ABC',
+'software_statement': 'eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiI4OTcwMjlkYS0yYjM1LTQyOWUtYWQ0NS02ZjZiZjVkZTdhOTUiLCJuYmYiOjE2MjAxNzM5NjksImlzcyI6ImF1dGguYWRvYmUuY29tIiwiaWF0IjoxNjIwMTczOTY5fQ.SC69DVJWSL8sIe-vVUrP6xS_kzHKqwz9PdKYexs_y-f7Vin6mM-7S-W1TE_-K55O0pyf-TL4xYgvm6LIye8CckG-nZfVwNPV4huduov0jmIcxCQFeUwkHULG2IaA44wfBVUBdaHgkhPweZ2amjycO_IXtez-gBXOLbE3B7Gx9j_5ISCFtyVUblThKfoGyQv6KT6t8Vpmc4ZSKCCQp74KWFFypydb9ucego1taW_nQD06Cdf4yByLd6NaTBceMcIKbug9b9gxFm3XBgJ5q3z7KGo1Kr6XalAV5j4m-fQ91wczlTilX8FM4AljMupyRM9mA_aEADILQ4hS79q4SM0w6w',
},
'freeform': {
'brand': '002',
'requestor_id': 'ABCFamily',
+'provider_id': 'ABCFamily',
+'software_statement': 'eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZWM2MGYyNC0xYzRjLTQ1NzQtYjc0Zi03ZmM4N2E5YWMzMzgiLCJuYmYiOjE1ODc2NjU5MjMsImlzcyI6ImF1dGguYWRvYmUuY29tIiwiaWF0IjoxNTg3NjY1OTIzfQ.flCn3dhvmvPnWmV0JV8Fm0YFyj07yPez9-n1GFEwVIm_S2wQVWbWyJhqsAyLZVFrhOMZYTqmPS3OHxGwTwXkEYn6PD7o_vIVG3oqi-Xn1m5jRt_Gazw5qEtpat6VE7bvKGSD3ZhcidOrsCk8NcYyq75u61NHDvSl81pcedJjVRVUpsqrEwmo0aVbA0C8PX3ri0mEbGvkMKvHn8E60xp-PSE-VK8SDT0plwPu_TwUszkZ6-_I8_2xcv_WBqcXFkAVg7Q-iNJXgQvmNsrpcrYuLvi6hEH4ZLtoDcXU6MhwTQAJTiHSo8x9aHX1_qFP09CzlNOFQbC2ZEJdP9SvA53SLQ',
-},
-'watchdisneychannel': {
-'brand': '004',
-'resource_id': 'Disney',
-},
-'watchdisneyjunior': {
-'brand': '008',
-'resource_id': 'DisneyJunior',
-},
-'watchdisneyxd': {
-'brand': '009',
-'resource_id': 'DisneyXD',
},
'disneynow': {
-'brand': '011',
+'brand': '011', # also: '004', '008', '009'
+'requestor_id': 'DisneyChannels',
+'provider_id': 'DisneyChannels',
+'software_statement': 'eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiI1MzAzNTRiOS04NDNiLTRkNjAtYTQ3ZS0yNzk1MzlkOTIyNTciLCJuYmYiOjE1NTg5ODc0NDksImlzcyI6ImF1dGguYWRvYmUuY29tIiwiaWF0IjoxNTU4OTg3NDQ5fQ.Jud6YS6-J2h0h6po0oMheDym0qRTJQGj4kzacrz4DFuEwhcBkkykW6pF5pKuAUJy9HCZ40oDAHe2KcTlDJjCZF5tDaUEfdihakZ9cC_rG7MU-QoRne8qaB_dPDKwGuk-ZyWD8eV3zwTJmbGo8hDxYTEU81YNCxwhyc_BPDr5TYiubbmpP3_pTnXmSpuL58isJ2peSKWlX9BacuXtBY25c_QnPFKk-_EETm7IHkTpDazde1QfHWGu4s4yJpKGk8RVVujVG6h6ELlL-ZeYLilBm7iS7h1TYG1u7fJhyZRL7isaom6NvAzsvN3ngss1fLwt8decP8wzdFHrbYTdTjW8qw',
'resource_id': 'Disney',
},
-'fxnow.fxnetworks': {
-'brand': '025',
+'fxnetworks': {
+'brand': '025', # also: '020'
'requestor_id': 'dtci',
+'provider_id': 'fx', # also 'fxx', 'fxm'
+'software_statement': 'eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiIzYWRhYWZiNC02OTAxLTRlYzktOTdmNy1lYWZkZTJkODJkN2EiLCJuYmYiOjE1NjIwMjQwNzYsImlzcyI6ImF1dGguYWRvYmUuY29tIiwiaWF0IjoxNTYyMDI0MDc2fQ.dhKMpZK50AObbZYrMiYPSfWtzXHUaeMP3jrIY4Cgfvh0GaEgk0Mns_zp78jypFeZgRtPVleQMQDNq2YEloRLcAGqP1aa6WVDglnK77ZWUm4IKai14Rwf3A6YBhSRoO2_lMmUGkuTf6gZY-kMIPqBYKqzTQiQl4HbniPFodIzFRiuI9QJVrkoyTGrJL4oqiX08PoFI3Z-TOti1Heu3EbFC-GveQHhlinYrzU7rbiAqLEz7FImtfBDsnXX1Y3uJDLYM3Bq4Oh0nrzTv1Fd62wNsCNErHHIbELidh1zZF0ujvt7ReuZUwAitm0UhEJ7OxNOUbEQWtae6pVNscvdvTFMpg',
+},
+'nationalgeographic': {
+'brand': '026', # also '023'
+'requestor_id': 'dtci',
+'provider_id': 'ngc', # also 'ngw'
+'software_statement': 'eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiIxMzE4YTM1Ni05Mjc4LTQ4NjEtYTFmNi1jMTIzMzg1ZWMzYzMiLCJuYmYiOjE1NjIwMjM4MjgsImlzcyI6ImF1dGguYWRvYmUuY29tIiwiaWF0IjoxNTYyMDIzODI4fQ.Le-2OzF9-jrhJ7ZfWtLWk5iSHGVZoxeU1w0_fO--Heli0OwRZsRq2slSmx-oZTzxuWmAgDEiBkWSDcDK6sM25DrCLsdsJa3MBuZ-slBRtH8aq3HpNoqqLkU-vg6gRUEKMtwBUtwCu_9aKUCayYtndWv4b1DjVQeSrteOW5NNudWVYleAe0kxeNJQHo5If9SCzDudKVJktFUjhNks4QPOC_uONPkRRlL9D0fNvtOY-LRFckfcHhf5z9l1iZjeukV0YhdKnuw1wyiaWrQXBUDiBfbkCRd2DM-KnelqPxfiXCaTjGKDURRBO3pz33ebge3IFXSiU5vl4qHQ8xvunzGpFw',
},
}
-_VALID_URL = r'''(?x)
-https?://
-(?P<sub_domain>
-(?:{}\.)?go|fxnow\.fxnetworks|
-(?:www\.)?(?:abc|freeform|disneynow)
-)\.com/
-(?:
-(?:[^/]+/)*(?P<id>[Vv][Dd][Kk][Aa]\w+)|
-(?:[^/]+/)*(?P<display_id>[^/?\#]+)
-)
-'''.format(r'\.|'.join(list(_SITE_INFO.keys())))
+_URL_PATH_RE = r'(?:video|episode|movies-and-specials)/(?P<id>[\da-f]{8}-(?:[\da-f]{4}-){3}[\da-f]{12})'
+_VALID_URL = [
+fr'https?://(?:www\.)?(?P<site>abc)\.com/{_URL_PATH_RE}',
+fr'https?://(?:www\.)?(?P<site>freeform)\.com/{_URL_PATH_RE}',
+fr'https?://(?:www\.)?(?P<site>disneynow)\.com/{_URL_PATH_RE}',
+fr'https?://fxnow\.(?P<site>fxnetworks)\.com/{_URL_PATH_RE}',
+fr'https?://(?:www\.)?(?P<site>nationalgeographic)\.com/tv/{_URL_PATH_RE}',
+]
_TESTS = [{
-'url': 'http://abc.go.com/shows/designated-survivor/video/most-recent/VDKA3807643',
+'url': 'https://abc.com/episode/4192c0e6-26e5-47a8-817b-ce8272b9e440/playlist/PL551127435',
'info_dict': {
-'id': 'VDKA3807643',
+'id': 'VDKA10805898',
'ext': 'mp4',
-'title': 'The Traitor in the White House',
-'description': 'md5:05b009d2d145a1e85d25111bd37222e8',
-},
-'params': {
-# m3u8 download
-'skip_download': True,
-},
-'skip': 'This content is no longer available.',
-}, {
-'url': 'https://disneynow.com/shows/big-hero-6-the-series',
-'info_dict': {
-'title': 'Doraemon',
-'id': 'SH55574025',
-},
-'playlist_mincount': 51,
-}, {
-'url': 'http://freeform.go.com/shows/shadowhunters/episodes/season-2/1-this-guilty-blood',
-'info_dict': {
-'id': 'VDKA3609139',
-'title': 'This Guilty Blood',
-'description': 'md5:f18e79ad1c613798d95fdabfe96cd292',
+'title': 'Switch the Flip',
+'description': 'To help get Brians life in order, Stewie and Brian swap bodies using a machine that Stewie invents.',
'age_limit': 14,
+'duration': 1297,
+'thumbnail': r're:https?://.+/.+\.jpg',
+'series': 'Family Guy',
+'season': 'Season 16',
+'season_number': 16,
+'episode': 'Episode 17',
+'episode_number': 17,
+'timestamp': 1746082800.0,
+'upload_date': '20250501',
+},
+'params': {'skip_download': 'm3u8'},
+'skip': 'This video requires AdobePass MSO credentials',
+}, {
+'url': 'https://disneynow.com/episode/21029660-ba06-4406-adb0-a9a78f6e265e/playlist/PL553044961',
+'info_dict': {
+'id': 'VDKA39546942',
+'ext': 'mp4',
+'title': 'Zero Friends Again',
+'description': 'Relationships fray under the pressures of a difficult journey.',
+'age_limit': 0,
+'duration': 1721,
+'thumbnail': r're:https?://.+/.+\.jpg',
+'series': 'Star Wars: Skeleton Crew',
+'season': 'Season 1',
+'season_number': 1,
+'episode': 'Episode 6',
+'episode_number': 6,
+'timestamp': 1746946800.0,
+'upload_date': '20250511',
+},
+'params': {'skip_download': 'm3u8'},
+'skip': 'This video requires AdobePass MSO credentials',
+}, {
+'url': 'https://fxnow.fxnetworks.com/episode/09f4fa6f-c293-469e-aebe-32c9ca5842a7/playlist/PL554408064',
+'info_dict': {
+'id': 'VDKA38112033',
+'ext': 'mp4',
+'title': 'The Return of Jerry',
+'description': 'The vampires long-lost fifth roommate returns. Written by Paul Simms; directed by Kyle Newacheck.',
+'age_limit': 17,
+'duration': 1493,
+'thumbnail': r're:https?://.+/.+\.jpg',
+'series': 'What We Do in the Shadows',
+'season': 'Season 6',
+'season_number': 6,
'episode': 'Episode 1',
-'upload_date': '20170102',
-'season': 'Season 2',
-'thumbnail': 'http://cdn1.edgedatg.com/aws/v2/abcf/Shadowhunters/video/201/ae5f75608d86bf88aa4f9f4aa76ab1b7/579x325-Q100_ae5f75608d86bf88aa4f9f4aa76ab1b7.jpg',
-'duration': 2544,
-'season_number': 2,
-'series': 'Shadowhunters',
'episode_number': 1,
-'timestamp': 1483387200,
-'ext': 'mp4',
-},
-'params': {
-'geo_bypass_ip_block': '3.244.239.0/24',
-# m3u8 download
-'skip_download': True,
+'timestamp': 1729573200.0,
+'upload_date': '20241022',
},
+'params': {'skip_download': 'm3u8'},
+'skip': 'This video requires AdobePass MSO credentials',
}, {
-'url': 'https://abc.com/shows/the-rookie/episode-guide/season-04/12-the-knock',
+'url': 'https://www.freeform.com/episode/bda0eaf7-761a-4838-aa44-96f794000844/playlist/PL553044961',
'info_dict': {
-'id': 'VDKA26050359',
-'title': 'The Knock',
-'description': 'md5:0c2947e3ada4c31f28296db7db14aa64',
-'age_limit': 14,
+'id': 'VDKA39007340',
'ext': 'mp4',
-'thumbnail': 'http://cdn1.edgedatg.com/aws/v2/abc/TheRookie/video/412/daf830d06e83b11eaf5c0a299d993ae3/1556x876-Q75_daf830d06e83b11eaf5c0a299d993ae3.jpg',
-'episode': 'Episode 12',
-'season_number': 4,
-'season': 'Season 4',
-'timestamp': 1642975200,
-'episode_number': 12,
-'upload_date': '20220123',
-'series': 'The Rookie',
-'duration': 2572,
-},
-'params': {
-'geo_bypass_ip_block': '3.244.239.0/24',
-# m3u8 download
-'skip_download': True,
-},
+'title': 'Angel\'s Landing',
+'description': 'md5:91bf084e785c968fab16734df7313446',
+'age_limit': 14,
+'duration': 2523,
+'thumbnail': r're:https?://.+/.+\.jpg',
+'series': 'How I Escaped My Cult',
+'season': 'Season 1',
+'season_number': 1,
+'episode': 'Episode 2',
+'episode_number': 2,
+'timestamp': 1740038400.0,
+'upload_date': '20250220',
+},
+'params': {'skip_download': 'm3u8'},
}, {
-'url': 'https://fxnow.fxnetworks.com/shows/better-things/video/vdka12782841',
+'url': 'https://www.nationalgeographic.com/tv/episode/ca694661-1186-41ae-8089-82f64d69b16d/playlist/PL554408064',
'info_dict': {
-'id': 'VDKA12782841',
-'title': 'First Look: Better Things - Season 2',
-'description': 'md5:fa73584a95761c605d9d54904e35b407',
+'id': 'VDKA39492078',
'ext': 'mp4',
-'age_limit': 14,
-'upload_date': '20170825',
-'duration': 161,
-'series': 'Better Things',
-'thumbnail': 'http://cdn1.edgedatg.com/aws/v2/fx/BetterThings/video/12782841/b6b05e58264121cc2c98811318e6d507/1556x876-Q75_b6b05e58264121cc2c98811318e6d507.jpg',
-'timestamp': 1503661074,
-},
-'params': {
-'geo_bypass_ip_block': '3.244.239.0/24',
-# m3u8 download
-'skip_download': True,
+'title': 'Heart of the Emperors',
+'description': 'md5:4fc50a2878f030bb3a7eac9124dca677',
+'age_limit': 0,
+'duration': 2775,
+'thumbnail': r're:https?://.+/.+\.jpg',
+'series': 'Secrets of the Penguins',
+'season': 'Season 1',
+'season_number': 1,
+'episode': 'Episode 1',
+'episode_number': 1,
+'timestamp': 1745204400.0,
+'upload_date': '20250421',
},
+'params': {'skip_download': 'm3u8'},
}, {
-'url': 'http://abc.go.com/shows/the-catch/episode-guide/season-01/10-the-wedding',
+'url': 'https://www.freeform.com/movies-and-specials/c38281fc-9f8f-47c7-8220-22394f9df2e1',
'only_matching': True,
}, {
-'url': 'http://abc.go.com/shows/world-news-tonight/episode-guide/2017-02/17-021717-intense-stand-off-between-man-with-rifle-and-police-in-oakland',
+'url': 'https://abc.com/video/219a454a-172c-41bf-878a-d169e6bc0bdc/playlist/PL5523098420',
-'only_matching': True,
-}, {
-# brand 004
-'url': 'http://disneynow.go.com/shows/big-hero-6-the-series/season-01/episode-10-mr-sparkles-loses-his-sparkle/vdka4637915',
-'only_matching': True,
-}, {
-# brand 008
-'url': 'http://disneynow.go.com/shows/minnies-bow-toons/video/happy-campers/vdka4872013',
-'only_matching': True,
-}, {
-'url': 'https://disneynow.com/shows/minnies-bow-toons/video/happy-campers/vdka4872013',
-'only_matching': True,
-}, {
-'url': 'https://www.freeform.com/shows/cruel-summer/episode-guide/season-01/01-happy-birthday-jeanette-turner',
'only_matching': True,
}]
@@ -171,58 +167,29 @@ class GoIE(AdobePassIE):
f'http://api.contents.watchabc.go.com/vp2/ws/contents/3000/videos/{brand}/001/-1/{show_id}/-1/{video_id}/-1/-1.json',
display_id)['video']
+def _extract_global_var(self, name, webpage, video_id):
+return self._search_json(
+fr'window\[["\']{re.escape(name)}["\']\]\s*=',
+webpage, f'{name.strip("_")} JSON', video_id)
def _real_extract(self, url):
-mobj = self._match_valid_url(url)
-sub_domain = remove_start(remove_end(mobj.group('sub_domain') or '', '.go'), 'www.')
-video_id, display_id = mobj.group('id', 'display_id')
-site_info = self._SITE_INFO.get(sub_domain, {})
-brand = site_info.get('brand')
-if not video_id or not site_info:
-webpage = self._download_webpage(url, display_id or video_id)
-data = self._parse_json(
-self._search_regex(
-r'["\']__abc_com__["\']\s*\]\s*=\s*({.+?})\s*;', webpage,
-'data', default='{}'),
-display_id or video_id, fatal=False)
-# https://abc.com/shows/modern-family/episode-guide/season-01/101-pilot
-layout = try_get(data, lambda x: x['page']['content']['video']['layout'], dict)
-video_id = None
-if layout:
-video_id = try_get(
-layout,
-(lambda x: x['videoid'], lambda x: x['video']['id']),
-str)
+site, display_id = self._match_valid_url(url).group('site', 'id')
+webpage = self._download_webpage(url, display_id)
+config = self._extract_global_var('__CONFIG__', webpage, display_id)
+data = self._extract_global_var(config['globalVar'], webpage, display_id)
+video_id = traverse_obj(data, (
+'page', 'content', 'video', 'layout', (('video', 'id'), 'videoid'), {str}, any))
if not video_id:
-video_id = self._search_regex(
-(
-# There may be inner quotes, e.g. data-video-id="'VDKA3609139'"
-# from http://freeform.go.com/shows/shadowhunters/episodes/season-2/1-this-guilty-blood
-r'data-video-id=["\']*(VDKA\w+)',
-# page.analytics.videoIdCode
-r'\bvideoIdCode["\']\s*:\s*["\']((?:vdka|VDKA)\w+)',
-# https://abc.com/shows/the-rookie/episode-guide/season-02/03-the-bet
-r'\b(?:video)?id["\']\s*:\s*["\'](VDKA\w+)',
-), webpage, 'video id', default=video_id)
-if not site_info:
-brand = self._search_regex(
-(r'data-brand=\s*["\']\s*(\d+)',
-r'data-page-brand=\s*["\']\s*(\d+)'), webpage, 'brand',
-default='004')
-site_info = next(
-si for _, si in self._SITE_INFO.items()
-if si.get('brand') == brand)
-if not video_id:
-# show extraction works for Disney, DisneyJunior and DisneyXD
-# ABC and Freeform has different layout
-show_id = self._search_regex(r'data-show-id=["\']*(SH\d+)', webpage, 'show id')
-videos = self._extract_videos(brand, show_id=show_id)
-show_title = self._search_regex(r'data-show-title="([^"]+)"', webpage, 'show title', fatal=False)
-entries = []
-for video in videos:
-entries.append(self.url_result(
-video['url'], 'Go', video.get('id'), video.get('title')))
-entries.reverse()
-return self.playlist_result(entries, show_id, show_title)
+video_id = self._search_regex([
+# data-track-video_id="VDKA39492078"
+# data-track-video_id_code="vdka39492078"
+# data-video-id="'VDKA3609139'"
+r'data-(?:track-)?video[_-]id(?:_code)?=["\']*((?:vdka|VDKA)\d+)',
+# page.analytics.videoIdCode
+r'\bvideoIdCode["\']\s*:\s*["\']((?:vdka|VDKA)\d+)'], webpage, 'video ID')
+site_info = self._SITE_INFO[site]
+brand = site_info['brand']
video_data = self._extract_videos(brand, video_id)[0]
video_id = video_data['id']
title = video_data['title']
@@ -238,26 +205,31 @@ class GoIE(AdobePassIE):
if ext == 'm3u8':
video_type = video_data.get('type')
data = {
-'video_id': video_data['id'],
+'video_id': video_id,
'video_type': video_type,
'brand': brand,
'device': '001',
+'app_name': 'webplayer-abc',
}
if video_data.get('accesslevel') == '1':
-requestor_id = site_info.get('requestor_id', 'DisneyChannels')
+provider_id = site_info['provider_id']
+software_statement = traverse_obj(data, ('app', 'config', (
+('features', 'auth', 'softwareStatement'),
+('tvAuth', 'SOFTWARE_STATEMENTS', 'PRODUCTION'),
+), {str}, any)) or site_info['software_statement']
resource = site_info.get('resource_id') or self._get_mvpd_resource(
-requestor_id, title, video_id, None)
+provider_id, title, video_id, None)
auth = self._extract_mvpd_auth(
-url, video_id, requestor_id, resource)
+url, video_id, site_info['requestor_id'], resource, software_statement)
data.update({
'token': auth,
'token_type': 'ap',
-'adobe_requestor_id': requestor_id,
+'adobe_requestor_id': provider_id,
})
else:
self._initialize_geo_bypass({'countries': ['US']})
entitlement = self._download_json(
-'https://api.entitlement.watchabc.go.com/vp2/ws-secure/entitlement/2020/authorize.json',
+'https://prod.gatekeeper.us-abc.symphony.edgedatg.go.com/vp2/ws-secure/entitlement/2020/playmanifest_secure.json',
video_id, data=urlencode_postdata(data))
errors = entitlement.get('errors', {}).get('errors', [])
if errors:
@@ -267,7 +239,7 @@ class GoIE(AdobePassIE):
error['message'], countries=['US'])
error_message = ', '.join([error['message'] for error in errors])
raise ExtractorError(f'{self.IE_NAME} said: {error_message}', expected=True)
-asset_url += '?' + entitlement['uplynkData']['sessionKey']
+asset_url += '?' + entitlement['entitlement']['uplynkData']['sessionKey']
fmts, subs = self._extract_m3u8_formats_and_subtitles(
asset_url, video_id, 'mp4', m3u8_id=format_id or 'hls', fatal=False)
formats.extend(fmts)
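The new `_extract_global_var` helper assumes the page assigns its config to a bracket-indexed global. A hedged sketch of the markup it would match and the two-step lookup built on it (the variable name and contents are invented for illustration):

# Hypothetical page source the regex targets:
#   <script>window["__CONFIG__"] = {"globalVar": "__abc_com__"};</script>
#   <script>window["__abc_com__"] = {"page": {"content": {"video": {"layout": {"videoid": "VDKA12345"}}}}};</script>
config = self._extract_global_var('__CONFIG__', webpage, display_id)       # -> {'globalVar': '__abc_com__'}
data = self._extract_global_var(config['globalVar'], webpage, display_id)  # -> the page data object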


@@ -1,32 +1,66 @@
from .common import InfoExtractor
-from ..utils import js_to_json, traverse_obj
+from ..utils import (
+ExtractorError,
+clean_html,
+url_or_none,
+)
+from ..utils.traversal import subs_list_to_dict, traverse_obj
class MonsterSirenHypergryphMusicIE(InfoExtractor):
+IE_NAME = 'monstersiren'
+IE_DESC = '塞壬唱片'
+_API_BASE = 'https://monster-siren.hypergryph.com/api'
_VALID_URL = r'https?://monster-siren\.hypergryph\.com/music/(?P<id>\d+)'
_TESTS = [{
'url': 'https://monster-siren.hypergryph.com/music/514562',
'info_dict': {
'id': '514562',
'ext': 'wav',
-'artists': ['塞壬唱片-MSR'],
-'album': 'Flame Shadow',
'title': 'Flame Shadow',
+'album': 'Flame Shadow',
+'artists': ['塞壬唱片-MSR'],
+'description': 'md5:19e2acfcd1b65b41b29e8079ab948053',
+'thumbnail': r're:https?://web\.hycdn\.cn/siren/pic/.+\.jpg',
+},
+}, {
+'url': 'https://monster-siren.hypergryph.com/music/514518',
+'info_dict': {
+'id': '514518',
+'ext': 'wav',
+'title': 'Heavenly Me (Instrumental)',
+'album': 'Heavenly Me',
+'artists': ['塞壬唱片-MSR', 'AIYUE blessed : 理名'],
+'description': 'md5:ce790b41c932d1ad72eb791d1d8ae598',
+'thumbnail': r're:https?://web\.hycdn\.cn/siren/pic/.+\.jpg',
},
}]
def _real_extract(self, url):
audio_id = self._match_id(url)
-webpage = self._download_webpage(url, audio_id)
-json_data = self._search_json(
-r'window\.g_initialProps\s*=', webpage, 'data', audio_id, transform_source=js_to_json)
+song = self._download_json(f'{self._API_BASE}/song/{audio_id}', audio_id)
+if traverse_obj(song, 'code') != 0:
+msg = traverse_obj(song, ('msg', {str}, filter))
+raise ExtractorError(
+msg or 'API returned an error response', expected=bool(msg))
+album = None
+if album_id := traverse_obj(song, ('data', 'albumCid', {str})):
+album = self._download_json(
+f'{self._API_BASE}/album/{album_id}/detail', album_id, fatal=False)
return {
'id': audio_id,
-'title': traverse_obj(json_data, ('player', 'songDetail', 'name')),
-'url': traverse_obj(json_data, ('player', 'songDetail', 'sourceUrl')),
-'ext': 'wav',
'vcodec': 'none',
-'artists': traverse_obj(json_data, ('player', 'songDetail', 'artists', ...)),
-'album': traverse_obj(json_data, ('musicPlay', 'albumDetail', 'name')),
+**traverse_obj(song, ('data', {
+'title': ('name', {str}),
+'artists': ('artists', ..., {str}),
+'subtitles': ({'url': 'lyricUrl'}, all, {subs_list_to_dict(lang='en')}),
+'url': ('sourceUrl', {url_or_none}),
+})),
+**traverse_obj(album, ('data', {
+'album': ('name', {str}),
+'description': ('intro', {clean_html}),
+'thumbnail': ('coverUrl', {url_or_none}),
+})),
}
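For reference, the traversals above imply roughly the following shape for the song endpoint's response; only the key names are taken from the code, and all values are placeholders:

# Assumed response of GET https://monster-siren.hypergryph.com/api/song/<id>
song = {
    'code': 0,   # non-zero is treated as an error, reported via 'msg'
    'msg': '',
    'data': {
        'name': 'Flame Shadow',
        'artists': ['塞壬唱片-MSR'],
        'albumCid': '0000',          # if present, drives the /album/<cid>/detail request
        'sourceUrl': 'https://web.hycdn.cn/siren/audio/example.wav',
        'lyricUrl': 'https://web.hycdn.cn/siren/lyric/example.lrc',
    },
}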


@@ -1,3 +1,4 @@
+import json
import re
import time
@@ -6,9 +7,7 @@ from ..utils import (
ExtractorError,
determine_ext,
js_to_json,
-parse_qs,
traverse_obj,
-urlencode_postdata,
)
@@ -16,7 +15,6 @@ class IPrimaIE(InfoExtractor):
_VALID_URL = r'https?://(?!cnn)(?:[^/]+)\.iprima\.cz/(?:[^/]+/)*(?P<id>[^/?#&]+)'
_GEO_BYPASS = False
_NETRC_MACHINE = 'iprima'
-_AUTH_ROOT = 'https://auth.iprima.cz'
access_token = None
_TESTS = [{
@@ -86,48 +84,18 @@ class IPrimaIE(InfoExtractor):
if self.access_token:
return
-login_page = self._download_webpage(
-f'{self._AUTH_ROOT}/oauth2/login', None, note='Downloading login page',
-errnote='Downloading login page failed')
-login_form = self._hidden_inputs(login_page)
-login_form.update({
-'_email': username,
-'_password': password})
-profile_select_html, login_handle = self._download_webpage_handle(
-f'{self._AUTH_ROOT}/oauth2/login', None, data=urlencode_postdata(login_form),
-note='Logging in')
-# a profile may need to be selected first, even when there is only a single one
-if '/profile-select' in login_handle.url:
-profile_id = self._search_regex(
-r'data-identifier\s*=\s*["\']?(\w+)', profile_select_html, 'profile id')
-login_handle = self._request_webpage(
-f'{self._AUTH_ROOT}/user/profile-select-perform/{profile_id}', None,
-query={'continueUrl': '/user/login?redirect_uri=/user/'}, note='Selecting profile')
-code = traverse_obj(login_handle.url, ({parse_qs}, 'code', 0))
-if not code:
-raise ExtractorError('Login failed', expected=True)
-token_request_data = {
-'scope': 'openid+email+profile+phone+address+offline_access',
-'client_id': 'prima_sso',
-'grant_type': 'authorization_code',
-'code': code,
-'redirect_uri': f'{self._AUTH_ROOT}/sso/auth-check'}
token_data = self._download_json(
-f'{self._AUTH_ROOT}/oauth2/token', None,
-note='Downloading token', errnote='Downloading token failed',
-data=urlencode_postdata(token_request_data))
+'https://ucet.iprima.cz/api/session/create', None,
+note='Logging in', errnote='Failed to log in',
+data=json.dumps({
+'email': username,
+'password': password,
+'deviceName': 'Windows Chrome',
+}).encode(), headers={'content-type': 'application/json'})
-self.access_token = token_data.get('access_token')
-if self.access_token is None:
-raise ExtractorError('Getting token failed', expected=True)
+self.access_token = token_data['accessToken']['value']
+if not self.access_token:
+raise ExtractorError('Failed to fetch access token')
def _real_initialize(self):
if not self.access_token:
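The replacement login flow is a single JSON POST. A standalone sketch of the equivalent request, with the endpoint and payload mirroring the code above and the response shape inferred from the `accessToken.value` access (credentials are placeholders):

import json
import urllib.request

req = urllib.request.Request(
    'https://ucet.iprima.cz/api/session/create',
    data=json.dumps({
        'email': 'user@example.com',           # placeholder credentials
        'password': 'correct horse battery staple',
        'deviceName': 'Windows Chrome',
    }).encode(),
    headers={'content-type': 'application/json'})
with urllib.request.urlopen(req) as resp:
    access_token = json.load(resp)['accessToken']['value']  # shape inferred from the diff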


@@ -1,23 +1,33 @@
import functools
+import itertools
import math
import re
from .common import InfoExtractor
from ..utils import (
InAdvancePagedList,
+ISO639Utils,
+OnDemandPagedList,
clean_html,
int_or_none,
+js_to_json,
make_archive_id,
+orderedSet,
smuggle_url,
+unified_strdate,
+unified_timestamp,
unsmuggle_url,
url_basename,
url_or_none,
urlencode_postdata,
+urljoin,
+variadic,
)
from ..utils.traversal import traverse_obj
class JioSaavnBaseIE(InfoExtractor):
+_URL_BASE_RE = r'https?://(?:www\.)?(?:jio)?saavn\.com'
_API_URL = 'https://www.jiosaavn.com/api.php'
_VALID_BITRATES = {'16', '32', '64', '128', '320'}
@@ -30,16 +40,20 @@ class JioSaavnBaseIE(InfoExtractor):
f'Valid bitrates are: {", ".join(sorted(self._VALID_BITRATES, key=int))}')
return requested_bitrates
-def _extract_formats(self, song_data):
+def _extract_formats(self, item_data):
+# Show/episode JSON data has a slightly different structure than song JSON data
+if media_url := traverse_obj(item_data, ('more_info', 'encrypted_media_url', {str})):
+item_data.setdefault('encrypted_media_url', media_url)
for bitrate in self.requested_bitrates:
media_data = self._download_json(
-self._API_URL, song_data['id'],
+self._API_URL, item_data['id'],
f'Downloading format info for {bitrate}',
fatal=False, data=urlencode_postdata({
'__call': 'song.generateAuthToken',
'_format': 'json',
'bitrate': bitrate,
-'url': song_data['encrypted_media_url'],
+'url': item_data['encrypted_media_url'],
}))
if not traverse_obj(media_data, ('auth_url', {url_or_none})):
self.report_warning(f'Unable to extract format info for {bitrate}')
@@ -53,24 +67,6 @@ class JioSaavnBaseIE(InfoExtractor):
'vcodec': 'none',
}
-def _extract_song(self, song_data, url=None):
-info = traverse_obj(song_data, {
-'id': ('id', {str}),
-'title': ('song', {clean_html}),
-'album': ('album', {clean_html}),
-'thumbnail': ('image', {url_or_none}, {lambda x: re.sub(r'-\d+x\d+\.', '-500x500.', x)}),
-'duration': ('duration', {int_or_none}),
-'view_count': ('play_count', {int_or_none}),
-'release_year': ('year', {int_or_none}),
-'artists': ('primary_artists', {lambda x: x.split(', ') if x else None}),
-'webpage_url': ('perma_url', {url_or_none}),
-})
-if webpage_url := info.get('webpage_url') or url:
-info['display_id'] = url_basename(webpage_url)
-info['_old_archive_ids'] = [make_archive_id(JioSaavnSongIE, info['display_id'])]
-return info
def _call_api(self, type_, token, note='API', params={}):
return self._download_json(
self._API_URL, token, f'Downloading {note} JSON', f'Unable to download {note} JSON',
@ -84,19 +80,89 @@ class JioSaavnBaseIE(InfoExtractor):
**params, **params,
}) })
def _yield_songs(self, playlist_data): @staticmethod
for song_data in traverse_obj(playlist_data, ('songs', lambda _, v: v['id'] and v['perma_url'])): def _extract_song(song_data, url=None):
song_info = self._extract_song(song_data) info = traverse_obj(song_data, {
url = smuggle_url(song_info['webpage_url'], { 'id': ('id', {str}),
'id': song_data['id'], 'title': (('song', 'title'), {clean_html}, any),
'encrypted_media_url': song_data['encrypted_media_url'], 'album': ((None, 'more_info'), 'album', {clean_html}, any),
'duration': ((None, 'more_info'), 'duration', {int_or_none}, any),
'channel': ((None, 'more_info'), 'label', {str}, any),
'channel_id': ((None, 'more_info'), 'label_id', {str}, any),
'channel_url': ((None, 'more_info'), 'label_url', {urljoin('https://www.jiosaavn.com/')}, any),
'release_date': ((None, 'more_info'), 'release_date', {unified_strdate}, any),
'release_year': ('year', {int_or_none}),
'thumbnail': ('image', {url_or_none}, {lambda x: re.sub(r'-\d+x\d+\.', '-500x500.', x)}),
'view_count': ('play_count', {int_or_none}),
'language': ('language', {lambda x: ISO639Utils.short2long(x.casefold()) or 'und'}),
'webpage_url': ('perma_url', {url_or_none}),
'artists': ('more_info', 'artistMap', 'primary_artists', ..., 'name', {str}, filter, all),
}) })
yield self.url_result(url, JioSaavnSongIE, url_transparent=True, **song_info) if webpage_url := info.get('webpage_url') or url:
info['display_id'] = url_basename(webpage_url)
info['_old_archive_ids'] = [make_archive_id(JioSaavnSongIE, info['display_id'])]
if primary_artists := traverse_obj(song_data, ('primary_artists', {lambda x: x.split(', ') if x else None})):
info['artists'].extend(primary_artists)
if featured_artists := traverse_obj(song_data, ('featured_artists', {str}, filter)):
info['artists'].extend(featured_artists.split(', '))
info['artists'] = orderedSet(info['artists']) or None
return info
@staticmethod
def _extract_episode(episode_data, url=None):
info = JioSaavnBaseIE._extract_song(episode_data, url)
info.pop('_old_archive_ids', None)
info.update(traverse_obj(episode_data, {
'description': ('more_info', 'description', {str}),
'timestamp': ('more_info', 'release_time', {unified_timestamp}),
'series': ('more_info', 'show_title', {str}),
'series_id': ('more_info', 'show_id', {str}),
'season': ('more_info', 'season_title', {str}),
'season_number': ('more_info', 'season_no', {int_or_none}),
'season_id': ('more_info', 'season_id', {str}),
'episode_number': ('more_info', 'episode_number', {int_or_none}),
'cast': ('starring', {lambda x: x.split(', ') if x else None}),
}))
return info
def _extract_jiosaavn_result(self, url, endpoint, response_key, parse_func):
url, smuggled_data = unsmuggle_url(url)
data = traverse_obj(smuggled_data, ({
'id': ('id', {str}),
'encrypted_media_url': ('encrypted_media_url', {str}),
}))
if 'id' in data and 'encrypted_media_url' in data:
result = {'id': data['id']}
else:
# only extract metadata if this is not a url_transparent result
data = self._call_api(endpoint, self._match_id(url))[response_key][0]
result = parse_func(data, url)
result['formats'] = list(self._extract_formats(data))
return result
def _yield_items(self, playlist_data, keys=None, parse_func=None):
"""Subclasses using this method must set _ENTRY_IE"""
if parse_func is None:
parse_func = self._extract_song
for item_data in traverse_obj(playlist_data, (
*variadic(keys, (str, bytes, dict, set)), lambda _, v: v['id'] and v['perma_url'],
)):
info = parse_func(item_data)
url = smuggle_url(info['webpage_url'], traverse_obj(item_data, {
'id': ('id', {str}),
'encrypted_media_url': ((None, 'more_info'), 'encrypted_media_url', {str}, any),
}))
yield self.url_result(url, self._ENTRY_IE, url_transparent=True, **info)
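
A minimal sketch of the smuggle/unsmuggle round-trip that carries `id` and `encrypted_media_url` into the url_transparent results above (toy values; both helpers are from yt_dlp.utils):

from yt_dlp.utils import smuggle_url, unsmuggle_url

smuggled = smuggle_url('https://www.jiosaavn.com/song/leja-re/OQsEfQFVUXk',
                       {'id': 'abc123', 'encrypted_media_url': 'opaque-token'})
# The payload rides along in a '#__youtubedl_smuggle=...' URL fragment
url, data = unsmuggle_url(smuggled)
assert url == 'https://www.jiosaavn.com/song/leja-re/OQsEfQFVUXk'
assert data == {'id': 'abc123', 'encrypted_media_url': 'opaque-token'}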
class JioSaavnSongIE(JioSaavnBaseIE):
IE_NAME = 'jiosaavn:song'
-_VALID_URL = r'https?://(?:www\.)?(?:jiosaavn\.com/song/[^/?#]+/|saavn\.com/s/song/(?:[^/?#]+/){3})(?P<id>[^/?#]+)'
_VALID_URL = JioSaavnBaseIE._URL_BASE_RE + r'(?:/song/[^/?#]+/|/s/song/(?:[^/?#]+/){3})(?P<id>[^/?#]+)'
_TESTS = [{
'url': 'https://www.jiosaavn.com/song/leja-re/OQsEfQFVUXk',
'md5': '3b84396d15ed9e083c3106f1fa589c04',
@@ -106,12 +172,38 @@ class JioSaavnSongIE(JioSaavnBaseIE):
'ext': 'm4a',
'title': 'Leja Re',
'album': 'Leja Re',
-'thumbnail': r're:https?://c.saavncdn.com/258/Leja-Re-Hindi-2018-20181124024539-500x500.jpg',
'thumbnail': r're:https?://.+/.+\.jpg',
'duration': 205,
'view_count': int,
'release_year': 2018,
'artists': ['Sandesh Shandilya', 'Dhvani Bhanushali', 'Tanishk Bagchi'],
'_old_archive_ids': ['jiosaavnsong OQsEfQFVUXk'],
'channel': 'T-Series',
'language': 'hin',
'channel_id': '34297',
'channel_url': 'https://www.jiosaavn.com/label/t-series-albums/6DLuXO3VoTo_',
'release_date': '20181124',
},
}, {
'url': 'https://www.jiosaavn.com/song/chuttamalle/P1FfWjZkQ0Q',
'md5': '96296c58d6ce488a417ef0728fd2d680',
'info_dict': {
'id': 'O94kBTtw',
'display_id': 'P1FfWjZkQ0Q',
'ext': 'm4a',
'title': 'Chuttamalle',
'album': 'Devara Part 1 - Telugu',
'thumbnail': r're:https?://.+/.+\.jpg',
'duration': 222,
'view_count': int,
'release_year': 2024,
'artists': 'count:3',
'_old_archive_ids': ['jiosaavnsong P1FfWjZkQ0Q'],
'channel': 'T-Series',
'language': 'tel',
'channel_id': '34297',
'channel_url': 'https://www.jiosaavn.com/label/t-series-albums/6DLuXO3VoTo_',
'release_date': '20240926',
},
}, {
'url': 'https://www.saavn.com/s/song/hindi/Saathiya/O-Humdum-Suniyo-Re/KAMiazoCblU',
@@ -119,26 +211,51 @@ class JioSaavnSongIE(JioSaavnBaseIE):
}]

def _real_extract(self, url):
-url, smuggled_data = unsmuggle_url(url)
-song_data = traverse_obj(smuggled_data, ({
-'id': ('id', {str}),
-'encrypted_media_url': ('encrypted_media_url', {str}),
-}))
-if 'id' in song_data and 'encrypted_media_url' in song_data:
-result = {'id': song_data['id']}
-else:
-# only extract metadata if this is not a url_transparent result
-song_data = self._call_api('song', self._match_id(url))['songs'][0]
-result = self._extract_song(song_data, url)
-result['formats'] = list(self._extract_formats(song_data))
-return result
return self._extract_jiosaavn_result(url, 'song', 'songs', self._extract_song)


class JioSaavnShowIE(JioSaavnBaseIE):
IE_NAME = 'jiosaavn:show'
_VALID_URL = JioSaavnBaseIE._URL_BASE_RE + r'/shows/[^/?#]+/(?P<id>[^/?#]{11,})/?(?:$|[?#])'
_TESTS = [{
'url': 'https://www.jiosaavn.com/shows/non-food-ways-to-boost-your-energy/XFMcKICOCgc_',
'md5': '0733cd254cfe74ef88bea1eaedcf1f4f',
'info_dict': {
'id': 'qqzh3RKZ',
'display_id': 'XFMcKICOCgc_',
'ext': 'mp3',
'title': 'Non-Food Ways To Boost Your Energy',
'description': 'md5:26e7129644b5c6aada32b8851c3997c8',
'episode': 'Episode 1',
'timestamp': 1640563200,
'series': 'Holistic Lifestyle With Neha Ranglani',
'series_id': '52397',
'season': 'Holistic Lifestyle With Neha Ranglani',
'season_number': 1,
'season_id': '61273',
'thumbnail': r're:https?://.+/.+\.jpg',
'duration': 311,
'view_count': int,
'release_year': 2021,
'language': 'eng',
'channel': 'Saavn OG',
'channel_id': '1953876',
'episode_number': 1,
'upload_date': '20211227',
'release_date': '20211227',
},
}, {
'url': 'https://www.jiosaavn.com/shows/himesh-reshammiya/Kr8fmfSN4vo_',
'only_matching': True,
}]
def _real_extract(self, url):
return self._extract_jiosaavn_result(url, 'episode', 'episodes', self._extract_episode)
class JioSaavnAlbumIE(JioSaavnBaseIE):
IE_NAME = 'jiosaavn:album'
-_VALID_URL = r'https?://(?:www\.)?(?:jio)?saavn\.com/album/[^/?#]+/(?P<id>[^/?#]+)'
_VALID_URL = JioSaavnBaseIE._URL_BASE_RE + r'/album/[^/?#]+/(?P<id>[^/?#]+)'
_TESTS = [{
'url': 'https://www.jiosaavn.com/album/96/buIOjYZDrNA_',
'info_dict': {
@@ -147,18 +264,19 @@ class JioSaavnAlbumIE(JioSaavnBaseIE):
},
'playlist_count': 10,
}]
_ENTRY_IE = JioSaavnSongIE

def _real_extract(self, url):
display_id = self._match_id(url)
album_data = self._call_api('album', display_id)
return self.playlist_result(
-self._yield_songs(album_data), display_id, traverse_obj(album_data, ('title', {str})))
self._yield_items(album_data, 'songs'), display_id, traverse_obj(album_data, ('title', {str})))
class JioSaavnPlaylistIE(JioSaavnBaseIE):
IE_NAME = 'jiosaavn:playlist'
-_VALID_URL = r'https?://(?:www\.)?(?:jio)?saavn\.com/(?:s/playlist/(?:[^/?#]+/){2}|featured/[^/?#]+/)(?P<id>[^/?#]+)'
_VALID_URL = JioSaavnBaseIE._URL_BASE_RE + r'/(?:s/playlist/(?:[^/?#]+/){2}|featured/[^/?#]+/)(?P<id>[^/?#]+)'
_TESTS = [{
'url': 'https://www.jiosaavn.com/s/playlist/2279fbe391defa793ad7076929a2f5c9/mood-english/LlJ8ZWT1ibN5084vKHRj2Q__',
'info_dict': {
@@ -172,15 +290,16 @@ class JioSaavnPlaylistIE(JioSaavnBaseIE):
'id': 'DVR,pFUOwyXqIp77B1JF,A__',
'title': 'Mood Hindi',
},
-'playlist_mincount': 801,
'playlist_mincount': 750,
}, {
'url': 'https://www.jiosaavn.com/featured/taaza-tunes/Me5RridRfDk_',
'info_dict': {
'id': 'Me5RridRfDk_',
'title': 'Taaza Tunes',
},
-'playlist_mincount': 301,
'playlist_mincount': 50,
}]
_ENTRY_IE = JioSaavnSongIE
_PAGE_SIZE = 50

def _fetch_page(self, token, page):
@@ -189,7 +308,7 @@ class JioSaavnPlaylistIE(JioSaavnBaseIE):
def _entries(self, token, first_page_data, page):
page_data = first_page_data if not page else self._fetch_page(token, page + 1)
-yield from self._yield_songs(page_data)
yield from self._yield_items(page_data, 'songs')

def _real_extract(self, url):
display_id = self._match_id(url)
@@ -199,3 +318,95 @@
return self.playlist_result(InAdvancePagedList(
functools.partial(self._entries, display_id, playlist_data),
total_pages, self._PAGE_SIZE), display_id, traverse_obj(playlist_data, ('listname', {str})))
class JioSaavnShowPlaylistIE(JioSaavnBaseIE):
IE_NAME = 'jiosaavn:show:playlist'
_VALID_URL = JioSaavnBaseIE._URL_BASE_RE + r'/shows/(?P<show>[^#/?]+)/(?P<season>\d+)/[^/?#]+'
_TESTS = [{
'url': 'https://www.jiosaavn.com/shows/talking-music/1/PjReFP-Sguk_',
'info_dict': {
'id': 'talking-music-1',
'title': 'Talking Music',
},
'playlist_mincount': 11,
}]
_ENTRY_IE = JioSaavnShowIE
_PAGE_SIZE = 10
def _fetch_page(self, show_id, season_id, page):
return self._call_api('show', show_id, f'show page {page}', {
'p': page,
'__call': 'show.getAllEpisodes',
'show_id': show_id,
'season_number': season_id,
'api_version': '4',
'sort_order': 'desc',
})
def _entries(self, show_id, season_id, page):
page_data = self._fetch_page(show_id, season_id, page + 1)
yield from self._yield_items(page_data, keys=None, parse_func=self._extract_episode)
def _real_extract(self, url):
show_slug, season_id = self._match_valid_url(url).group('show', 'season')
playlist_id = f'{show_slug}-{season_id}'
webpage = self._download_webpage(url, playlist_id)
show_info = self._search_json(
r'window\.__INITIAL_DATA__\s*=', webpage, 'initial data',
playlist_id, transform_source=js_to_json)['showView']
show_id = show_info['current_id']
entries = OnDemandPagedList(functools.partial(self._entries, show_id, season_id), self._PAGE_SIZE)
return self.playlist_result(
entries, playlist_id, traverse_obj(show_info, ('show', 'title', 'text', {str})))
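
A hedged sketch of the OnDemandPagedList pattern used above: the page callback only runs for pages that are actually sliced (toy fetcher; real page data comes from `_fetch_page`):

import functools
from yt_dlp.utils import OnDemandPagedList

def _entries(page_size, page):  # zero-based page index
    return (f'episode {page * page_size + n}' for n in range(page_size))

entries = OnDemandPagedList(functools.partial(_entries, 10), 10)
print(entries.getslice(0, 3))  # only page 0 is ever fetched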
class JioSaavnArtistIE(JioSaavnBaseIE):
IE_NAME = 'jiosaavn:artist'
_VALID_URL = JioSaavnBaseIE._URL_BASE_RE + r'/artist/[^/?#]+/(?P<id>[^/?#]+)'
_TESTS = [{
'url': 'https://www.jiosaavn.com/artist/krsna-songs/rYLBEve2z3U_',
'info_dict': {
'id': 'rYLBEve2z3U_',
'title': 'KR$NA',
},
'playlist_mincount': 38,
}, {
'url': 'https://www.jiosaavn.com/artist/sanam-puri-songs/SkNEv3qRhDE_',
'info_dict': {
'id': 'SkNEv3qRhDE_',
'title': 'Sanam Puri',
},
'playlist_mincount': 51,
}]
_ENTRY_IE = JioSaavnSongIE
_PAGE_SIZE = 50
def _fetch_page(self, artist_id, page):
return self._call_api('artist', artist_id, f'artist page {page + 1}', {
'p': page,
'n_song': self._PAGE_SIZE,
'n_album': self._PAGE_SIZE,
'sub_type': '',
'includeMetaTags': '',
'api_version': '4',
'category': 'alphabetical',
'sort_order': 'asc',
})
def _entries(self, artist_id, first_page):
for page in itertools.count():
playlist_data = first_page if not page else self._fetch_page(artist_id, page)
if not traverse_obj(playlist_data, ('topSongs', ..., {dict})):
break
yield from self._yield_items(playlist_data, 'topSongs')
def _real_extract(self, url):
artist_id = self._match_id(url)
first_page = self._fetch_page(artist_id, 0)
return self.playlist_result(
self._entries(artist_id, first_page), artist_id,
traverse_obj(first_page, ('name', {str})))
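
The artist pager's stop condition, reduced to a skeleton (hypothetical `fetch_page`; the real loop keys on `topSongs` exactly like `_entries` above):

import itertools

def paged_items(fetch_page, key='topSongs'):
    for page in itertools.count():
        page_data = fetch_page(page)
        items = page_data.get(key) or []
        if not items:  # an empty page marks the end of the catalogue
            return
        yield from items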

View File

@@ -2,7 +2,6 @@ from .common import InfoExtractor
from ..utils import (
clean_html,
merge_dicts,
-str_or_none,
traverse_obj,
unified_timestamp,
url_or_none,
@@ -138,13 +137,15 @@ class LRTRadioIE(LRTBaseIE):
'https://www.lrt.lt/radioteka/api/media', video_id,
query={'url': f'/mediateka/irasas/{video_id}/{path}'})

-return traverse_obj(media, {
-'id': ('id', {int}, {str_or_none}),
return {
'id': video_id,
'formats': self._extract_m3u8_formats(media['playlist_item']['file'], video_id),
**traverse_obj(media, {
'title': ('title', {str}),
'tags': ('tags', ..., 'name', {str}),
'categories': ('playlist_item', 'category', {str}, filter, all, filter),
'description': ('content', {clean_html}, {str}),
'timestamp': ('date', {lambda x: x.replace('.', '/')}, {unified_timestamp}),
'thumbnail': ('playlist_item', 'image', {urljoin('https://www.lrt.lt')}),
-'formats': ('playlist_item', 'file', {lambda x: self._extract_m3u8_formats(x, video_id)}),
}),
}

View File

@@ -1,7 +1,5 @@
from .telecinco import TelecincoBaseIE
-from ..networking.exceptions import HTTPError
from ..utils import (
-ExtractorError,
int_or_none,
parse_iso8601,
)
@@ -81,17 +79,7 @@ class MiTeleIE(TelecincoBaseIE):
def _real_extract(self, url):
display_id = self._match_id(url)
webpage = self._download_akamai_webpage(url, display_id)
-try:  # yt-dlp's default user-agents are too old and blocked by akamai
-webpage = self._download_webpage(url, display_id, headers={
-'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; rv:136.0) Gecko/20100101 Firefox/136.0',
-})
-except ExtractorError as e:
-if not isinstance(e.cause, HTTPError) or e.cause.status != 403:
-raise
-# Retry with impersonation if hardcoded UA is insufficient to bypass akamai
-webpage = self._download_webpage(url, display_id, impersonate=True)
pre_player = self._search_json(
r'window\.\$REACTBASE_STATE\.prePlayer_mtweb\s*=',
webpage, 'Pre Player', display_id)['prePlayer']

View File

@@ -19,7 +19,8 @@ from ..utils import (
class NBACVPBaseIE(TurnerBaseIE):
def _extract_nba_cvp_info(self, path, video_id, fatal=False):
return self._extract_cvp_info(
-f'http://secure.nba.com/{path}', video_id, {
# XXX: The 3rd argument (None) needs to be the AdobePass software_statement
f'http://secure.nba.com/{path}', video_id, None, {
'default': {
'media_src': 'http://nba.cdn.turner.com/nba/big',
},
@@ -94,6 +95,7 @@ class NBAWatchBaseIE(NBACVPBaseIE):
class NBAWatchEmbedIE(NBAWatchBaseIE):
_WORKING = False
IE_NAME = 'nba:watch:embed'
_VALID_URL = NBAWatchBaseIE._VALID_URL_BASE + r'embed\?.*?\bid=(?P<id>\d+)'
_TESTS = [{
@@ -115,6 +117,7 @@ class NBAWatchEmbedIE(NBAWatchBaseIE):
class NBAWatchIE(NBAWatchBaseIE):
_WORKING = False
IE_NAME = 'nba:watch'
_VALID_URL = NBAWatchBaseIE._VALID_URL_BASE + r'(?:nba/)?video/(?P<id>.+?(?=/index\.html)|(?:[^/]+/)*[^/?#&]+)'
_TESTS = [{
@@ -167,6 +170,7 @@ class NBAWatchIE(NBAWatchBaseIE):
class NBAWatchCollectionIE(NBAWatchBaseIE):
_WORKING = False
IE_NAME = 'nba:watch:collection'
_VALID_URL = NBAWatchBaseIE._VALID_URL_BASE + r'list/collection/(?P<id>[^/?#&]+)'
_TESTS = [{
@@ -336,6 +340,7 @@ class NBABaseIE(NBACVPBaseIE):
class NBAEmbedIE(NBABaseIE):
_WORKING = False
IE_NAME = 'nba:embed'
_VALID_URL = r'https?://secure\.nba\.com/assets/amp/include/video/(?:topI|i)frame\.html\?.*?\bcontentId=(?P<id>[^?#&]+)'
_TESTS = [{
@@ -358,6 +363,7 @@ class NBAEmbedIE(NBABaseIE):
class NBAIE(NBABaseIE):
_WORKING = False
IE_NAME = 'nba'
_VALID_URL = NBABaseIE._VALID_URL_BASE + f'(?!{NBABaseIE._CHANNEL_PATH_REGEX})video/(?P<id>(?:[^/]+/)*[^/?#&]+)'
_TESTS = [{
@@ -385,6 +391,7 @@ class NBAIE(NBABaseIE):
class NBAChannelIE(NBABaseIE):
_WORKING = False
IE_NAME = 'nba:channel'
_VALID_URL = NBABaseIE._VALID_URL_BASE + f'(?:{NBABaseIE._CHANNEL_PATH_REGEX})/(?P<id>[^/?#&]+)'
_TESTS = [{

View File

@@ -6,7 +6,7 @@ import xml.etree.ElementTree
from .adobepass import AdobePassIE
from .common import InfoExtractor
-from .theplatform import ThePlatformIE, default_ns
from .theplatform import ThePlatformBaseIE, ThePlatformIE, default_ns
from ..networking import HEADRequest
from ..utils import (
ExtractorError,
@@ -14,26 +14,130 @@ from ..utils import (
UserNotLive,
clean_html,
determine_ext,
extract_attributes,
float_or_none,
get_element_html_by_class,
int_or_none,
join_nonempty,
make_archive_id,
mimetype2ext,
parse_age_limit,
parse_duration,
parse_iso8601,
remove_end,
-smuggle_url,
-traverse_obj,
try_get,
unescapeHTML,
unified_timestamp,
update_url_query,
url_basename,
url_or_none,
)
from ..utils.traversal import require, traverse_obj


-class NBCIE(ThePlatformIE):  # XXX: Do not subclass from concrete IE
-_VALID_URL = r'https?(?P<permalink>://(?:www\.)?nbc\.com/(?:classic-tv/)?[^/]+/video/[^/]+/(?P<id>(?:NBCE|n)?\d+))'
class NBCUniversalBaseIE(ThePlatformBaseIE):
_GEO_COUNTRIES = ['US']
_GEO_BYPASS = False
_M3U8_RE = r'https?://[^/?#]+/prod/[\w-]+/(?P<folders>[^?#]+/)cmaf/mpeg_(?:cbcs|cenc)\w*/master_cmaf\w*\.m3u8'
def _download_nbcu_smil_and_extract_m3u8_url(self, tp_path, video_id, query):
smil = self._download_xml(
f'https://link.theplatform.com/s/{tp_path}', video_id,
'Downloading SMIL manifest', 'Failed to download SMIL manifest', query={
**query,
'format': 'SMIL', # XXX: Do not confuse "format" with "formats"
'manifest': 'm3u',
'switch': 'HLSServiceSecure', # Or else we get broken mp4 http URLs instead of HLS
}, headers=self.geo_verification_headers())
ns = f'//{{{default_ns}}}'
if url := traverse_obj(smil, (f'{ns}video/@src', lambda _, v: determine_ext(v) == 'm3u8', any)):
return url
exc = traverse_obj(smil, (f'{ns}param', lambda _, v: v.get('name') == 'exception', '@value', any))
if exc == 'GeoLocationBlocked':
self.raise_geo_restricted(countries=self._GEO_COUNTRIES)
raise ExtractorError(traverse_obj(smil, (f'{ns}ref/@abstract', ..., any)), expected=exc == 'Expired')
def _extract_nbcu_formats_and_subtitles(self, tp_path, video_id, query):
# formats='mpeg4' will return either a working m3u8 URL or an m3u8 template for non-DRM HLS
# formats='m3u+none,mpeg4' may return DRM HLS but w/the "folders" needed for non-DRM template
query['formats'] = 'm3u+none,mpeg4'
m3u8_url = self._download_nbcu_smil_and_extract_m3u8_url(tp_path, video_id, query)
if mobj := re.fullmatch(self._M3U8_RE, m3u8_url):
query['formats'] = 'mpeg4'
m3u8_tmpl = self._download_nbcu_smil_and_extract_m3u8_url(tp_path, video_id, query)
# Example: https://vod-lf-oneapp-prd.akamaized.net/prod/video/{folders}master_hls.m3u8
if '{folders}' in m3u8_tmpl:
self.write_debug('Found m3u8 URL template, formatting URL path')
m3u8_url = m3u8_tmpl.format(folders=mobj.group('folders'))
if '/mpeg_cenc' in m3u8_url or '/mpeg_cbcs' in m3u8_url:
self.report_drm(video_id)
return self._extract_m3u8_formats_and_subtitles(m3u8_url, video_id, 'mp4', m3u8_id='hls')
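
The `{folders}` substitution above, in isolation — `_M3U8_RE` captures the path segments between /prod/ and /cmaf/, which are spliced into the non-DRM template (hostnames and paths here are made up):

import re

M3U8_RE = r'https?://[^/?#]+/prod/[\w-]+/(?P<folders>[^?#]+/)cmaf/mpeg_(?:cbcs|cenc)\w*/master_cmaf\w*\.m3u8'

drm_url = 'https://host.example/prod/video/123/abc/def/cmaf/mpeg_cenc/master_cmaf.m3u8'
template = 'https://vod.example/prod/video/{folders}master_hls.m3u8'

if mobj := re.fullmatch(M3U8_RE, drm_url):
    # Splice the DRM URL's folder path into the non-DRM HLS template
    print(template.format(folders=mobj.group('folders')))
    # -> https://vod.example/prod/video/123/abc/def/master_hls.m3u8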
def _extract_nbcu_video(self, url, display_id, old_ie_key=None):
webpage = self._download_webpage(url, display_id)
settings = self._search_json(
r'<script[^>]+data-drupal-selector="drupal-settings-json"[^>]*>',
webpage, 'settings', display_id)
query = {}
tve = extract_attributes(get_element_html_by_class('tve-video-deck-app', webpage) or '')
if tve:
account_pid = tve.get('data-mpx-media-account-pid') or tve['data-mpx-account-pid']
account_id = tve['data-mpx-media-account-id']
metadata = self._parse_json(
tve.get('data-normalized-video') or '', display_id, fatal=False, transform_source=unescapeHTML)
video_id = tve.get('data-guid') or metadata['guid']
if tve.get('data-entitlement') == 'auth':
auth = settings['tve_adobe_auth']
release_pid = tve['data-release-pid']
resource = self._get_mvpd_resource(
tve.get('data-adobe-pass-resource-id') or auth['adobePassResourceId'],
tve['data-title'], release_pid, tve.get('data-rating'))
query['auth'] = self._extract_mvpd_auth(
url, release_pid, auth['adobePassRequestorId'],
resource, auth['adobePassSoftwareStatement'])
else:
ls_playlist = traverse_obj(settings, (
'ls_playlist', lambda _, v: v['defaultGuid'], any, {require('LS playlist')}))
video_id = ls_playlist['defaultGuid']
account_pid = ls_playlist.get('mpxMediaAccountPid') or ls_playlist['mpxAccountPid']
account_id = ls_playlist['mpxMediaAccountId']
metadata = traverse_obj(ls_playlist, ('videos', lambda _, v: v['guid'] == video_id, any)) or {}
tp_path = f'{account_pid}/media/guid/{account_id}/{video_id}'
formats, subtitles = self._extract_nbcu_formats_and_subtitles(tp_path, video_id, query)
tp_metadata = self._download_theplatform_metadata(tp_path, video_id, fatal=False)
parsed_info = self._parse_theplatform_metadata(tp_metadata)
self._merge_subtitles(parsed_info['subtitles'], target=subtitles)
return {
**parsed_info,
**traverse_obj(metadata, {
'title': ('title', {str}),
'description': ('description', {str}),
'duration': ('durationInSeconds', {int_or_none}),
'timestamp': ('airDate', {parse_iso8601}),
'thumbnail': ('thumbnailUrl', {url_or_none}),
'season_number': ('seasonNumber', {int_or_none}),
'episode_number': ('episodeNumber', {int_or_none}),
'episode': ('episodeTitle', {str}),
'series': ('show', {str}),
}),
'id': video_id,
'display_id': display_id,
'formats': formats,
'subtitles': subtitles,
'_old_archive_ids': [make_archive_id(old_ie_key, video_id)] if old_ie_key else None,
}
class NBCIE(NBCUniversalBaseIE):
_VALID_URL = r'https?(?P<permalink>://(?:www\.)?nbc\.com/(?:classic-tv/)?[^/?#]+/video/[^/?#]+/(?P<id>\w+))'
_TESTS = [
{
'url': 'http://www.nbc.com/the-tonight-show/video/jimmy-fallon-surprises-fans-at-ben-jerrys/2848237',
@@ -49,47 +153,20 @@ class NBCIE(ThePlatformIE):  # XXX: Do not subclass from concrete IE
'episode_number': 86,
'season': 'Season 2',
'season_number': 2,
-'series': 'Tonight Show: Jimmy Fallon',
'series': 'Tonight',
-'duration': 237.0,
'duration': 236.504,
-'chapters': 'count:1',
-'tags': 'count:4',
'tags': 'count:2',
'thumbnail': r're:https?://.+\.jpg',
'categories': ['Series/The Tonight Show Starring Jimmy Fallon'],
'media_type': 'Full Episode',
'age_limit': 14,
'_old_archive_ids': ['theplatform 2848237'],
},
'params': {
'skip_download': 'm3u8',
},
},
{
-'url': 'http://www.nbc.com/saturday-night-live/video/star-wars-teaser/2832821',
-'info_dict': {
-'id': '2832821',
-'ext': 'mp4',
-'title': 'Star Wars Teaser',
-'description': 'md5:0b40f9cbde5b671a7ff62fceccc4f442',
-'timestamp': 1417852800,
-'upload_date': '20141206',
-'uploader': 'NBCU-COM',
-},
-'skip': 'page not found',
-},
-{
-# HLS streams requires the 'hdnea3' cookie
-'url': 'http://www.nbc.com/Kings/video/goliath/n1806',
-'info_dict': {
-'id': '101528f5a9e8127b107e98c5e6ce4638',
-'ext': 'mp4',
-'title': 'Goliath',
-'description': 'When an unknown soldier saves the life of the King\'s son in battle, he\'s thrust into the limelight and politics of the kingdom.',
-'timestamp': 1237100400,
-'upload_date': '20090315',
-'uploader': 'NBCU-COM',
-},
-'skip': 'page not found',
-},
-{
-# manifest url does not have extension
'url': 'https://www.nbc.com/the-golden-globe-awards/video/oprah-winfrey-receives-cecil-b-de-mille-award-at-the-2018-golden-globes/3646439',
'info_dict': {
'id': '3646439',
@@ -99,48 +176,47 @@ class NBCIE(ThePlatformIE):  # XXX: Do not subclass from concrete IE
'episode_number': 1,
'season': 'Season 75',
'season_number': 75,
-'series': 'The Golden Globe Awards',
'series': 'Golden Globes',
'description': 'Oprah Winfrey receives the Cecil B. de Mille Award at the 75th Annual Golden Globe Awards.',
'uploader': 'NBCU-COM',
'upload_date': '20180107',
'timestamp': 1515312000,
-'duration': 570.0,
'duration': 569.703,
'tags': 'count:8',
'thumbnail': r're:https?://.+\.jpg',
-'chapters': 'count:1',
'media_type': 'Highlight',
'age_limit': 0,
'categories': ['Series/The Golden Globe Awards'],
'_old_archive_ids': ['theplatform 3646439'],
},
'params': {
'skip_download': 'm3u8',
},
},
{
-# new video_id format
-'url': 'https://www.nbc.com/quantum-leap/video/bens-first-leap-nbcs-quantum-leap/NBCE125189978',
# Needs to be extracted from webpage instead of GraphQL
'url': 'https://www.nbc.com/paris2024/video/ali-truwit-found-purpose-pool-after-her-life-changed/para24_sww_alitruwittodayshow_240823',
'info_dict': {
-'id': 'NBCE125189978',
'id': 'para24_sww_alitruwittodayshow_240823',
'ext': 'mp4',
-'title': 'Ben\'s First Leap | NBC\'s Quantum Leap',
-'description': 'md5:a82762449b7ec4bb83291a7b355ebf8e',
-'uploader': 'NBCU-COM',
-'series': 'Quantum Leap',
-'season': 'Season 1',
-'season_number': 1,
-'episode': 'Ben\'s First Leap | NBC\'s Quantum Leap',
-'episode_number': 1,
-'duration': 170.171,
-'chapters': [],
-'timestamp': 1663956155,
-'upload_date': '20220923',
-'tags': 'count:10',
-'age_limit': 0,
'title': 'Ali Truwit found purpose in the pool after her life changed',
'description': 'md5:c16d7489e1516593de1cc5d3f39b9bdb',
'uploader': 'NBCU-SPORTS',
'duration': 311.077,
'thumbnail': r're:https?://.+\.jpg',
-'categories': ['Series/Quantum Leap 2022'],
-'media_type': 'Highlight',
'episode': 'Ali Truwit found purpose in the pool after her life changed',
'timestamp': 1724435902.0,
'upload_date': '20240823',
'_old_archive_ids': ['theplatform para24_sww_alitruwittodayshow_240823'],
},
'params': {
'skip_download': 'm3u8',
},
},
{
'url': 'https://www.nbc.com/quantum-leap/video/bens-first-leap-nbcs-quantum-leap/NBCE125189978',
'only_matching': True,
},
{
'url': 'https://www.nbc.com/classic-tv/charles-in-charge/video/charles-in-charge-pilot/n3310',
'only_matching': True,
@@ -151,6 +227,7 @@ class NBCIE(ThePlatformIE):  # XXX: Do not subclass from concrete IE
'only_matching': True,
},
]
_SOFTWARE_STATEMENT = 'eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiI1Yzg2YjdkYy04NDI3LTRjNDUtOGQwZi1iNDkzYmE3MmQwYjQiLCJuYmYiOjE1Nzg3MDM2MzEsImlzcyI6ImF1dGguYWRvYmUuY29tIiwiaWF0IjoxNTc4NzAzNjMxfQ.QQKIsBhAjGQTMdAqRTqhcz2Cddr4Y2hEjnSiOeKKki4nLrkDOsjQMmqeTR0hSRarraxH54wBgLvsxI7LHwKMvr7G8QpynNAxylHlQD3yhN9tFhxt4KR5wW3as02B-W2TznK9bhNWPKIyHND95Uo2Mi6rEQoq8tM9O09WPWaanE5BX_-r6Llr6dPq5F0Lpx2QOn2xYRb1T4nFxdFTNoss8GBds8OvChTiKpXMLHegLTc1OS4H_1a8tO_37jDwSdJuZ8iTyRLV4kZ2cpL6OL5JPMObD4-HQiec_dfcYgMKPiIfP9ZqdXpec2SVaCLsWEk86ZYvD97hLIQrK5rrKd1y-A'
def _real_extract(self, url):
permalink, video_id = self._match_valid_url(url).groups()
@@ -196,62 +273,50 @@ class NBCIE(ThePlatformIE):  # XXX: Do not subclass from concrete IE
'userId': '0',
}),
})['data']['bonanzaPage']['metadata']
-query = {
-'mbr': 'true',
-'manifest': 'm3u',
-'switch': 'HLSServiceSecure',
-}

if not video_data:
# Some videos are not available via GraphQL API
webpage = self._download_webpage(url, video_id)
video_data = self._search_json(
r'<script>\s*PRELOAD\s*=', webpage, 'video data',
video_id)['pages'][urllib.parse.urlparse(url).path]['base']['metadata']

video_id = video_data['mpxGuid']
-tp_path = 'NnzsPC/media/guid/{}/{}'.format(video_data.get('mpxAccountId') or '2410887629', video_id)
-tpm = self._download_theplatform_metadata(tp_path, video_id)
-title = tpm.get('title') or video_data.get('secondaryTitle')
tp_path = f'NnzsPC/media/guid/{video_data["mpxAccountId"]}/{video_id}'
tpm = self._download_theplatform_metadata(tp_path, video_id, fatal=False)
title = traverse_obj(tpm, ('title', {str})) or video_data.get('secondaryTitle')
query = {}
if video_data.get('locked'):
resource = self._get_mvpd_resource(
-video_data.get('resourceId') or 'nbcentertainment',
-title, video_id, video_data.get('rating'))
video_data['resourceId'], title, video_id, video_data.get('rating'))
query['auth'] = self._extract_mvpd_auth(
-url, video_id, 'nbcentertainment', resource)
url, video_id, 'nbcentertainment', resource, self._SOFTWARE_STATEMENT)
-theplatform_url = smuggle_url(update_url_query(
-'http://link.theplatform.com/s/NnzsPC/media/guid/{}/{}'.format(video_data.get('mpxAccountId') or '2410887629', video_id),
-query), {'force_smil_url': True})

-# Empty string or 0 can be valid values for these. So the check must be `is None`
-description = video_data.get('description')
-if description is None:
-description = tpm.get('description')
-episode_number = int_or_none(video_data.get('episodeNumber'))
-if episode_number is None:
-episode_number = int_or_none(tpm.get('nbcu$airOrder'))
-rating = video_data.get('rating')
-if rating is None:
-try_get(tpm, lambda x: x['ratings'][0]['rating'])
-season_number = int_or_none(video_data.get('seasonNumber'))
-if season_number is None:
-season_number = int_or_none(tpm.get('nbcu$seasonNumber'))
-series = video_data.get('seriesShortTitle')
-if series is None:
-series = tpm.get('nbcu$seriesShortTitle')
-tags = video_data.get('keywords')
-if tags is None or len(tags) == 0:
-tags = tpm.get('keywords')
formats, subtitles = self._extract_nbcu_formats_and_subtitles(tp_path, video_id, query)
parsed_info = self._parse_theplatform_metadata(tpm)
self._merge_subtitles(parsed_info['subtitles'], target=subtitles)

return {
-'_type': 'url_transparent',
-'age_limit': parse_age_limit(rating),
-'description': description,
-'episode': title,
-'episode_number': episode_number,
-'ie_key': 'ThePlatform',
-'season_number': season_number,
-'series': series,
-'tags': tags,
-'url': theplatform_url,
**traverse_obj(video_data, {
'description': ('description', {str}, filter),
'episode': ('secondaryTitle', {str}, filter),
'episode_number': ('episodeNumber', {int_or_none}),
'season_number': ('seasonNumber', {int_or_none}),
'age_limit': ('rating', {parse_age_limit}),
'tags': ('keywords', ..., {str}, filter, all, filter),
'series': ('seriesShortTitle', {str}),
}),
**parsed_info,
'id': video_id,
'title': title,
'formats': formats,
'subtitles': subtitles,
'_old_archive_ids': [make_archive_id('ThePlatform', video_id)],
}
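
A toy PRELOAD object illustrating the webpage fallback above — the metadata is keyed by the request URL's path (all values here are made up):

import urllib.parse

preload = {'pages': {'/paris2024/video/some-slug': {
    'base': {'metadata': {'mpxGuid': '12345', 'mpxAccountId': '67890'}}}}}

path = urllib.parse.urlparse('https://www.nbc.com/paris2024/video/some-slug').path
print(preload['pages'][path]['base']['metadata']['mpxGuid'])  # -> 12345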
class NBCSportsVPlayerIE(InfoExtractor):
_WORKING = False
_VALID_URL_BASE = r'https?://(?:vplayer\.nbcsports\.com|(?:www\.)?nbcsports\.com/vplayer)/'
_VALID_URL = _VALID_URL_BASE + r'(?:[^/]+/)+(?P<id>[0-9a-zA-Z_]+)'
_EMBED_REGEX = [rf'(?:iframe[^>]+|var video|div[^>]+data-(?:mpx-)?)[sS]rc\s?=\s?"(?P<url>{_VALID_URL_BASE}[^\"]+)']
@@ -286,6 +351,7 @@ class NBCSportsVPlayerIE(InfoExtractor):
class NBCSportsIE(InfoExtractor):
_WORKING = False
_VALID_URL = r'https?://(?:www\.)?nbcsports\.com//?(?!vplayer/)(?:[^/]+/)+(?P<id>[0-9a-z-]+)'
_TESTS = [{
@@ -321,6 +387,7 @@ class NBCSportsIE(InfoExtractor):
class NBCSportsStreamIE(AdobePassIE):
_WORKING = False
_VALID_URL = r'https?://stream\.nbcsports\.com/.+?\bpid=(?P<id>\d+)'
_TEST = {
'url': 'http://stream.nbcsports.com/nbcsn/generic?pid=206559',
@@ -354,7 +421,7 @@ class NBCSportsStreamIE(AdobePassIE):
source_url = video_source['ottStreamUrl']
is_live = video_source.get('type') == 'live' or video_source.get('status') == 'Live'
resource = self._get_mvpd_resource('nbcsports', title, video_id, '')
-token = self._extract_mvpd_auth(url, video_id, 'nbcsports', resource)
token = self._extract_mvpd_auth(url, video_id, 'nbcsports', resource, None)  # XXX: None arg needs to be software_statement
tokenized_url = self._download_json(
'https://token.playmakerservices.com/cdn',
video_id, data=json.dumps({
@@ -534,22 +601,26 @@ class NBCOlympicsIE(InfoExtractor):
IE_NAME = 'nbcolympics'
_VALID_URL = r'https?://www\.nbcolympics\.com/videos?/(?P<id>[0-9a-z-]+)'
-_TEST = {
_TESTS = [{
# Geo-restricted to US
-'url': 'http://www.nbcolympics.com/video/justin-roses-son-leo-was-tears-after-his-dad-won-gold',
-'md5': '54fecf846d05429fbaa18af557ee523a',
'url': 'https://www.nbcolympics.com/videos/watch-final-minutes-team-usas-mens-basketball-gold',
'info_dict': {
-'id': 'WjTBzDXx5AUq',
-'display_id': 'justin-roses-son-leo-was-tears-after-his-dad-won-gold',
'id': 'SAwGfPlQ1q01',
'ext': 'mp4',
-'title': 'Rose\'s son Leo was in tears after his dad won gold',
-'description': 'Olympic gold medalist Justin Rose gets emotional talking to the impact his win in men\'s golf has already had on his children.',
-'timestamp': 1471274964,
-'upload_date': '20160815',
'display_id': 'watch-final-minutes-team-usas-mens-basketball-gold',
'title': 'Watch the final minutes of Team USA\'s men\'s basketball gold',
'description': 'md5:f704f591217305c9559b23b877aa8d31',
'uploader': 'NBCU-SPORTS',
'duration': 387.053,
'thumbnail': r're:https://.+/.+\.jpg',
'chapters': [],
'timestamp': 1723346984,
'upload_date': '20240811',
},
-'skip': '404 Not Found',
-}
}, {
'url': 'http://www.nbcolympics.com/video/justin-roses-son-leo-was-tears-after-his-dad-won-gold',
'only_matching': True,
}]
def _real_extract(self, url):
display_id = self._match_id(url)
@@ -578,6 +649,7 @@ class NBCOlympicsIE(InfoExtractor):
class NBCOlympicsStreamIE(AdobePassIE):
_WORKING = False
IE_NAME = 'nbcolympics:stream'
_VALID_URL = r'https?://stream\.nbcolympics\.com/(?P<id>[0-9a-z-]+)'
_TESTS = [
@@ -630,7 +702,8 @@ class NBCOlympicsStreamIE(AdobePassIE):
event_config.get('resourceId', 'NBCOlympics'),
re.sub(r'[^\w\d ]+', '', event_config['eventTitle']), pid,
event_config.get('ratingId', 'NO VALUE'))
-media_token = self._extract_mvpd_auth(url, pid, event_config.get('requestorId', 'NBCOlympics'), ap_resource)
# XXX: The None arg below needs to be the software_statement for this requestor
media_token = self._extract_mvpd_auth(url, pid, event_config.get('requestorId', 'NBCOlympics'), ap_resource, None)
source_url = self._download_json(
'https://tokens.playmakerservices.com/', pid, 'Retrieving tokenized URL',
@@ -848,3 +921,178 @@ class NBCStationsIE(InfoExtractor):
'is_live': is_live,
**info,
}
class BravoTVIE(NBCUniversalBaseIE):
_VALID_URL = r'https?://(?:www\.)?(?:bravotv|oxygen)\.com/(?:[^/?#]+/)+(?P<id>[^/?#]+)'
_TESTS = [{
'url': 'https://www.bravotv.com/top-chef/season-16/episode-15/videos/the-top-chef-season-16-winner-is',
'info_dict': {
'id': '3923059',
'ext': 'mp4',
'title': 'The Top Chef Season 16 Winner Is...',
'display_id': 'the-top-chef-season-16-winner-is',
'description': 'Find out who takes the title of Top Chef!',
'upload_date': '20190315',
'timestamp': 1552618860,
'season_number': 16,
'episode_number': 15,
'series': 'Top Chef',
'episode': 'Finale',
'duration': 190,
'season': 'Season 16',
'thumbnail': r're:^https://.+\.jpg',
'uploader': 'NBCU-BRAV',
'categories': ['Series', 'Series/Top Chef'],
'tags': 'count:10',
},
'params': {'skip_download': 'm3u8'},
}, {
'url': 'https://www.bravotv.com/top-chef/season-20/episode-1/london-calling',
'info_dict': {
'id': '9000234570',
'ext': 'mp4',
'title': 'London Calling',
'display_id': 'london-calling',
'description': 'md5:5af95a8cbac1856bd10e7562f86bb759',
'upload_date': '20230310',
'timestamp': 1678418100,
'season_number': 20,
'episode_number': 1,
'series': 'Top Chef',
'episode': 'London Calling',
'duration': 3266,
'season': 'Season 20',
'chapters': 'count:7',
'thumbnail': r're:^https://.+\.jpg',
'age_limit': 14,
'media_type': 'Full Episode',
'uploader': 'NBCU-MPAT',
'categories': ['Series/Top Chef'],
'tags': 'count:10',
},
'params': {'skip_download': 'm3u8'},
'skip': 'This video requires AdobePass MSO credentials',
}, {
'url': 'https://www.oxygen.com/in-ice-cold-blood/season-1/closing-night',
'info_dict': {
'id': '3692045',
'ext': 'mp4',
'title': 'Closing Night',
'display_id': 'closing-night',
'description': 'md5:c8a5bb523c8ef381f3328c6d9f1e4632',
'upload_date': '20230126',
'timestamp': 1674709200,
'season_number': 1,
'episode_number': 1,
'series': 'In Ice Cold Blood',
'episode': 'Closing Night',
'duration': 2629,
'season': 'Season 1',
'chapters': 'count:6',
'thumbnail': r're:^https://.+\.jpg',
'age_limit': 14,
'media_type': 'Full Episode',
'uploader': 'NBCU-MPAT',
'categories': ['Series/In Ice Cold Blood'],
'tags': ['ice-t', 'in ice cold blood', 'law and order', 'oxygen', 'true crime'],
},
'params': {'skip_download': 'm3u8'},
'skip': 'This video requires AdobePass MSO credentials',
}, {
'url': 'https://www.oxygen.com/in-ice-cold-blood/season-2/episode-16/videos/handling-the-horwitz-house-after-the-murder-season-2',
'info_dict': {
'id': '3974019',
'ext': 'mp4',
'title': '\'Handling The Horwitz House After The Murder (Season 2, Episode 16)',
'display_id': 'handling-the-horwitz-house-after-the-murder-season-2',
'description': 'md5:f9d638dd6946a1c1c0533a9c6100eae5',
'upload_date': '20190618',
'timestamp': 1560819600,
'season_number': 2,
'episode_number': 16,
'series': 'In Ice Cold Blood',
'episode': 'Mother Vs Son',
'duration': 68,
'season': 'Season 2',
'thumbnail': r're:^https://.+\.jpg',
'age_limit': 14,
'uploader': 'NBCU-OXY',
'categories': ['Series/In Ice Cold Blood'],
'tags': ['in ice cold blood', 'ice-t', 'law and order', 'true crime', 'oxygen'],
},
'params': {'skip_download': 'm3u8'},
}, {
'url': 'https://www.bravotv.com/below-deck/season-3/ep-14-reunion-part-1',
'only_matching': True,
}]
def _real_extract(self, url):
display_id = self._match_id(url)
return self._extract_nbcu_video(url, display_id)
class SyfyIE(NBCUniversalBaseIE):
_VALID_URL = r'https?://(?:www\.)?syfy\.com/[^/?#]+/(?:season-\d+/episode-\d+/(?:videos/)?|videos/)(?P<id>[^/?#]+)'
_TESTS = [{
'url': 'https://www.syfy.com/face-off/season-13/episode-10/videos/keyed-up',
'info_dict': {
'id': '3774403',
'ext': 'mp4',
'display_id': 'keyed-up',
'title': 'Keyed Up',
'description': 'md5:feafd15bee449f212dcd3065bbe9a755',
'age_limit': 14,
'duration': 169,
'thumbnail': r're:https://www\.syfy\.com/.+/.+\.jpg',
'series': 'Face Off',
'season': 'Season 13',
'season_number': 13,
'episode': 'Through the Looking Glass Part 2',
'episode_number': 10,
'timestamp': 1533711618,
'upload_date': '20180808',
'media_type': 'Excerpt',
'uploader': 'NBCU-MPAT',
'categories': ['Series/Face Off'],
'tags': 'count:15',
'_old_archive_ids': ['theplatform 3774403'],
},
'params': {'skip_download': 'm3u8'},
}, {
'url': 'https://www.syfy.com/face-off/season-13/episode-10/through-the-looking-glass-part-2',
'info_dict': {
'id': '3772391',
'ext': 'mp4',
'display_id': 'through-the-looking-glass-part-2',
'title': 'Through the Looking Glass Pt.2',
'description': 'md5:90bd5dcbf1059fe3296c263599af41d2',
'age_limit': 0,
'duration': 2599,
'thumbnail': r're:https://www\.syfy\.com/.+/.+\.jpg',
'chapters': [{'start_time': 0.0, 'end_time': 679.0, 'title': '<Untitled Chapter 1>'},
{'start_time': 679.0, 'end_time': 1040.967, 'title': '<Untitled Chapter 2>'},
{'start_time': 1040.967, 'end_time': 1403.0, 'title': '<Untitled Chapter 3>'},
{'start_time': 1403.0, 'end_time': 1870.0, 'title': '<Untitled Chapter 4>'},
{'start_time': 1870.0, 'end_time': 2496.967, 'title': '<Untitled Chapter 5>'},
{'start_time': 2496.967, 'end_time': 2599, 'title': '<Untitled Chapter 6>'}],
'series': 'Face Off',
'season': 'Season 13',
'season_number': 13,
'episode': 'Through the Looking Glass Part 2',
'episode_number': 10,
'timestamp': 1672570800,
'upload_date': '20230101',
'media_type': 'Full Episode',
'uploader': 'NBCU-MPAT',
'categories': ['Series/Face Off'],
'tags': 'count:15',
'_old_archive_ids': ['theplatform 3772391'],
},
'params': {'skip_download': 'm3u8'},
'skip': 'This video requires AdobePass MSO credentials',
}]
def _real_extract(self, url):
display_id = self._match_id(url)
return self._extract_nbcu_video(url, display_id, old_ie_key='ThePlatform')

View File

@@ -3,6 +3,7 @@ import json
from .art19 import Art19IE
from .common import InfoExtractor
from ..networking import PATCHRequest
from ..networking.exceptions import HTTPError
from ..utils import (
ExtractorError,
@@ -74,7 +75,7 @@ class NebulaBaseIE(InfoExtractor):
'app_version': '23.10.0',
'platform': 'ios',
})
-return {'formats': fmts, 'subtitles': subs}
break
except ExtractorError as e:
if isinstance(e.cause, HTTPError) and e.cause.status == 401:
self.raise_login_required()
@@ -84,6 +85,9 @@ class NebulaBaseIE(InfoExtractor):
continue
raise
self.mark_watched(content_id, slug)
return {'formats': fmts, 'subtitles': subs}

def _extract_video_metadata(self, episode):
channel_url = traverse_obj(
episode, (('channel_slug', 'class_slug'), {urljoin('https://nebula.tv/')}), get_all=False)
@@ -111,6 +115,13 @@ class NebulaBaseIE(InfoExtractor):
'uploader_url': channel_url,
}

def _mark_watched(self, content_id, slug):
self._call_api(
PATCHRequest(f'https://content.api.nebula.app/{content_id.split(":")[0]}s/{content_id}/progress/'),
slug, 'Marking watched', 'Unable to mark watched', fatal=False,
data=json.dumps({'completed': True}).encode(),
headers={'content-type': 'application/json'})


class NebulaIE(NebulaBaseIE):
IE_NAME = 'nebula:video'
@@ -322,6 +333,7 @@ class NebulaClassIE(NebulaBaseIE):
if not episode_url and metadata.get('premium'):
self.raise_login_required()

self.mark_watched(metadata['id'], slug)
if Art19IE.suitable(episode_url):
return self.url_result(episode_url, Art19IE)

return traverse_obj(metadata, {

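How the progress endpoint above is derived from a content id — the prefix before ':' pluralizes into the collection name (the id value here is hypothetical):

content_id = 'video_episode:abc123'  # hypothetical id
endpoint = f'https://content.api.nebula.app/{content_id.split(":")[0]}s/{content_id}/progress/'
print(endpoint)
# -> https://content.api.nebula.app/video_episodes/video_episode:abc123/progress/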
View File

@@ -16,6 +16,7 @@ from ..utils import (
determine_ext,
float_or_none,
int_or_none,
parse_bitrate,
parse_duration,
parse_iso8601,
parse_qs,
@@ -23,7 +24,6 @@ from ..utils import (
qualities,
remove_start,
str_or_none,
-try_get,
unescapeHTML,
unified_timestamp,
update_url_query,
@@ -32,7 +32,7 @@ from ..utils import (
urlencode_postdata,
urljoin,
)
-from ..utils.traversal import find_element, traverse_obj
from ..utils.traversal import find_element, require, traverse_obj


class NiconicoBaseIE(InfoExtractor):
@@ -283,35 +283,54 @@ class NiconicoIE(NiconicoBaseIE):
lambda _, v: v['id'] == video_fmt['format_id'], 'qualityLevel', {int_or_none}, any)) or -1
yield video_fmt

def _extract_server_response(self, webpage, video_id, fatal=True):
try:
return traverse_obj(
self._parse_json(self._html_search_meta('server-response', webpage) or '', video_id),
('data', 'response', {dict}, {require('server response')}))
except ExtractorError:
if not fatal:
return {}
raise

def _real_extract(self, url):
video_id = self._match_id(url)

try:
webpage, handle = self._download_webpage_handle(
-'https://www.nicovideo.jp/watch/' + video_id, video_id)
f'https://www.nicovideo.jp/watch/{video_id}', video_id,
headers=self.geo_verification_headers())
if video_id.startswith('so'):
video_id = self._match_id(handle.url)

-api_data = traverse_obj(
-self._parse_json(self._html_search_meta('server-response', webpage) or '', video_id),
-('data', 'response', {dict}))
-if not api_data:
-raise ExtractorError('Server response data not found')
api_data = self._extract_server_response(webpage, video_id)
except ExtractorError as e:
try:
api_data = self._download_json(
-f'https://www.nicovideo.jp/api/watch/v3/{video_id}?_frontendId=6&_frontendVersion=0&actionTrackId=AAAAAAAAAA_{round(time.time() * 1000)}', video_id,
-note='Downloading API JSON', errnote='Unable to fetch data')['data']
f'https://www.nicovideo.jp/api/watch/v3/{video_id}', video_id,
'Downloading API JSON', 'Unable to fetch data', query={
'_frontendId': '6',
'_frontendVersion': '0',
'actionTrackId': f'AAAAAAAAAA_{round(time.time() * 1000)}',
}, headers=self.geo_verification_headers())['data']
except ExtractorError:
if not isinstance(e.cause, HTTPError):
# Raise if original exception was from _parse_json or utils.traversal.require
raise
# The webpage server response has more detailed error info than the API response
webpage = e.cause.response.read().decode('utf-8', 'replace')
-error_msg = self._html_search_regex(
-r'(?s)<section\s+class="(?:(?:ErrorMessage|WatchExceptionPage-message)\s*)+">(.+?)</section>',
-webpage, 'error reason', default=None)
-if not error_msg:
reason_code = self._extract_server_response(
webpage, video_id, fatal=False).get('reasonCode')
if not reason_code:
raise
-raise ExtractorError(clean_html(error_msg), expected=True)
if reason_code in ('DOMESTIC_VIDEO', 'HIGH_RISK_COUNTRY_VIDEO'):
self.raise_geo_restricted(countries=self._GEO_COUNTRIES)
elif reason_code == 'HIDDEN_VIDEO':
raise ExtractorError(
'The viewing period of this video has expired', expected=True)
elif reason_code == 'DELETED_VIDEO':
raise ExtractorError('This video has been deleted', expected=True)
raise ExtractorError(f'Niconico says: {reason_code}')

availability = self._availability(**(traverse_obj(api_data, ('payment', 'video', {
'needs_premium': ('isPremium', {bool}),
@@ -785,8 +804,6 @@ class NiconicoLiveIE(NiconicoBaseIE):
'only_matching': True,
}]

-_KNOWN_LATENCY = ('high', 'low')

def _real_extract(self, url):
video_id = self._match_id(url)
webpage, urlh = self._download_webpage_handle(f'https://live.nicovideo.jp/watch/{video_id}', video_id)
@@ -802,22 +819,19 @@ class NiconicoLiveIE(NiconicoBaseIE):
})

hostname = remove_start(urllib.parse.urlparse(urlh.url).hostname, 'sp.')
-latency = try_get(self._configuration_arg('latency'), lambda x: x[0])
-if latency not in self._KNOWN_LATENCY:
-latency = 'high'

ws = self._request_webpage(
Request(ws_url, headers={'Origin': f'https://{hostname}'}),
video_id=video_id, note='Connecting to WebSocket server')

-self.write_debug('[debug] Sending HLS server request')
self.write_debug('Sending HLS server request')
ws.send(json.dumps({
'type': 'startWatching',
'data': {
'stream': {
'quality': 'abr',
-'protocol': 'hls+fmp4',
'protocol': 'hls',
-'latency': latency,
'latency': 'high',
'accessRightMethod': 'single_cookie',
'chasePlay': False,
},
@@ -881,18 +895,29 @@ class NiconicoLiveIE(NiconicoBaseIE):
for cookie in cookies:
self._set_cookie(
cookie['domain'], cookie['name'], cookie['value'],
-expire_time=unified_timestamp(cookie['expires']), path=cookie['path'], secure=cookie['secure'])
expire_time=unified_timestamp(cookie.get('expires')), path=cookie['path'], secure=cookie['secure'])

fmt_common = {
'live_latency': 'high',
'origin': hostname,
'protocol': 'niconico_live',
'video_id': video_id,
'ws': ws,
}
q_iter = (q for q in qualities[1:] if not q.startswith('audio_'))  # ignore initial 'abr'
a_map = {96: 'audio_low', 192: 'audio_high'}

formats = self._extract_m3u8_formats(m3u8_url, video_id, ext='mp4', live=True)
-for fmt, q in zip(formats, reversed(qualities[1:])):
for fmt in formats:
if fmt.get('acodec') == 'none':
fmt['format_id'] = next(q_iter, fmt['format_id'])
elif fmt.get('vcodec') == 'none':
abr = parse_bitrate(fmt['url'].lower())
fmt.update({
-'format_id': q,
-'protocol': 'niconico_live',
-'ws': ws,
-'video_id': video_id,
-'live_latency': latency,
-'origin': hostname,
'abr': abr,
'format_id': a_map.get(abr, fmt['format_id']),
})
fmt.update(fmt_common)

return {
'id': video_id,

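The audio format_id mapping above in isolation: parse_bitrate picks the '<n>kbps' token out of the playlist URL, and a_map renames the known bitrates (URLs here are made up):

from yt_dlp.utils import parse_bitrate

a_map = {96: 'audio_low', 192: 'audio_high'}
for url in ('https://cdn.example/live/audio/96kbps/playlist.m3u8',
            'https://cdn.example/live/audio/192kbps/playlist.m3u8'):
    abr = parse_bitrate(url.lower())
    print(abr, a_map.get(abr, 'unknown'))  # 96 audio_low / 192 audio_high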
View File

@@ -1,59 +1,57 @@
from .common import InfoExtractor
from ..utils import (
-determine_ext,
-get_element_by_attribute,
UnsupportedError,
clean_html,
int_or_none,
-js_to_json,
-mimetype2ext,
-update_url_query,
parse_duration,
parse_qs,
str_or_none,
update_url,
)
from ..utils.traversal import find_element, traverse_obj


class NobelPrizeIE(InfoExtractor):
-_WORKING = False
-_VALID_URL = r'https?://(?:www\.)?nobelprize\.org/mediaplayer.*?\bid=(?P<id>\d+)'
-_TEST = {
-'url': 'http://www.nobelprize.org/mediaplayer/?id=2636',
-'md5': '04c81e5714bb36cc4e2232fee1d8157f',
_VALID_URL = r'https?://(?:(?:mediaplayer|www)\.)?nobelprize\.org/mediaplayer/'
_TESTS = [{
'url': 'https://www.nobelprize.org/mediaplayer/?id=2636',
'info_dict': {
'id': '2636',
'ext': 'mp4',
'title': 'Announcement of the 2016 Nobel Prize in Physics',
-'description': 'md5:05beba57f4f5a4bbd4cf2ef28fcff739',
'description': 'md5:1a2d8a6ca80c88fb3b9a326e0b0e8e43',
'duration': 1560.0,
'thumbnail': r're:https?://www\.nobelprize\.org/images/.+\.jpg',
'timestamp': 1504883793,
'upload_date': '20170908',
},
-}
}, {
'url': 'https://mediaplayer.nobelprize.org/mediaplayer/?qid=12693',
'info_dict': {
'id': '12693',
'ext': 'mp4',
'title': 'Nobel Lecture by Peter Higgs',
'description': 'md5:9b12e275dbe3a8138484e70e00673a05',
'duration': 1800.0,
'thumbnail': r're:https?://www\.nobelprize\.org/images/.+\.jpg',
'timestamp': 1504883793,
'upload_date': '20170908',
},
}]

def _real_extract(self, url):
-video_id = self._match_id(url)
-webpage = self._download_webpage(url, video_id)
-media = self._parse_json(self._search_regex(
-r'(?s)var\s*config\s*=\s*({.+?});', webpage,
-'config'), video_id, js_to_json)['media']
-title = media['title']
-formats = []
-for source in media.get('source', []):
-source_src = source.get('src')
-if not source_src:
-continue
-ext = mimetype2ext(source.get('type')) or determine_ext(source_src)
-if ext == 'm3u8':
-formats.extend(self._extract_m3u8_formats(
-source_src, video_id, 'mp4', 'm3u8_native',
-m3u8_id='hls', fatal=False))
-elif ext == 'f4m':
-formats.extend(self._extract_f4m_formats(
-update_url_query(source_src, {'hdcore': '3.7.0'}),
-video_id, f4m_id='hds', fatal=False))
-else:
-formats.append({
-'url': source_src,
-})
video_id = traverse_obj(parse_qs(url), (
('id', 'qid'), -1, {int_or_none}, {str_or_none}, any))
if not video_id:
raise UnsupportedError(url)
webpage = self._download_webpage(
update_url(url, netloc='mediaplayer.nobelprize.org'), video_id)

return {
**self._search_json_ld(webpage, video_id),
'id': video_id,
-'title': title,
-'description': get_element_by_attribute('itemprop', 'description', webpage),
-'duration': int_or_none(media.get('duration')),
-'formats': formats,
'title': self._html_search_meta('caption', webpage),
'description': traverse_obj(webpage, (
{find_element(tag='span', attr='itemprop', value='description')}, {clean_html})),
'duration': parse_duration(self._html_search_meta('duration', webpage)),
}

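The query-string ID extraction above, runnable on its own: either an 'id' or 'qid' parameter is accepted, and the int_or_none/str_or_none pair rejects non-numeric values:

from yt_dlp.utils import int_or_none, parse_qs, str_or_none
from yt_dlp.utils.traversal import traverse_obj

url = 'https://mediaplayer.nobelprize.org/mediaplayer/?qid=12693'
video_id = traverse_obj(parse_qs(url), (
    ('id', 'qid'), -1, {int_or_none}, {str_or_none}, any))
print(video_id)  # -> '12693'; a missing or non-numeric id yields None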
View File

@@ -1,55 +1,82 @@
-from .common import InfoExtractor
from .streaks import StreaksBaseIE
from ..utils import (
-ExtractorError,
-smuggle_url,
-traverse_obj,
int_or_none,
parse_iso8601,
str_or_none,
url_or_none,
)
from ..utils.traversal import require, traverse_obj


-class NTVCoJpCUIE(InfoExtractor):
class NTVCoJpCUIE(StreaksBaseIE):
IE_NAME = 'cu.ntv.co.jp'
-IE_DESC = 'Nippon Television Network'
-_VALID_URL = r'https?://cu\.ntv\.co\.jp/(?!program)(?P<id>[^/?&#]+)'
-_TEST = {
-'url': 'https://cu.ntv.co.jp/televiva-chill-gohan_181031/',
IE_DESC = '日テレ無料TADA!'
_VALID_URL = r'https?://cu\.ntv\.co\.jp/(?!program-list|search)(?P<id>[\w-]+)/?(?:[?#]|$)'
_TESTS = [{
'url': 'https://cu.ntv.co.jp/gaki_20250525/',
'info_dict': {
-'id': '5978891207001',
'id': 'gaki_20250525',
'ext': 'mp4',
-'title': '桜エビと炒り卵がポイント! 「中華風 エビチリおにぎり」──『美虎』五十嵐美幸',
-'upload_date': '20181213',
-'description': 'md5:1985b51a9abc285df0104d982a325f2a',
-'uploader_id': '3855502814001',
-'timestamp': 1544669941,
'title': '放送開始36年!方正ココリコが選ぶ神回&地獄回!',
'cast': 'count:2',
'description': 'md5:1e1db556224d627d4d2f74370c650927',
'display_id': 'ref:gaki_20250525',
'duration': 1450,
'episode': '放送開始36年!方正ココリコが選ぶ神回&地獄回!',
'episode_id': '000000010172808',
'episode_number': 255,
'genres': ['variety'],
'live_status': 'not_live',
'modified_date': '20250525',
'modified_timestamp': 1748145537,
'release_date': '20250525',
'release_timestamp': 1748145539,
'series': 'ダウンタウンのガキの使いやあらへんで!',
'series_id': 'gaki',
'thumbnail': r're:https?://.+\.jpg',
'timestamp': 1748145197,
'upload_date': '20250525',
'uploader': '日本テレビ放送網',
'uploader_id': '0x7FE2',
}, },
'params': { }]
# m3u8 download
'skip_download': True,
},
}
BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/default_default/index.html?videoId=%s'
def _real_extract(self, url): def _real_extract(self, url):
display_id = self._match_id(url) display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id) webpage = self._download_webpage(url, display_id)
player_config = self._search_nuxt_data(webpage, display_id)
video_id = traverse_obj(player_config, ('movie', 'video_id')) info = self._search_json(
if not video_id: r'window\.app\s*=', webpage, 'video info',
raise ExtractorError('Failed to extract video ID for Brightcove') display_id)['falcorCache']['catalog']['episode'][display_id]['value']
account_id = traverse_obj(player_config, ('player', 'account')) or '3855502814001' media_id = traverse_obj(info, (
title = traverse_obj(player_config, ('movie', 'name')) 'streaks_data', 'mediaid', {str_or_none}, {require('Streaks media ID')}))
if not title: non_phonetic = (lambda _, v: v['is_phonetic'] is False, 'value', {str})
og_title = self._og_search_title(webpage, fatal=False) or traverse_obj(player_config, ('player', 'title'))
if og_title:
title = og_title.split('(', 1)[0].strip()
description = (traverse_obj(player_config, ('movie', 'description'))
or self._html_search_meta(['description', 'og:description'], webpage))
return { return {
'_type': 'url_transparent', **self._extract_from_streaks_api('ntv-tada', media_id, headers={
'id': video_id, 'X-Streaks-Api-Key': 'df497719056b44059a0483b8faad1f4a',
'display_id': display_id, }),
'title': title, **traverse_obj(info, {
'description': description, 'id': ('content_id', {str_or_none}),
'url': smuggle_url(self.BRIGHTCOVE_URL_TEMPLATE % (account_id, video_id), {'geo_countries': ['JP']}), 'title': ('title', *non_phonetic, any),
'ie_key': 'BrightcoveNew', 'age_limit': ('is_adult_only_content', {lambda x: 18 if x else None}),
'cast': ('credit', ..., 'name', *non_phonetic),
'genres': ('genre', ..., {str}),
'release_timestamp': ('pub_date', {parse_iso8601}),
'tags': ('tags', ..., {str}),
'thumbnail': ('artwork', ..., 'url', any, {url_or_none}),
}),
**traverse_obj(info, ('tv_episode_info', {
'duration': ('duration', {int_or_none}),
'episode_number': ('episode_number', {int}),
'series': ('parent_show_title', *non_phonetic, any),
'series_id': ('show_content_id', {str}),
})),
**traverse_obj(info, ('custom_data', {
'description': ('program_detail', {str}),
'episode': ('episode_title', {str}),
'episode_id': ('episode_id', {str_or_none}),
'uploader': ('network_name', {str}),
'uploader_id': ('network_id', {str}),
})),
} }
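
For illustration only (not part of the patch): the `non_phonetic` tuple above is a reusable traversal branch that keeps only entries whose `is_phonetic` flag is exactly False, then extracts their 'value' string. A rough plain-Python equivalent, with invented sample data shaped like the site's name lists:

def first_non_phonetic(candidates):
    # keep entries explicitly flagged as non-phonetic, then take their 'value'
    return next(
        (c['value'] for c in candidates
         if c.get('is_phonetic') is False and isinstance(c.get('value'), str)),
        None)

title_candidates = [  # hypothetical shape of info['title']
    {'is_phonetic': True, 'value': 'ほうそうかいし36ねん...'},
    {'is_phonetic': False, 'value': '放送開始36年!方正ココリコが選ぶ神回&地獄回!'},
]
assert first_non_phonetic(title_candidates) == '放送開始36年!方正ココリコが選ぶ神回&地獄回!'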


@@ -181,6 +181,7 @@ class NYTimesArticleIE(NYTimesBaseIE):
             'thumbnail': r're:https?://\w+\.nyt.com/images/.*\.jpg',
             'duration': 119.0,
         },
+        'skip': 'HTTP Error 500: Internal Server Error',
     }, {
         # article with audio and no video
         'url': 'https://www.nytimes.com/2023/09/29/health/mosquitoes-genetic-engineering.html',
@@ -190,13 +191,14 @@ class NYTimesArticleIE(NYTimesBaseIE):
             'ext': 'mp3',
             'title': 'The Gamble: Can Genetically Modified Mosquitoes End Disease?',
             'description': 'md5:9ff8b47acbaf7f3ca8c732f5c815be2e',
-            'timestamp': 1695960700,
+            'timestamp': 1696008129,
             'upload_date': '20230929',
-            'creator': 'Stephanie Nolen, Natalija Gormalova',
+            'creators': ['Stephanie Nolen', 'Natalija Gormalova'],
             'thumbnail': r're:https?://\w+\.nyt.com/images/.*\.jpg',
             'duration': 1322,
         },
     }, {
+        # lede_media_block already has sourceId
         'url': 'https://www.nytimes.com/2023/11/29/business/dealbook/kamala-harris-biden-voters.html',
         'md5': '3eb5ddb1d6f86254fe4f233826778737',
         'info_dict': {
@@ -207,7 +209,7 @@ class NYTimesArticleIE(NYTimesBaseIE):
             'timestamp': 1701290997,
             'upload_date': '20231129',
             'uploader': 'By The New York Times',
-            'creator': 'Katie Rogers',
+            'creators': ['Katie Rogers'],
             'thumbnail': r're:https?://\w+\.nyt.com/images/.*\.jpg',
             'duration': 97.631,
         },
@@ -222,10 +224,22 @@ class NYTimesArticleIE(NYTimesBaseIE):
             'title': 'Drunk and Asleep on the Job: Air Traffic Controllers Pushed to the Brink',
             'description': 'md5:549e5a5e935bf7d048be53ba3d2c863d',
             'upload_date': '20231202',
-            'creator': 'Emily Steel, Sydney Ember',
+            'creators': ['Emily Steel', 'Sydney Ember'],
             'timestamp': 1701511264,
         },
         'playlist_count': 3,
+    }, {
+        # lede_media_block does not have sourceId
+        'url': 'https://www.nytimes.com/2025/04/30/well/move/hip-mobility-routine.html',
+        'info_dict': {
+            'id': 'hip-mobility-routine',
+            'title': 'Tight Hips? These Moves Can Help.',
+            'description': 'Sitting all day is hard on your hips. Try this simple routine for better mobility.',
+            'creators': ['Alyssa Ages', 'Theodore Tae'],
+            'timestamp': 1746003629,
+            'upload_date': '20250430',
+        },
+        'playlist_count': 7,
     }, {
         'url': 'https://www.nytimes.com/2023/12/02/business/media/netflix-squid-game-challenge.html',
         'only_matching': True,
@@ -256,14 +270,18 @@ class NYTimesArticleIE(NYTimesBaseIE):

     def _real_extract(self, url):
         page_id = self._match_id(url)
-        webpage = self._download_webpage(url, page_id)
+        webpage = self._download_webpage(url, page_id, impersonate=True)
         art_json = self._search_json(
             r'window\.__preloadedData\s*=', webpage, 'media details', page_id,
             transform_source=lambda x: x.replace('undefined', 'null'))['initialData']['data']['article']
+        content = art_json['sprinkledBody']['content']

-        blocks = traverse_obj(art_json, (
-            'sprinkledBody', 'content', ..., ('ledeMedia', None),
-            lambda _, v: v['__typename'] in ('Video', 'Audio')))
+        blocks = []
+        block_filter = lambda k, v: k == 'media' and v['__typename'] in ('Video', 'Audio')
+        if lede_media_block := traverse_obj(content, (..., 'ledeMedia', block_filter, any)):
+            lede_media_block.setdefault('sourceId', art_json.get('sourceId'))
+            blocks.append(lede_media_block)
+        blocks.extend(traverse_obj(content, (..., block_filter)))
         if not blocks:
             raise ExtractorError('Unable to extract any media blocks from webpage')
@@ -273,8 +291,7 @@ class NYTimesArticleIE(NYTimesBaseIE):
                 'sprinkledBody', 'content', ..., 'summary', 'content', ..., 'text', {str}),
                 get_all=False) or self._html_search_meta(['og:description', 'twitter:description'], webpage),
             'timestamp': traverse_obj(art_json, ('firstPublished', {parse_iso8601})),
-            'creator': ', '.join(
-                traverse_obj(art_json, ('bylines', ..., 'creators', ..., 'displayName'))),  # TODO: change to 'creators' (list)
+            'creators': traverse_obj(art_json, ('bylines', ..., 'creators', ..., 'displayName', {str})),
             'thumbnails': self._extract_thumbnails(traverse_obj(
                 art_json, ('promotionalMedia', 'assetCrops', ..., 'renditions', ...))),
         }
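
For illustration only (not part of the patch): the reworked collection first looks for a lede media object, backfills a missing `sourceId` from the article, and only then appends the in-body media. A stripped-down sketch of that control flow over plain dicts (field names follow the diff; the sample data is invented):

def collect_media_blocks(article):
    content = article['sprinkledBody']['content']
    is_media = lambda v: v.get('__typename') in ('Video', 'Audio')

    blocks = []
    lede = next((b['ledeMedia']['media'] for b in content
                 if is_media(b.get('ledeMedia', {}).get('media', {}))), None)
    if lede:
        # some lede blocks lack their own sourceId; fall back to the article's
        lede.setdefault('sourceId', article.get('sourceId'))
        blocks.append(lede)
    blocks.extend(b['media'] for b in content if is_media(b.get('media', {})))
    return blocks

article = {
    'sourceId': '100000001',
    'sprinkledBody': {'content': [
        {'ledeMedia': {'media': {'__typename': 'Video'}}},
        {'media': {'__typename': 'Audio', 'sourceId': '100000002'}},
    ]},
}
assert [b['sourceId'] for b in collect_media_blocks(article)] == ['100000001', '100000002']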


@@ -273,6 +273,8 @@ class OdnoklassnikiIE(InfoExtractor):
             return self._extract_desktop(smuggle_url(url, {'referrer': 'https://boosty.to'}))
         elif error:
             raise ExtractorError(error, expected=True)
+        elif '>Access to this video is restricted</div>' in webpage:
+            self.raise_login_required()

         player = self._parse_json(
             unescapeHTML(self._search_regex(
@@ -429,7 +431,7 @@ class OdnoklassnikiIE(InfoExtractor):
         video_id = self._match_id(url)

         webpage = self._download_webpage(
-            f'http://m.ok.ru/video/{video_id}', video_id,
+            f'https://m.ok.ru/video/{video_id}', video_id,
             note='Downloading mobile webpage')

         error = self._search_regex(


@@ -1,40 +0,0 @@
-import re
-
-from .common import InfoExtractor
-
-
-class OnceIE(InfoExtractor):  # XXX: Conventionally, base classes should end with BaseIE/InfoExtractor
-    _VALID_URL = r'https?://.+?\.unicornmedia\.com/now/(?:ads/vmap/)?[^/]+/[^/]+/(?P<domain_id>[^/]+)/(?P<application_id>[^/]+)/(?:[^/]+/)?(?P<media_item_id>[^/]+)/content\.(?:once|m3u8|mp4)'
-    ADAPTIVE_URL_TEMPLATE = 'http://once.unicornmedia.com/now/master/playlist/%s/%s/%s/content.m3u8'
-    PROGRESSIVE_URL_TEMPLATE = 'http://once.unicornmedia.com/now/media/progressive/%s/%s/%s/%s/content.mp4'
-
-    def _extract_once_formats(self, url, http_formats_preference=None):
-        domain_id, application_id, media_item_id = re.match(
-            OnceIE._VALID_URL, url).groups()
-        formats = self._extract_m3u8_formats(
-            self.ADAPTIVE_URL_TEMPLATE % (
-                domain_id, application_id, media_item_id),
-            media_item_id, 'mp4', m3u8_id='hls', fatal=False)
-        progressive_formats = []
-        for adaptive_format in formats:
-            # Prevent advertisement from embedding into m3u8 playlist (see
-            # https://github.com/ytdl-org/youtube-dl/issues/8893#issuecomment-199912684)
-            adaptive_format['url'] = re.sub(
-                r'\badsegmentlength=\d+', r'adsegmentlength=0', adaptive_format['url'])
-            rendition_id = self._search_regex(
-                r'/now/media/playlist/[^/]+/[^/]+/([^/]+)',
-                adaptive_format['url'], 'redition id', default=None)
-            if rendition_id:
-                progressive_format = adaptive_format.copy()
-                progressive_format.update({
-                    'url': self.PROGRESSIVE_URL_TEMPLATE % (
-                        domain_id, application_id, rendition_id, media_item_id),
-                    'format_id': adaptive_format['format_id'].replace(
-                        'hls', 'http'),
-                    'protocol': 'http',
-                    'preference': http_formats_preference,
-                })
-                progressive_formats.append(progressive_format)
-        self._check_formats(progressive_formats, media_item_id)
-        formats.extend(progressive_formats)
-        return formats


@@ -340,8 +340,9 @@ class PatreonIE(PatreonBaseIE):
                 'channel_follower_count': ('attributes', 'patron_count', {int_or_none}),
             }))

-        # all-lowercase 'referer' so we can smuggle it to Generic, SproutVideo, Vimeo
-        headers = {'referer': 'https://patreon.com/'}
+        # Must be all-lowercase 'referer' so we can smuggle it to Generic, SproutVideo, and Vimeo.
+        # patreon.com URLs redirect to www.patreon.com; this matters when requesting mux.com m3u8s
+        headers = {'referer': 'https://www.patreon.com/'}

         # handle Vimeo embeds
         if traverse_obj(attributes, ('embed', 'provider')) == 'Vimeo':
@@ -352,7 +353,7 @@ class PatreonIE(PatreonBaseIE):
                     v_url, video_id, 'Checking Vimeo embed URL', headers=headers,
                     fatal=False, errnote=False, expected_status=429):  # 429 is TLS fingerprint rejection
                 entries.append(self.url_result(
-                    VimeoIE._smuggle_referrer(v_url, 'https://patreon.com/'),
+                    VimeoIE._smuggle_referrer(v_url, headers['referer']),
                     VimeoIE, url_transparent=True))

         embed_url = traverse_obj(attributes, ('embed', 'url', {url_or_none}))
@@ -379,11 +380,13 @@ class PatreonIE(PatreonBaseIE):
                     'url': post_file['url'],
                 })
             elif name == 'video' or determine_ext(post_file.get('url')) == 'm3u8':
-                formats, subtitles = self._extract_m3u8_formats_and_subtitles(post_file['url'], video_id)
+                formats, subtitles = self._extract_m3u8_formats_and_subtitles(
+                    post_file['url'], video_id, headers=headers)
                 entries.append({
                     'id': video_id,
                     'formats': formats,
                     'subtitles': subtitles,
+                    'http_headers': headers,
                 })

         can_view_post = traverse_obj(attributes, 'current_user_can_view')


@@ -10,7 +10,8 @@ from ..utils import (


 class PicartoIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www.)?picarto\.tv/(?P<id>[a-zA-Z0-9]+)'
+    IE_NAME = 'picarto'
+    _VALID_URL = r'https?://(?:www.)?picarto\.tv/(?P<id>[^/#?]+)/?(?:$|[?#])'
     _TEST = {
         'url': 'https://picarto.tv/Setz',
         'info_dict': {
@@ -89,7 +90,8 @@ class PicartoIE(InfoExtractor):


 class PicartoVodIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?picarto\.tv/(?:videopopout|\w+/videos)/(?P<id>[^/?#&]+)'
+    IE_NAME = 'picarto:vod'
+    _VALID_URL = r'https?://(?:www\.)?picarto\.tv/(?:videopopout|\w+(?:/profile)?/videos)/(?P<id>[^/?#&]+)'
     _TESTS = [{
         'url': 'https://picarto.tv/videopopout/ArtofZod_2017.12.12.00.13.23.flv',
         'md5': '3ab45ba4352c52ee841a28fb73f2d9ca',
@@ -111,6 +113,18 @@ class PicartoVodIE(InfoExtractor):
             'channel': 'ArtofZod',
             'age_limit': 18,
         },
+    }, {
+        'url': 'https://picarto.tv/DrechuArt/profile/videos/400347',
+        'md5': 'f9ea54868b1d9dec40eb554b484cc7bf',
+        'info_dict': {
+            'id': '400347',
+            'ext': 'mp4',
+            'title': 'Welcome to the Show',
+            'thumbnail': r're:^https?://.*\.jpg',
+            'channel': 'DrechuArt',
+            'age_limit': 0,
+        },
     }, {
         'url': 'https://picarto.tv/videopopout/Plague',
         'only_matching': True,
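
For illustration only (not part of the patch): the widened VOD pattern now also matches profile-scoped video URLs. A quick sanity check of the old and new patterns against the URL added in the test above, using only the standard library:

import re

OLD = r'https?://(?:www\.)?picarto\.tv/(?:videopopout|\w+/videos)/(?P<id>[^/?#&]+)'
NEW = r'https?://(?:www\.)?picarto\.tv/(?:videopopout|\w+(?:/profile)?/videos)/(?P<id>[^/?#&]+)'

url = 'https://picarto.tv/DrechuArt/profile/videos/400347'
assert re.match(OLD, url) is None            # '/profile/' broke the old pattern
assert re.match(NEW, url).group('id') == '400347'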


@@ -7,11 +7,12 @@ from .common import InfoExtractor
 from ..utils import (
     ExtractorError,
     int_or_none,
+    join_nonempty,
     parse_qs,
-    traverse_obj,
     update_url_query,
     urlencode_postdata,
 )
+from ..utils.traversal import traverse_obj, unpack


 class PlaySuisseIE(InfoExtractor):
@@ -26,12 +27,12 @@ class PlaySuisseIE(InfoExtractor):
         {
             # episode in a series
             'url': 'https://www.playsuisse.ch/watch/763182?episodeId=763211',
-            'md5': '82df2a470b2dfa60c2d33772a8a60cf8',
+            'md5': 'e20d1ede6872a03b41905ca1060a1ef2',
             'info_dict': {
                 'id': '763211',
                 'ext': 'mp4',
                 'title': 'Knochen',
-                'description': 'md5:8ea7a8076ba000cd9e8bc132fd0afdd8',
+                'description': 'md5:3bdd80e2ce20227c47aab1df2a79a519',
                 'duration': 3344,
                 'series': 'Wilder',
                 'season': 'Season 1',
@@ -42,24 +43,33 @@ class PlaySuisseIE(InfoExtractor):
             },
         }, {
             # film
-            'url': 'https://www.playsuisse.ch/watch/808675',
-            'md5': '818b94c1d2d7c4beef953f12cb8f3e75',
+            'url': 'https://www.playsuisse.ch/detail/2573198',
+            'md5': '1f115bb0a5191477b1a5771643a4283d',
             'info_dict': {
-                'id': '808675',
+                'id': '2573198',
                 'ext': 'mp4',
-                'title': 'Der Läufer',
-                'description': 'md5:9f61265c7e6dcc3e046137a792b275fd',
-                'duration': 5280,
+                'title': 'Azor',
+                'description': 'md5:d41d8cd98f00b204e9800998ecf8427e',
+                'genres': ['Fiction'],
+                'creators': ['Andreas Fontana'],
+                'cast': ['Fabrizio Rongione', 'Stéphanie Cléau', 'Gilles Privat', 'Alexandre Trocki'],
+                'location': 'France; Argentine',
+                'release_year': 2021,
+                'duration': 5981,
                 'thumbnail': 're:https://playsuisse-img.akamaized.net/',
             },
         }, {
             # series (treated as a playlist)
             'url': 'https://www.playsuisse.ch/detail/1115687',
             'info_dict': {
-                'description': 'md5:e4a2ae29a8895823045b5c3145a02aa3',
                 'id': '1115687',
                 'series': 'They all came out to Montreux',
                 'title': 'They all came out to Montreux',
+                'description': 'md5:0fefd8c5b4468a0bb35e916887681520',
+                'genres': ['Documentary'],
+                'creators': ['Oliver Murray'],
+                'location': 'Switzerland',
+                'release_year': 2021,
             },
             'playlist': [{
                 'info_dict': {
@@ -120,6 +130,12 @@ class PlaySuisseIE(InfoExtractor):
             id
             name
             description
+            descriptionLong
+            year
+            contentTypes
+            directors
+            mainCast
+            productionCountries
             duration
             episodeNumber
             seasonNumber
@@ -215,9 +231,7 @@ class PlaySuisseIE(InfoExtractor):
         if not self._ID_TOKEN:
             raise ExtractorError('Login failed')

-    def _get_media_data(self, media_id):
-        # NOTE In the web app, the "locale" header is used to switch between languages,
-        # However this doesn't seem to take effect when passing the header here.
+    def _get_media_data(self, media_id, locale=None):
         response = self._download_json(
             'https://www.playsuisse.ch/api/graphql',
             media_id, data=json.dumps({
@@ -225,7 +239,7 @@ class PlaySuisseIE(InfoExtractor):
                 'query': self._GRAPHQL_QUERY,
                 'variables': {'assetId': media_id},
             }).encode(),
-            headers={'Content-Type': 'application/json', 'locale': 'de'})
+            headers={'Content-Type': 'application/json', 'locale': locale or 'de'})

         return response['data']['assetV2']
@@ -234,7 +248,7 @@ class PlaySuisseIE(InfoExtractor):
             self.raise_login_required(method='password')

         media_id = self._match_id(url)
-        media_data = self._get_media_data(media_id)
+        media_data = self._get_media_data(media_id, traverse_obj(parse_qs(url), ('locale', 0)))
         info = self._extract_single(media_data)
         if media_data.get('episodes'):
             info.update({
@@ -257,15 +271,22 @@ class PlaySuisseIE(InfoExtractor):
             self._merge_subtitles(subs, target=subtitles)

         return {
-            'id': media_data['id'],
-            'title': media_data.get('name'),
-            'description': media_data.get('description'),
             'thumbnails': thumbnails,
-            'duration': int_or_none(media_data.get('duration')),
             'formats': formats,
             'subtitles': subtitles,
-            'series': media_data.get('seriesName'),
-            'season_number': int_or_none(media_data.get('seasonNumber')),
-            'episode': media_data.get('name') if media_data.get('episodeNumber') else None,
-            'episode_number': int_or_none(media_data.get('episodeNumber')),
+            **traverse_obj(media_data, {
+                'id': ('id', {str}),
+                'title': ('name', {str}),
+                'description': (('descriptionLong', 'description'), {str}, any),
+                'genres': ('contentTypes', ..., {str}),
+                'creators': ('directors', ..., {str}),
+                'cast': ('mainCast', ..., {str}),
+                'location': ('productionCountries', ..., {str}, all, {unpack(join_nonempty, delim='; ')}, filter),
+                'release_year': ('year', {str}, {lambda x: x[:4]}, {int_or_none}),
+                'duration': ('duration', {int_or_none}),
+                'series': ('seriesName', {str}),
+                'season_number': ('seasonNumber', {int_or_none}),
+                'episode': ('name', {str}, {lambda x: x if media_data['episodeNumber'] is not None else None}),
+                'episode_number': ('episodeNumber', {int_or_none}),
+            }),
         }
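
For illustration only (not part of the patch): since `_real_extract` now forwards the URL's `locale` query parameter to `_get_media_data`, the language of the returned metadata can be steered from the input URL itself, falling back to 'de'. A hedged standard-library sketch of that lookup (URLs taken from the tests above, the `?locale=fr` variant is illustrative):

from urllib.parse import parse_qs, urlparse

def locale_from_url(url, default='de'):
    # mirrors traverse_obj(parse_qs(url), ('locale', 0)) with the 'de' fallback
    return (parse_qs(urlparse(url).query).get('locale') or [default])[0]

assert locale_from_url('https://www.playsuisse.ch/watch/763182?episodeId=763211') == 'de'
assert locale_from_url('https://www.playsuisse.ch/detail/2573198?locale=fr') == 'fr'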


@@ -5,11 +5,13 @@ from .common import InfoExtractor
 from ..utils import (
     OnDemandPagedList,
     float_or_none,
+    int_or_none,
+    orderedSet,
     str_or_none,
-    str_to_int,
-    traverse_obj,
     unified_timestamp,
+    url_or_none,
 )
+from ..utils.traversal import require, traverse_obj


 class PodchaserIE(InfoExtractor):
@@ -21,24 +23,25 @@ class PodchaserIE(InfoExtractor):
             'id': '104365585',
             'title': 'Ep. 285 freeze me off',
             'description': 'cam ahn',
-            'thumbnail': r're:^https?://.*\.jpg$',
+            'thumbnail': r're:https?://.+/.+\.jpg',
             'ext': 'mp3',
-            'categories': ['Comedy'],
+            'categories': ['Comedy', 'News', 'Politics', 'Arts'],
             'tags': ['comedy', 'dark humor'],
-            'series': 'Cum Town',
+            'series': 'The Adam Friedland Show Podcast',
             'duration': 3708,
             'timestamp': 1636531259,
             'upload_date': '20211110',
             'average_rating': 4.0,
+            'series_id': '36924',
         },
     }, {
         'url': 'https://www.podchaser.com/podcasts/the-bone-zone-28853',
         'info_dict': {
             'id': '28853',
             'title': 'The Bone Zone',
-            'description': 'Podcast by The Bone Zone',
+            'description': r're:The official home of the Bone Zone podcast.+',
         },
-        'playlist_count': 275,
+        'playlist_mincount': 275,
     }, {
         'url': 'https://www.podchaser.com/podcasts/sean-carrolls-mindscape-scienc-699349/episodes',
         'info_dict': {
@@ -51,19 +54,33 @@ class PodchaserIE(InfoExtractor):

     @staticmethod
     def _parse_episode(episode, podcast):
-        return {
-            'id': str(episode.get('id')),
-            'title': episode.get('title'),
-            'description': episode.get('description'),
-            'url': episode.get('audio_url'),
-            'thumbnail': episode.get('image_url'),
-            'duration': str_to_int(episode.get('length')),
-            'timestamp': unified_timestamp(episode.get('air_date')),
-            'average_rating': float_or_none(episode.get('rating')),
-            'categories': list(set(traverse_obj(podcast, (('summary', None), 'categories', ..., 'text')))),
-            'tags': traverse_obj(podcast, ('tags', ..., 'text')),
-            'series': podcast.get('title'),
-        }
+        info = traverse_obj(episode, {
+            'id': ('id', {int}, {str_or_none}, {require('episode ID')}),
+            'title': ('title', {str}),
+            'description': ('description', {str}),
+            'url': ('audio_url', {url_or_none}),
+            'thumbnail': ('image_url', {url_or_none}),
+            'duration': ('length', {int_or_none}),
+            'timestamp': ('air_date', {unified_timestamp}),
+            'average_rating': ('rating', {float_or_none}),
+        })
+        info.update(traverse_obj(podcast, {
+            'series': ('title', {str}),
+            'series_id': ('id', {int}, {str_or_none}),
+            'categories': (('summary', None), 'categories', ..., 'text', {str}, filter, all, {orderedSet}),
+            'tags': ('tags', ..., 'text', {str}),
+        }))
+        info['vcodec'] = 'none'
+
+        if info.get('series_id'):
+            podcast_slug = traverse_obj(podcast, ('slug', {str})) or 'podcast'
+            episode_slug = traverse_obj(episode, ('slug', {str})) or 'episode'
+            info['webpage_url'] = '/'.join((
+                'https://www.podchaser.com/podcasts',
+                '-'.join((podcast_slug[:30].rstrip('-'), info['series_id'])),
+                '-'.join((episode_slug[:30].rstrip('-'), info['id']))))
+        return info

     def _call_api(self, path, *args, **kwargs):
         return self._download_json(f'https://api.podchaser.com/{path}', *args, **kwargs)
@@ -93,5 +110,5 @@ class PodchaserIE(InfoExtractor):
             OnDemandPagedList(functools.partial(self._fetch_page, podcast_id, podcast), self._PAGE_SIZE),
             str_or_none(podcast.get('id')), podcast.get('title'), podcast.get('description'))

-        episode = self._call_api(f'episodes/{episode_id}', episode_id)
+        episode = self._call_api(f'podcasts/{podcast_id}/episodes/{episode_id}/player_ids', episode_id)
         return self._parse_episode(episode, podcast)
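
For illustration only (not part of the patch): the new `webpage_url` is rebuilt from the API's slugs, with each slug clipped to 30 characters, trailing hyphens trimmed, and the numeric ID appended. A small sketch with made-up slugs:

def podchaser_url(podcast_slug, series_id, episode_slug, episode_id):
    clip = lambda slug: slug[:30].rstrip('-')  # 30-char cap, no dangling hyphen
    return '/'.join((
        'https://www.podchaser.com/podcasts',
        f'{clip(podcast_slug)}-{series_id}',
        f'{clip(episode_slug)}-{episode_id}'))

assert podchaser_url(
    'the-adam-friedland-show-podcast', '36924',
    'ep-285-freeze-me-off', '104365585',
) == 'https://www.podchaser.com/podcasts/the-adam-friedland-show-podcas-36924/ep-285-freeze-me-off-104365585'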


@@ -15,7 +15,6 @@ from ..utils import (
     str_or_none,
     strip_jsonp,
     traverse_obj,
-    unescapeHTML,
     url_or_none,
     urljoin,
 )
@@ -425,7 +424,7 @@ class QQMusicPlaylistIE(QQPlaylistBaseIE):

         return self.playlist_result(entries, list_id, **traverse_obj(list_json, ('cdlist', 0, {
             'title': ('dissname', {str}),
-            'description': ('desc', {unescapeHTML}, {clean_html}),
+            'description': ('desc', {clean_html}),
         })))


@@ -697,7 +697,7 @@ class SoundcloudIE(SoundcloudBaseIE):
             try:
                 return self._extract_info_dict(info, full_title, token)
             except ExtractorError as e:
-                if not isinstance(e.cause, HTTPError) or not e.cause.status == 429:
+                if not isinstance(e.cause, HTTPError) or e.cause.status != 429:
                     raise
                 self.report_warning(
                     'You have reached the API rate limit, which is ~600 requests per '
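
For illustration only (not part of the patch): the rewritten guard is the De Morgan form of "unless this is an HTTP 429, re-raise", so the change is purely stylistic. A quick truth-table check:

from itertools import product

def old_guard(is_http_error, status):
    return not is_http_error or not status == 429

def new_guard(is_http_error, status):
    return not is_http_error or status != 429

# identical for every combination of error type and status code
assert all(old_guard(e, s) == new_guard(e, s)
           for e, s in product((True, False), (200, 403, 429)))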


@@ -1,61 +0,0 @@
-from .adobepass import AdobePassIE
-from ..utils import (
-    int_or_none,
-    smuggle_url,
-    update_url_query,
-)
-
-
-class SproutIE(AdobePassIE):
-    _VALID_URL = r'https?://(?:www\.)?(?:sproutonline|universalkids)\.com/(?:watch|(?:[^/]+/)*videos)/(?P<id>[^/?#]+)'
-    _TESTS = [{
-        'url': 'https://www.universalkids.com/shows/remy-and-boo/season/1/videos/robot-bike-race',
-        'info_dict': {
-            'id': 'bm0foJFaTKqb',
-            'ext': 'mp4',
-            'title': 'Robot Bike Race',
-            'description': 'md5:436b1d97117cc437f54c383f4debc66d',
-            'timestamp': 1606148940,
-            'upload_date': '20201123',
-            'uploader': 'NBCU-MPAT',
-        },
-        'params': {
-            'skip_download': True,
-        },
-    }, {
-        'url': 'http://www.sproutonline.com/watch/cowboy-adventure',
-        'only_matching': True,
-    }, {
-        'url': 'https://www.universalkids.com/watch/robot-bike-race',
-        'only_matching': True,
-    }]
-    _GEO_COUNTRIES = ['US']
-
-    def _real_extract(self, url):
-        display_id = self._match_id(url)
-        mpx_metadata = self._download_json(
-            # http://nbcuunikidsprod.apps.nbcuni.com/networks/universalkids/content/videos/
-            'https://www.universalkids.com/_api/videos/' + display_id,
-            display_id)['mpxMetadata']
-        media_pid = mpx_metadata['mediaPid']
-        theplatform_url = 'https://link.theplatform.com/s/HNK2IC/' + media_pid
-        query = {
-            'mbr': 'true',
-            'manifest': 'm3u',
-        }
-        if mpx_metadata.get('entitlement') == 'auth':
-            query['auth'] = self._extract_mvpd_auth(url, media_pid, 'sprout', 'sprout')
-        theplatform_url = smuggle_url(
-            update_url_query(theplatform_url, query), {
-                'force_smil_url': True,
-                'geo_countries': self._GEO_COUNTRIES,
-            })
-
-        return {
-            '_type': 'url_transparent',
-            'id': media_pid,
-            'url': theplatform_url,
-            'series': mpx_metadata.get('seriesName'),
-            'season_number': int_or_none(mpx_metadata.get('seasonNumber')),
-            'episode_number': int_or_none(mpx_metadata.get('episodeNumber')),
-            'ie_key': 'ThePlatform',
-        }


@@ -1,57 +1,102 @@
 from .ard import ARDMediathekBaseIE
 from ..utils import (
     ExtractorError,
-    get_element_by_attribute,
+    clean_html,
+    extract_attributes,
+    parse_duration,
+    parse_qs,
+    unified_strdate,
 )
+from ..utils.traversal import (
+    find_element,
+    require,
+    traverse_obj,
+)


 class SRMediathekIE(ARDMediathekBaseIE):
-    _WORKING = False
     IE_NAME = 'sr:mediathek'
     IE_DESC = 'Saarländischer Rundfunk'
-    _VALID_URL = r'https?://sr-mediathek(?:\.sr-online)?\.de/index\.php\?.*?&id=(?P<id>[0-9]+)'

+    _CLS_COMMON = 'teaser__image__caption__text teaser__image__caption__text--'
+    _VALID_URL = r'https?://(?:www\.)?sr-mediathek\.de/index\.php\?.*?&id=(?P<id>\d+)'
     _TESTS = [{
-        'url': 'http://sr-mediathek.sr-online.de/index.php?seite=7&id=28455',
+        'url': 'https://www.sr-mediathek.de/index.php?seite=7&id=141317',
         'info_dict': {
-            'id': '28455',
+            'id': '141317',
             'ext': 'mp4',
-            'title': 'sportarena (26.10.2014)',
-            'description': 'Ringen: KSV Köllerbach gegen Aachen-Walheim; Frauen-Fußball: 1. FC Saarbrücken gegen Sindelfingen; Motorsport: Rallye in Losheim; dazu: Interview mit Timo Bernhard; Turnen: TG Saar; Reitsport: Deutscher Voltigier-Pokal; Badminton: Interview mit Michael Fuchs ',
-            'thumbnail': r're:^https?://.*\.jpg$',
-        },
-        'skip': 'no longer available',
-    }, {
-        'url': 'http://sr-mediathek.sr-online.de/index.php?seite=7&id=37682',
-        'info_dict': {
-            'id': '37682',
-            'ext': 'mp4',
-            'title': 'Love, Cakes and Rock\'n\'Roll',
-            'description': 'md5:18bf9763631c7d326c22603681e1123d',
-        },
-        'params': {
-            # m3u8 download
-            'skip_download': True,
+            'title': 'Kärnten, da will ich hin!',
+            'channel': 'SR Fernsehen',
+            'description': 'md5:7732e71e803379a499732864a572a456',
+            'duration': 1788.0,
+            'release_date': '20250525',
+            'series': 'da will ich hin!',
+            'series_id': 'DWIH',
+            'thumbnail': r're:https?://.+\.jpg',
         },
     }, {
-        'url': 'http://sr-mediathek.de/index.php?seite=7&id=7480',
-        'only_matching': True,
+        'url': 'https://www.sr-mediathek.de/index.php?seite=7&id=153853',
+        'info_dict': {
+            'id': '153853',
+            'ext': 'mp3',
+            'title': 'Kappes, Klöße, Kokosmilch: Bruschetta mit Nduja',
+            'channel': 'SR 3',
+            'description': 'md5:3935798de3562b10c4070b408a15e225',
+            'duration': 139.0,
+            'release_date': '20250523',
+            'series': 'Kappes, Klöße, Kokosmilch',
+            'series_id': 'SR3_KKK_A',
+            'thumbnail': r're:https?://.+\.jpg',
+        },
+    }, {
+        'url': 'https://www.sr-mediathek.de/index.php?seite=7&id=31406&pnr=&tbl=pf',
+        'info_dict': {
+            'id': '31406',
+            'ext': 'mp3',
+            'title': 'Das Leben schwer nehmen, ist einfach zu anstrengend',
+            'channel': 'SR 1',
+            'description': 'md5:3e03fd556af831ad984d0add7175fb0c',
+            'duration': 1769.0,
+            'release_date': '20230717',
+            'series': 'Abendrot',
+            'series_id': 'SR1_AB_P',
+            'thumbnail': r're:https?://.+\.jpg',
+        },
     }]

     def _real_extract(self, url):
         video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)
+        description = self._og_search_description(webpage)

-        if '>Der gew&uuml;nschte Beitrag ist leider nicht mehr verf&uuml;gbar.<' in webpage:
+        if description == 'Der gewünschte Beitrag ist leider nicht mehr vorhanden.':
             raise ExtractorError(f'Video {video_id} is no longer available', expected=True)

-        media_collection_url = self._search_regex(
-            r'data-mediacollection-ardplayer="([^"]+)"', webpage, 'media collection url')
-        info = self._extract_media_info(media_collection_url, webpage, video_id)
-        info.update({
+        player_url = traverse_obj(webpage, (
+            {find_element(tag='div', id=f'player{video_id}', html=True)},
+            {extract_attributes}, 'data-mediacollection-ardplayer',
+            {self._proto_relative_url}, {require('player URL')}))
+        article = traverse_obj(webpage, (
+            {find_element(cls='article__content')},
+            {find_element(tag='p')}, {clean_html}))
+
+        return {
+            **self._extract_media_info(player_url, webpage, video_id),
             'id': video_id,
-            'title': get_element_by_attribute('class', 'ardplayer-title', webpage),
-            'description': self._og_search_description(webpage),
+            'title': traverse_obj(webpage, (
+                {find_element(cls='ardplayer-title')}, {clean_html})),
+            'channel': traverse_obj(webpage, (
+                {find_element(cls=f'{self._CLS_COMMON}subheadline')},
+                {lambda x: x.split('|')[0]}, {clean_html})),
+            'description': description,
+            'duration': parse_duration(self._search_regex(
+                r'(\d{2}:\d{2}:\d{2})', article, 'duration')),
+            'release_date': unified_strdate(self._search_regex(
+                r'(\d{2}\.\d{2}\.\d{4})', article, 'release_date')),
+            'series': traverse_obj(webpage, (
+                {find_element(cls=f'{self._CLS_COMMON}headline')}, {clean_html})),
+            'series_id': traverse_obj(webpage, (
+                {find_element(cls='teaser__link', html=True)},
+                {extract_attributes}, 'href', {parse_qs}, 'sen', ..., {str}, any)),
             'thumbnail': self._og_search_thumbnail(webpage),
-        })
-        return info
+        }
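
For illustration only (not part of the patch): duration and release date now come from the teaser paragraph rather than structured data, matched with two deliberately loose regexes (HH:MM:SS and DD.MM.YYYY). A standalone sketch over an invented article string:

import re

article = 'Video | 25.05.2025 | Dauer: 00:29:48 | SR Fernsehen'  # invented sample

duration_hms = re.search(r'(\d{2}:\d{2}:\d{2})', article).group(1)
release_date = re.search(r'(\d{2}\.\d{2}\.\d{4})', article).group(1)

h, m, s = map(int, duration_hms.split(':'))
assert h * 3600 + m * 60 + s == 1788  # what parse_duration would return
assert release_date == '25.05.2025'   # unified_strdate would yield '20250525'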


@@ -4,6 +4,7 @@ from .wrestleuniverse import WrestleUniverseBaseIE
 from ..utils import (
     int_or_none,
     traverse_obj,
+    url_basename,
     url_or_none,
 )

@@ -65,9 +66,19 @@ class StacommuBaseIE(WrestleUniverseBaseIE):
         hls_info, decrypt = self._call_encrypted_api(
             video_id, ':watchArchive', 'stream information', data={'method': 1})

+        formats = self._get_formats(hls_info, ('hls', 'urls', ..., {url_or_none}), video_id)
+        for f in formats:
+            # bitrates are exaggerated in PPV playlists, so avoid wrong/huge filesize_approx values
+            if f.get('tbr'):
+                f['tbr'] = int(f['tbr'] / 2.5)
+            # prefer variants with the same basename as the master playlist to avoid partial streams
+            f['format_id'] = url_basename(f['url']).partition('.')[0]
+            if not f['format_id'].startswith(url_basename(f['manifest_url']).partition('.')[0]):
+                f['preference'] = -10
+
         return {
             'id': video_id,
-            'formats': self._get_formats(hls_info, ('hls', 'urls', ..., {url_or_none}), video_id),
+            'formats': formats,
             'hls_aes': self._extract_hls_key(hls_info, 'hls', decrypt),
             **traverse_obj(video_info, {
                 'title': ('displayName', {str}),
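
For illustration only (not part of the patch): the format post-processing above does two things — it deflates the exaggerated PPV bitrates so `filesize_approx` stays plausible, and it demotes variant playlists whose basename does not share the master playlist's prefix. A rough sketch of the same logic over plain dicts (URLs invented; `basename` stands in for yt-dlp's `url_basename`):

def basename(url):
    return url.rpartition('/')[2]

def adjust(formats, manifest_url):
    master = basename(manifest_url).partition('.')[0]
    for f in formats:
        if f.get('tbr'):
            f['tbr'] = int(f['tbr'] / 2.5)  # playlist bitrates are ~2.5x too high
        f['format_id'] = basename(f['url']).partition('.')[0]
        if not f['format_id'].startswith(master):
            f['preference'] = -10  # likely a partial stream; rank it last
    return formats

fmts = adjust([
    {'url': 'https://cdn.example/v1/abc_1080p.m3u8', 'tbr': 12500},
    {'url': 'https://cdn.example/v1/other_720p.m3u8', 'tbr': 5000},
], 'https://cdn.example/v1/abc.m3u8')
assert fmts[0] == {'url': 'https://cdn.example/v1/abc_1080p.m3u8', 'tbr': 5000, 'format_id': 'abc_1080p'}
assert fmts[1]['preference'] == -10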


@@ -1,76 +1,76 @@
 from .common import InfoExtractor
-from ..utils import int_or_none, urljoin
+from .youtube import YoutubeIE
+from ..utils import (
+    clean_html,
+    parse_iso8601,
+    update_url,
+    url_or_none,
+)
+from ..utils.traversal import subs_list_to_dict, traverse_obj


 class StarTrekIE(InfoExtractor):
-    _WORKING = False
-    _VALID_URL = r'(?P<base>https?://(?:intl|www)\.startrek\.com)/videos/(?P<id>[^/]+)'
+    IE_NAME = 'startrek'
+    IE_DESC = 'STAR TREK'
+    _VALID_URL = r'https?://(?:www\.)?startrek\.com(?:/en-(?:ca|un))?/videos/(?P<id>[^/?#]+)'
     _TESTS = [{
-        'url': 'https://intl.startrek.com/videos/watch-welcoming-jess-bush-to-the-ready-room',
+        'url': 'https://www.startrek.com/en-un/videos/official-trailer-star-trek-lower-decks-season-4',
+        'md5': '491df5035c9d4dc7f63c79caaf9c839e',
         'info_dict': {
-            'id': 'watch-welcoming-jess-bush-to-the-ready-room',
+            'id': 'official-trailer-star-trek-lower-decks-season-4',
             'ext': 'mp4',
-            'title': 'WATCH: Welcoming Jess Bush to The Ready Room',
-            'duration': 1888,
-            'timestamp': 1655388000,
-            'upload_date': '20220616',
-            'description': 'md5:1ffee884e3920afbdd6dd04e926a1221',
-            'thumbnail': r're:https://(?:intl|www)\.startrek\.com/sites/default/files/styles/video_1920x1080/public/images/2022-06/pp_14794_rr_thumb_107_yt_16x9\.jpg(?:\?.+)?',
-            'subtitles': {'en-US': [{
-                'url': r're:https://(?:intl|www)\.startrek\.com/sites/default/files/video/captions/2022-06/TRR_SNW_107_v4\.vtt',
-            }, {
-                'url': 'https://media.startrek.com/2022/06/16/2043801155561/1069981_hls/trr_snw_107_v4-c4bfc25d/stream_vtt.m3u8',
-            }]},
+            'title': 'Official Trailer | Star Trek: Lower Decks - Season 4',
+            'alt_title': 'md5:dd7e3191aaaf9e95db16fc3abd5ef68b',
+            'categories': ['TRAILERS'],
+            'description': 'md5:563d7856ddab99bee7a5e50f45531757',
+            'release_date': '20230722',
+            'release_timestamp': 1690033200,
+            'series': 'Star Trek: Lower Decks',
+            'series_id': 'star-trek-lower-decks',
+            'thumbnail': r're:https?://.+\.(?:jpg|png)',
         },
     }, {
-        'url': 'https://www.startrek.com/videos/watch-ethan-peck-and-gia-sandhu-beam-down-to-the-ready-room',
+        'url': 'https://www.startrek.com/en-ca/videos/my-first-contact-senator-cory-booker',
+        'md5': 'f5ad74fbb86e91e0882fc0a333178d1d',
         'info_dict': {
-            'id': 'watch-ethan-peck-and-gia-sandhu-beam-down-to-the-ready-room',
+            'id': 'my-first-contact-senator-cory-booker',
             'ext': 'mp4',
-            'title': 'WATCH: Ethan Peck and Gia Sandhu Beam Down to The Ready Room',
-            'duration': 1986,
-            'timestamp': 1654221600,
-            'upload_date': '20220603',
-            'description': 'md5:b3aa0edacfe119386567362dec8ed51b',
-            'thumbnail': r're:https://www\.startrek\.com/sites/default/files/styles/video_1920x1080/public/images/2022-06/pp_14792_rr_thumb_105_yt_16x9_1.jpg(?:\?.+)?',
-            'subtitles': {'en-US': [{
-                'url': r're:https://(?:intl|www)\.startrek\.com/sites/default/files/video/captions/2022-06/TRR_SNW_105_v5\.vtt',
-            }]},
+            'title': 'My First Contact: Senator Cory Booker',
+            'alt_title': 'md5:fe74a8bdb0afab421c6e159a7680db4d',
+            'categories': ['MY FIRST CONTACT'],
+            'description': 'md5:a3992ab3b3e0395925d71156bbc018ce',
+            'release_date': '20250401',
+            'release_timestamp': 1743512400,
+            'series': 'Star Trek: The Original Series',
+            'series_id': 'star-trek-the-original-series',
+            'thumbnail': r're:https?://.+\.(?:jpg|png)',
         },
     }]

     def _real_extract(self, url):
-        urlbase, video_id = self._match_valid_url(url).group('base', 'id')
+        video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)
-        player = self._search_regex(
-            r'(<\s*div\s+id\s*=\s*"cvp-player-[^<]+<\s*/div\s*>)', webpage, 'player')
+        page_props = self._search_nextjs_data(webpage, video_id)['props']['pageProps']
+        video_data = page_props['video']['data']
+        if youtube_id := video_data.get('youtube_video_id'):
+            return self.url_result(youtube_id, YoutubeIE)

-        hls = self._html_search_regex(r'\bdata-hls\s*=\s*"([^"]+)"', player, 'HLS URL')
-        formats, subtitles = self._extract_m3u8_formats_and_subtitles(hls, video_id, 'mp4')
-
-        captions = self._html_search_regex(
-            r'\bdata-captions-url\s*=\s*"([^"]+)"', player, 'captions URL', fatal=False)
-        if captions:
-            subtitles.setdefault('en-US', [])[:0] = [{'url': urljoin(urlbase, captions)}]
-
-        # NB: Most of the data in the json_ld is undesirable
-        json_ld = self._search_json_ld(webpage, video_id, fatal=False)
+        series_id = traverse_obj(video_data, (
+            'series_and_movies', ..., 'series_or_movie', 'slug', {str}, any))

         return {
             'id': video_id,
-            'title': self._html_search_regex(
-                r'\bdata-title\s*=\s*"([^"]+)"', player, 'title', json_ld.get('title')),
-            'description': self._html_search_regex(
-                r'(?s)<\s*div\s+class\s*=\s*"header-body"\s*>(.+?)<\s*/div\s*>',
-                webpage, 'description', fatal=False),
-            'duration': int_or_none(self._html_search_regex(
-                r'\bdata-duration\s*=\s*"(\d+)"', player, 'duration', fatal=False)),
-            'formats': formats,
-            'subtitles': subtitles,
-            'thumbnail': urljoin(urlbase, self._html_search_regex(
-                r'\bdata-poster-url\s*=\s*"([^"]+)"', player, 'thumbnail', fatal=False)),
-            'timestamp': json_ld.get('timestamp'),
+            'series': traverse_obj(page_props, (
+                'queried', 'header', 'tab3', 'slices', ..., 'items',
+                lambda _, v: v['link']['slug'] == series_id, 'link_copy', {str}, any)),
+            'series_id': series_id,
+            **traverse_obj(video_data, {
+                'title': ('title', ..., 'text', {clean_html}, any),
+                'alt_title': ('subhead', ..., 'text', {clean_html}, any),
+                'categories': ('category', 'data', 'category_name', {str.upper}, filter, all),
+                'description': ('slices', ..., 'primary', 'content', ..., 'text', {clean_html}, any),
+                'release_timestamp': ('published', {parse_iso8601}),
+                'subtitles': ({'url': 'legacy_subtitle_file'}, all, {subs_list_to_dict(lang='en')}),
+                'thumbnail': ('poster_frame', 'url', {url_or_none}, {update_url(query=None)}),
+                'url': ('legacy_video_url', {url_or_none}),
+            }),
         }


@@ -6,10 +6,13 @@ from ..utils import (
     determine_ext,
     dict_get,
     int_or_none,
-    traverse_obj,
     try_get,
     unified_timestamp,
 )
+from ..utils.traversal import (
+    require,
+    traverse_obj,
+)


 class SVTBaseIE(InfoExtractor):
@@ -97,40 +100,8 @@ class SVTBaseIE(InfoExtractor):
     }


-class SVTIE(SVTBaseIE):
-    _VALID_URL = r'https?://(?:www\.)?svt\.se/wd\?(?:.*?&)?widgetId=(?P<widget_id>\d+)&.*?\barticleId=(?P<id>\d+)'
-    _EMBED_REGEX = [rf'(?:<iframe src|href)="(?P<url>{_VALID_URL}[^"]*)"']
-    _TEST = {
-        'url': 'http://www.svt.se/wd?widgetId=23991&sectionId=541&articleId=2900353&type=embed&contextSectionId=123&autostart=false',
-        'md5': '33e9a5d8f646523ce0868ecfb0eed77d',
-        'info_dict': {
-            'id': '2900353',
-            'ext': 'mp4',
-            'title': 'Stjärnorna skojar till det - under SVT-intervjun',
-            'duration': 27,
-            'age_limit': 0,
-        },
-    }
-
-    def _real_extract(self, url):
-        mobj = self._match_valid_url(url)
-        widget_id = mobj.group('widget_id')
-        article_id = mobj.group('id')
-
-        info = self._download_json(
-            f'http://www.svt.se/wd?widgetId={widget_id}&articleId={article_id}&format=json&type=embed&output=json',
-            article_id)
-
-        info_dict = self._extract_video(info['video'], article_id)
-        info_dict['title'] = info['context']['title']
-
-        return info_dict
-
-
-class SVTPlayBaseIE(SVTBaseIE):
-    _SVTPLAY_RE = r'root\s*\[\s*(["\'])_*svtplay\1\s*\]\s*=\s*(?P<json>{.+?})\s*;\s*\n'
-
-
-class SVTPlayIE(SVTPlayBaseIE):
+class SVTPlayIE(SVTBaseIE):
+    IE_NAME = 'svt:play'
     IE_DESC = 'SVT Play and Öppet arkiv'
     _VALID_URL = r'''(?x)
                     (?:
@@ -173,6 +144,7 @@ class SVTPlayIE(SVTPlayBaseIE):
             'ext': 'mp4',
             'title': '1. Farlig kryssning',
             'timestamp': 1491019200,
+            'description': 'md5:8f350bc605677a5ead36a19a62fd9a34',
             'upload_date': '20170401',
             'duration': 2566,
             'thumbnail': r're:^https?://(?:.*[\.-]jpg|www.svtstatic.se/image/.*)$',
@@ -186,19 +158,21 @@ class SVTPlayIE(SVTPlayBaseIE):
         'params': {
             'skip_download': 'm3u8',
         },
+        'expected_warnings': [r'Failed to download (?:MPD|m3u8)'],
     }, {
         'url': 'https://www.svtplay.se/video/jz2rYz7/anders-hansen-moter/james-fallon?info=visa',
         'info_dict': {
             'id': 'jvXAGVb',
             'ext': 'mp4',
             'title': 'James Fallon',
-            'timestamp': 1673917200,
-            'upload_date': '20230117',
+            'description': r're:James Fallon är hjärnforskaren .{532} att upptäcka psykopati tidigt\?$',
+            'timestamp': 1743379200,
+            'upload_date': '20250331',
             'duration': 1081,
             'thumbnail': r're:^https?://(?:.*[\.-]jpg|www.svtstatic.se/image/.*)$',
             'age_limit': 0,
             'episode': 'James Fallon',
-            'series': 'Anders Hansen möter...',
+            'series': 'Anders Hansen möter',
         },
         'params': {
             'skip_download': 'dash',
@@ -233,96 +207,75 @@ class SVTPlayIE(SVTPlayBaseIE):
         'only_matching': True,
     }]

-    def _extract_by_video_id(self, video_id, webpage=None):
+    def _extract_by_video_id(self, video_id):
         data = self._download_json(
             f'https://api.svt.se/videoplayer-api/video/{video_id}',
             video_id, headers=self.geo_verification_headers())
         info_dict = self._extract_video(data, video_id)
         if not info_dict.get('title'):
-            title = dict_get(info_dict, ('episode', 'series'))
-            if not title and webpage:
-                title = re.sub(
-                    r'\s*\|\s*.+?$', '', self._og_search_title(webpage))
-            if not title:
-                title = video_id
-            info_dict['title'] = title
+            info_dict['title'] = traverse_obj(info_dict, 'episode', 'series')
         return info_dict

     def _real_extract(self, url):
         mobj = self._match_valid_url(url)
         video_id = mobj.group('id')
         svt_id = mobj.group('svt_id') or mobj.group('modal_id')

         if svt_id:
             return self._extract_by_video_id(svt_id)

         webpage = self._download_webpage(url, video_id)
-        data = self._parse_json(
-            self._search_regex(
-                self._SVTPLAY_RE, webpage, 'embedded data', default='{}',
-                group='json'),
-            video_id, fatal=False)
-
-        thumbnail = self._og_search_thumbnail(webpage)
-
-        if data:
-            video_info = try_get(
-                data, lambda x: x['context']['dispatcher']['stores']['VideoTitlePageStore']['data']['video'],
-                dict)
-            if video_info:
-                info_dict = self._extract_video(video_info, video_id)
-                info_dict.update({
-                    'title': data['context']['dispatcher']['stores']['MetaStore']['title'],
-                    'thumbnail': thumbnail,
-                })
-                return info_dict
-
-            svt_id = try_get(
-                data, lambda x: x['statistics']['dataLake']['content']['id'],
-                str)
+        data = traverse_obj(self._search_nextjs_data(webpage, video_id), (
+            'props', 'urqlState', ..., 'data', {json.loads},
+            'detailsPageByPath', {dict}, any, {require('video data')}))
+        details = traverse_obj(data, (
+            'modules', lambda _, v: v['details']['smartStart']['item']['videos'], 'details', any))
+        svt_id = traverse_obj(details, (
+            'smartStart', 'item', 'videos',
+            # There can be 'AudioDescribed' and 'SignInterpreted' variants; try 'Default' or else get first
+            (lambda _, v: v['accessibility'] == 'Default', 0),
+            'svtId', {str}, any))

         if not svt_id:
-            nextjs_data = self._search_nextjs_data(webpage, video_id, fatal=False)
-            svt_id = traverse_obj(nextjs_data, (
-                'props', 'urqlState', ..., 'data', {json.loads}, 'detailsPageByPath',
-                'video', 'svtId', {str}), get_all=False)
+            svt_id = traverse_obj(data, ('video', 'svtId', {str}, {require('SVT ID')}))

-        if not svt_id:
-            svt_id = self._search_regex(
-                (r'<video[^>]+data-video-id=["\']([\da-zA-Z-]+)',
-                 r'<[^>]+\bdata-rt=["\']top-area-play-button["\'][^>]+\bhref=["\'][^"\']*video/[\w-]+/[^"\']*\b(?:modalId|id)=([\w-]+)'),
-                webpage, 'video id')
+        info_dict = self._extract_by_video_id(svt_id)

-        info_dict = self._extract_by_video_id(svt_id, webpage)
-        info_dict['thumbnail'] = thumbnail
+        if not info_dict.get('title'):
+            info_dict['title'] = re.sub(r'\s*\|\s*.+?$', '', self._og_search_title(webpage))
+        if not info_dict.get('thumbnail'):
+            info_dict['thumbnail'] = self._og_search_thumbnail(webpage)
+        if not info_dict.get('description'):
+            info_dict['description'] = traverse_obj(details, ('description', {str}))

         return info_dict


-class SVTSeriesIE(SVTPlayBaseIE):
+class SVTSeriesIE(SVTBaseIE):
+    IE_NAME = 'svt:play:series'
     _VALID_URL = r'https?://(?:www\.)?svtplay\.se/(?P<id>[^/?&#]+)(?:.+?\btab=(?P<season_slug>[^&#]+))?'
     _TESTS = [{
         'url': 'https://www.svtplay.se/rederiet',
         'info_dict': {
-            'id': '14445680',
+            'id': 'jpmQYgn',
             'title': 'Rederiet',
-            'description': 'md5:d9fdfff17f5d8f73468176ecd2836039',
+            'description': 'md5:f71122f7cf2e52b643e75915e04cb83d',
         },
         'playlist_mincount': 318,
     }, {
-        'url': 'https://www.svtplay.se/rederiet?tab=season-2-14445680',
+        'url': 'https://www.svtplay.se/rederiet?tab=season-2-jpmQYgn',
         'info_dict': {
-            'id': 'season-2-14445680',
+            'id': 'season-2-jpmQYgn',
             'title': 'Rederiet - Säsong 2',
-            'description': 'md5:d9fdfff17f5d8f73468176ecd2836039',
+            'description': 'md5:f71122f7cf2e52b643e75915e04cb83d',
         },
         'playlist_mincount': 12,
     }]

     @classmethod
     def suitable(cls, url):
-        return False if SVTIE.suitable(url) or SVTPlayIE.suitable(url) else super().suitable(url)
+        return False if SVTPlayIE.suitable(url) else super().suitable(url)

     def _real_extract(self, url):
         series_slug, season_id = self._match_valid_url(url).groups()
@@ -386,6 +339,7 @@ class SVTSeriesIE(SVTPlayBaseIE):


 class SVTPageIE(SVTBaseIE):
+    IE_NAME = 'svt:page'
     _VALID_URL = r'https?://(?:www\.)?svt\.se/(?:[^/?#]+/)*(?P<id>[^/?&#]+)'
     _TESTS = [{
         'url': 'https://www.svt.se/nyheter/lokalt/skane/viktor-18-forlorade-armar-och-ben-i-sepsis-vill-ateruppta-karaten-och-bli-svetsare',
@@ -463,7 +417,7 @@ class SVTPageIE(SVTBaseIE):

     @classmethod
     def suitable(cls, url):
-        return False if SVTIE.suitable(url) or SVTPlayIE.suitable(url) else super().suitable(url)
+        return False if SVTPlayIE.suitable(url) else super().suitable(url)

     def _real_extract(self, url):
         display_id = self._match_id(url)
@@ -471,8 +425,7 @@ class SVTPageIE(SVTBaseIE):
         webpage = self._download_webpage(url, display_id)

         title = self._og_search_title(webpage)

-        urql_state = self._search_json(
-            r'window\.svt\.(?:nyh\.)?urqlState\s*=', webpage, 'json data', display_id)
+        urql_state = self._search_json(r'urqlState\s*[=:]', webpage, 'json data', display_id)

         data = traverse_obj(urql_state, (..., 'data', {str}, {json.loads}), get_all=False) or {}
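
For illustration only (not part of the patch): the `svt_id` lookup above prefers the 'Default' accessibility variant and otherwise falls back to the first listed video (the site also publishes 'AudioDescribed' and 'SignInterpreted' renditions). A plain-Python sketch of that selection rule, with invented sample data:

def pick_svt_id(videos):
    default = next((v for v in videos if v.get('accessibility') == 'Default'), None)
    chosen = default or (videos[0] if videos else None)
    return (chosen or {}).get('svtId')

videos = [
    {'accessibility': 'AudioDescribed', 'svtId': 'jvXAGVb-ad'},
    {'accessibility': 'Default', 'svtId': 'jvXAGVb'},
]
assert pick_svt_id(videos) == 'jvXAGVb'
assert pick_svt_id(videos[:1]) == 'jvXAGVb-ad'  # no Default variant: take the first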


@@ -1,58 +0,0 @@
-from .adobepass import AdobePassIE
-from ..utils import (
-    smuggle_url,
-    update_url_query,
-)
-
-
-class SyfyIE(AdobePassIE):
-    _WORKING = False
-    _VALID_URL = r'https?://(?:www\.)?syfy\.com/(?:[^/]+/)?videos/(?P<id>[^/?#]+)'
-    _TESTS = [{
-        'url': 'http://www.syfy.com/theinternetruinedmylife/videos/the-internet-ruined-my-life-season-1-trailer',
-        'info_dict': {
-            'id': '2968097',
-            'ext': 'mp4',
-            'title': 'The Internet Ruined My Life: Season 1 Trailer',
-            'description': 'One tweet, one post, one click, can destroy everything.',
-            'uploader': 'NBCU-MPAT',
-            'upload_date': '20170113',
-            'timestamp': 1484345640,
-        },
-        'params': {
-            # m3u8 download
-            'skip_download': True,
-        },
-        'add_ie': ['ThePlatform'],
-        'skip': 'Redirects to main page',
-    }]
-
-    def _real_extract(self, url):
-        display_id = self._match_id(url)
-        webpage = self._download_webpage(url, display_id)
-        syfy_mpx = next(iter(self._parse_json(self._search_regex(
-            r'jQuery\.extend\(Drupal\.settings\s*,\s*({.+?})\);', webpage, 'drupal settings'),
-            display_id)['syfy']['syfy_mpx'].values()))
-        video_id = syfy_mpx['mpxGUID']
-        title = syfy_mpx['episodeTitle']
-        query = {
-            'mbr': 'true',
-            'manifest': 'm3u',
-        }
-        if syfy_mpx.get('entitlement') == 'auth':
-            resource = self._get_mvpd_resource(
-                'syfy', title, video_id,
-                syfy_mpx.get('mpxRating', 'TV-14'))
-            query['auth'] = self._extract_mvpd_auth(
-                url, video_id, 'syfy', resource)
-
-        return {
-            '_type': 'url_transparent',
-            'ie_key': 'ThePlatform',
-            'url': smuggle_url(update_url_query(
-                self._proto_relative_url(syfy_mpx['releaseURL']), query),
-                {'force_smil_url': True}),
-            'title': title,
-            'id': video_id,
-            'display_id': display_id,
-        }


@@ -32,6 +32,10 @@ class TBSIE(TurnerBaseIE):
         'url': 'http://www.tntdrama.com/movies/star-wars-a-new-hope',
         'only_matching': True,
     }]
+    _SOFTWARE_STATEMENT_MAP = {
+        'tbs': 'eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJkZTA0NTYxZS1iMTFhLTRlYTgtYTg5NC01NjI3MGM1NmM2MWIiLCJuYmYiOjE1MzcxODkzOTAsImlzcyI6ImF1dGguYWRvYmUuY29tIiwiaWF0IjoxNTM3MTg5MzkwfQ.Z7ny66kaqNDdCHf9Y9KsV12LrBxrLkGGxlYe2XGm6qsw2T-k1OCKC1TMzeqiZP735292MMRAQkcJDKrMIzNbAuf9nCdIcv4kE1E2nqUnjPMBduC1bHffZp8zlllyrN2ElDwM8Vhwv_5nElLRwWGEt0Kaq6KJAMZA__WDxKWC18T-wVtsOZWXQpDqO7nByhfj2t-Z8c3TUNVsA_wHgNXlkzJCZ16F2b7yGLT5ZhLPupOScd3MXC5iPh19HSVIok22h8_F_noTmGzmMnIRQi6bWYWK2zC7TQ_MsYHfv7V6EaG5m1RKZTV6JAwwoJQF_9ByzarLV1DGwZxD9-eQdqswvg',
+        'tntdrama': 'eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiIwOTMxYTU4OS1jZjEzLTRmNjMtYTJmYy03MzhjMjE1NWU5NjEiLCJuYmYiOjE1MzcxOTA4MjcsImlzcyI6ImF1dGguYWRvYmUuY29tIiwiaWF0IjoxNTM3MTkwODI3fQ.AucKvtws7oekTXi80_zX4-BlgJD9GLvlOI9FlBCjdlx7Pa3eJ0AqbogynKMiatMbnLOTMHGjd7tTiq422unmZjBz70dhePAe9BbW0dIo7oQ57vZ-VBYw_tWYRPmON61MwAbLVlqROD3n_zURs85S8TlkQx9aNx9x_riGGELjd8l05CVa_pOluNhYvuIFn6wmrASOKI1hNEblBDWh468UWP571-fe4zzi0rlYeeHd-cjvtWvOB3bQsWrUVbK4pRmqvzEH59j0vNF-ihJF9HncmUicYONe47Mib3elfMok23v4dB1_UAlQY_oawfNcynmEnJQCcqFmbHdEwTW6gMiYsA',
+    }

     def _real_extract(self, url):
         site, path, display_id = self._match_valid_url(url).groups()
@@ -48,7 +52,7 @@ class TBSIE(TurnerBaseIE):
             drupal_settings['ngtv_token_url']).query)

         info = self._extract_ngtv_info(
-            media_id, tokenizer_query, {
+            media_id, tokenizer_query, self._SOFTWARE_STATEMENT_MAP[site], {
                 'url': url,
                 'site_name': site[:3].upper(),
                 'auth_required': video_data.get('authRequired') == '1' or is_live,


@@ -156,6 +156,7 @@ class TeamcocoIE(TeamcocoBaseIE):


 class ConanClassicIE(TeamcocoBaseIE):
+    _WORKING = False
     _VALID_URL = r'https?://(?:(?:www\.)?conanclassic|conan25\.teamcoco)\.com/(?P<id>([^/]+/)*[^/?#]+)'
     _TESTS = [{
         'url': 'https://conanclassic.com/video/ice-cube-kevin-hart-conan-share-lyft',
@@ -263,7 +264,7 @@ class ConanClassicIE(TeamcocoBaseIE):
             info.update(self._extract_ngtv_info(media_id, {
                 'accessToken': token,
                 'accessTokenType': 'jws',
-            }))
+            }, None))  # TODO: the None arg needs to be the AdobePass software_statement
         else:
             formats, subtitles = self._get_formats_and_subtitles(
                 traverse_obj(response, ('data', 'findRecordVideoMetadata')), video_id)


@@ -63,6 +63,17 @@ class TelecincoBaseIE(InfoExtractor):
             'http_headers': headers,
         }

+    def _download_akamai_webpage(self, url, display_id):
+        try:  # yt-dlp's default user-agents are too old and blocked by akamai
+            return self._download_webpage(url, display_id, headers={
+                'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; rv:136.0) Gecko/20100101 Firefox/136.0',
+            })
+        except ExtractorError as e:
+            if not isinstance(e.cause, HTTPError) or e.cause.status != 403:
+                raise
+            # Retry with impersonation if hardcoded UA is insufficient to bypass akamai
+            return self._download_webpage(url, display_id, impersonate=True)
+

 class TelecincoIE(TelecincoBaseIE):
     IE_DESC = 'telecinco.es, cuatro.com and mediaset.es'
@@ -140,7 +151,7 @@ class TelecincoIE(TelecincoBaseIE):

     def _real_extract(self, url):
         display_id = self._match_id(url)
-        webpage = self._download_webpage(url, display_id)
+        webpage = self._download_akamai_webpage(url, display_id)
         article = self._search_json(
             r'window\.\$REACTBASE_STATE\.article(?:_multisite)?\s*=',
             webpage, 'article', display_id)['article']
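The same retry shape can be reproduced outside the extractor; a minimal sketch using requests (a third-party library standing in for yt-dlp's networking stack):

import requests

FIREFOX_UA = 'Mozilla/5.0 (Windows NT 10.0; rv:136.0) Gecko/20100101 Firefox/136.0'

def fetch_akamai_page(url):
    # First attempt: a modern browser user-agent is usually enough
    resp = requests.get(url, headers={'User-Agent': FIREFOX_UA})
    if resp.status_code != 403:
        resp.raise_for_status()
        return resp.text
    # 403 means akamai still rejected the request; yt-dlp retries with TLS impersonation here
    raise RuntimeError('Blocked by akamai; retry with an impersonating client (e.g. curl_cffi)')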

View File

@@ -6,32 +6,32 @@ from ..utils import int_or_none, traverse_obj, url_or_none, urljoin

 class TenPlayIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?10play\.com\.au/(?:[^/]+/)+(?P<id>tpv\d{6}[a-z]{5})'
+    IE_NAME = '10play'
+    _VALID_URL = r'https?://(?:www\.)?10play\.com\.au/(?:[^/?#]+/)+(?P<id>tpv\d{6}[a-z]{5})'
     _NETRC_MACHINE = '10play'
     _TESTS = [{
-        'url': 'https://10play.com.au/neighbours/web-extras/season-41/heres-a-first-look-at-mischa-bartons-neighbours-debut/tpv230911hyxnz',
+        # Geo-restricted to Australia
+        'url': 'https://10play.com.au/australian-survivor/web-extras/season-10-brains-v-brawn-ii/myless-journey/tpv250414jdmtf',
         'info_dict': {
-            'id': '6336940246112',
+            'id': '7440980000013868',
             'ext': 'mp4',
-            'title': 'Here\'s A First Look At Mischa Barton\'s Neighbours Debut',
-            'alt_title': 'Here\'s A First Look At Mischa Barton\'s Neighbours Debut',
-            'description': 'Neighbours Premieres Monday, September 18 At 4:30pm On 10 And 10 Play And 6:30pm On 10 Peach',
-            'duration': 74,
-            'season': 'Season 41',
-            'season_number': 41,
-            'series': 'Neighbours',
-            'thumbnail': r're:https://.*\.jpg',
+            'title': 'Myles\'s Journey',
+            'alt_title': 'Myles\'s Journey',
+            'description': 'Relive Myles\'s epic Brains V Brawn II journey to reach the game\'s final two',
             'uploader': 'Channel 10',
-            'age_limit': 15,
-            'timestamp': 1694386800,
-            'upload_date': '20230910',
             'uploader_id': '2199827728001',
+            'age_limit': 15,
+            'duration': 249,
+            'thumbnail': r're:https://.+/.+\.jpg',
+            'series': 'Australian Survivor',
+            'season': 'Season 10',
+            'season_number': 10,
+            'timestamp': 1744629420,
+            'upload_date': '20250414',
         },
-        'params': {
-            'skip_download': True,
-        },
-        'skip': 'Only available in Australia',
+        'params': {'skip_download': 'm3u8'},
     }, {
+        # Geo-restricted to Australia
         'url': 'https://10play.com.au/neighbours/episodes/season-42/episode-9107/tpv240902nzqyp',
         'info_dict': {
             'id': '9000000000091177',
@@ -45,17 +45,38 @@ class TenPlayIE(InfoExtractor):
             'season': 'Season 42',
             'season_number': 42,
             'series': 'Neighbours',
-            'thumbnail': r're:https://.*\.jpg',
+            'thumbnail': r're:https://.+/.+\.jpg',
             'age_limit': 15,
             'timestamp': 1725517860,
             'upload_date': '20240905',
             'uploader': 'Channel 10',
             'uploader_id': '2199827728001',
         },
-        'params': {
-            'skip_download': True,
-        },
-        'skip': 'Only available in Australia',
+        'params': {'skip_download': 'm3u8'},
+    }, {
+        # Geo-restricted to Australia; upgrading the m3u8 quality fails and we need the fallback
+        'url': 'https://10play.com.au/tiny-chef-show/episodes/season-1/episode-2/tpv240228pofvt',
+        'info_dict': {
+            'id': '9000000000084116',
+            'ext': 'mp4',
+            'uploader': 'Channel 10',
+            'uploader_id': '2199827728001',
+            'duration': 1297,
+            'title': 'The Tiny Chef Show - S1 Ep. 2',
+            'alt_title': 'S1 Ep. 2 - Popcorn/banana',
+            'description': 'md5:d4758b52b5375dfaa67a78261dcb5763',
+            'age_limit': 0,
+            'series': 'The Tiny Chef Show',
+            'season_number': 1,
+            'episode_number': 2,
+            'timestamp': 1747957740,
+            'thumbnail': r're:https://.+/.+\.jpg',
+            'upload_date': '20250522',
+            'season': 'Season 1',
+            'episode': 'Episode 2',
+        },
+        'params': {'skip_download': 'm3u8'},
+        'expected_warnings': ['Failed to download m3u8 information: HTTP Error 502'],
     }, {
         'url': 'https://10play.com.au/how-to-stay-married/web-extras/season-1/terrys-talks-ep-1-embracing-change/tpv190915ylupc',
         'only_matching': True,
@@ -86,7 +107,10 @@ class TenPlayIE(InfoExtractor):
         if '10play-not-in-oz' in m3u8_url:
             self.raise_geo_restricted(countries=['AU'])
         # Attempt to get a higher quality stream
-        m3u8_url = m3u8_url.replace(',150,75,55,0000', ',300,150,75,55,0000')
-        formats = self._extract_m3u8_formats(m3u8_url, content_id, 'mp4')
+        formats = self._extract_m3u8_formats(
+            m3u8_url.replace(',150,75,55,0000', ',300,150,75,55,0000'),
+            content_id, 'mp4', fatal=False)
+        if not formats:
+            formats = self._extract_m3u8_formats(m3u8_url, content_id, 'mp4')

         return {
@@ -112,21 +136,22 @@ class TenPlayIE(InfoExtractor):

 class TenPlaySeasonIE(InfoExtractor):
+    IE_NAME = '10play:season'
     _VALID_URL = r'https?://(?:www\.)?10play\.com\.au/(?P<show>[^/?#]+)/episodes/(?P<season>[^/?#]+)/?(?:$|[?#])'
     _TESTS = [{
-        'url': 'https://10play.com.au/masterchef/episodes/season-14',
+        'url': 'https://10play.com.au/masterchef/episodes/season-15',
         'info_dict': {
-            'title': 'Season 14',
-            'id': 'MjMyOTIy',
+            'title': 'Season 15',
+            'id': 'MTQ2NjMxOQ==',
         },
-        'playlist_mincount': 64,
+        'playlist_mincount': 50,
     }, {
-        'url': 'https://10play.com.au/the-bold-and-the-beautiful-fast-tracked/episodes/season-2022',
+        'url': 'https://10play.com.au/the-bold-and-the-beautiful-fast-tracked/episodes/season-2024',
         'info_dict': {
-            'title': 'Season 2022',
+            'title': 'Season 2024',
             'id': 'Mjc0OTIw',
         },
-        'playlist_mincount': 256,
+        'playlist_mincount': 159,
     }]

     def _entries(self, load_more_url, display_id=None):
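The new format extraction is a try-then-fall-back; a sketch of the same control flow with the extractor call abstracted out (extract_m3u8 stands in for self._extract_m3u8_formats):

def extract_formats(extract_m3u8, m3u8_url, content_id):
    # Try the rewritten, higher-bitrate manifest first, tolerating failure (e.g. the HTTP 502 above)
    formats = extract_m3u8(
        m3u8_url.replace(',150,75,55,0000', ',300,150,75,55,0000'),
        content_id, 'mp4', fatal=False)
    # Otherwise fall back to the original manifest, this time letting errors propagate
    return formats or extract_m3u8(m3u8_url, content_id, 'mp4')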

View File

@@ -4,7 +4,6 @@ import re
 import time

 from .adobepass import AdobePassIE
-from .once import OnceIE
 from ..networking import HEADRequest, Request
 from ..utils import (
     ExtractorError,
@@ -13,11 +12,13 @@ from ..utils import (
     float_or_none,
     int_or_none,
     mimetype2ext,
+    parse_age_limit,
     parse_qs,
     traverse_obj,
     unsmuggle_url,
     update_url,
     update_url_query,
+    url_or_none,
     urlhandle_detect_ext,
     xpath_with_ns,
 )
@@ -26,7 +27,7 @@ default_ns = 'http://www.w3.org/2005/SMIL21/Language'
 _x = lambda p: xpath_with_ns(p, {'smil': default_ns})


-class ThePlatformBaseIE(OnceIE):
+class ThePlatformBaseIE(AdobePassIE):
     _TP_TLD = 'com'

     def _extract_theplatform_smil(self, smil_url, video_id, note='Downloading SMIL data'):
@@ -54,9 +55,6 @@ class ThePlatformBaseIE(OnceIE):
         formats = []
         for _format in smil_formats:
-            if OnceIE.suitable(_format['url']):
-                formats.extend(self._extract_once_formats(_format['url']))
-            else:
-                media_url = _format['url']
-                if determine_ext(media_url) == 'm3u8':
-                    hdnea2 = self._get_cookies(media_url).get('hdnea2')
+            media_url = _format['url']
+            if determine_ext(media_url) == 'm3u8':
+                hdnea2 = self._get_cookies(media_url).get('hdnea2')
@@ -67,69 +65,60 @@ class ThePlatformBaseIE(OnceIE):
         return formats, subtitles

-    def _download_theplatform_metadata(self, path, video_id):
-        info_url = f'http://link.theplatform.{self._TP_TLD}/s/{path}?format=preview'
-        return self._download_json(info_url, video_id)
+    def _download_theplatform_metadata(self, path, video_id, fatal=True):
+        return self._download_json(
+            f'https://link.theplatform.{self._TP_TLD}/s/{path}', video_id,
+            fatal=fatal, query={'format': 'preview'}) or {}

-    def _parse_theplatform_metadata(self, info):
-        subtitles = {}
-        captions = info.get('captions')
-        if isinstance(captions, list):
-            for caption in captions:
-                lang, src, mime = caption.get('lang', 'en'), caption.get('src'), caption.get('type')
-                subtitles.setdefault(lang, []).append({
-                    'ext': mimetype2ext(mime),
-                    'url': src,
-                })
-
-        duration = info.get('duration')
-        tp_chapters = info.get('chapters', [])
-        chapters = []
-        if tp_chapters:
-            def _add_chapter(start_time, end_time):
-                start_time = float_or_none(start_time, 1000)
-                end_time = float_or_none(end_time, 1000)
-                if start_time is None or end_time is None:
-                    return
-                chapters.append({
-                    'start_time': start_time,
-                    'end_time': end_time,
-                })
-
-            for chapter in tp_chapters[:-1]:
-                _add_chapter(chapter.get('startTime'), chapter.get('endTime'))
-            _add_chapter(tp_chapters[-1].get('startTime'), tp_chapters[-1].get('endTime') or duration)
-
-        def extract_site_specific_field(field):
-            # A number of sites have custom-prefixed keys, e.g. 'cbc$seasonNumber'
-            return traverse_obj(info, lambda k, v: v and k.endswith(f'${field}'), get_all=False)
-
-        return {
-            'title': info['title'],
-            'subtitles': subtitles,
-            'description': info['description'],
-            'thumbnail': info['defaultThumbnailUrl'],
-            'duration': float_or_none(duration, 1000),
-            'timestamp': int_or_none(info.get('pubDate'), 1000) or None,
-            'uploader': info.get('billingCode'),
-            'chapters': chapters,
-            'creator': traverse_obj(info, ('author', {str})) or None,
-            'categories': traverse_obj(info, (
-                'categories', lambda _, v: v.get('label') in ('category', None), 'name', {str})) or None,
-            'tags': traverse_obj(info, ('keywords', {lambda x: re.split(r'[;,]\s?', x) if x else None})),
-            'location': extract_site_specific_field('region'),
-            'series': extract_site_specific_field('show') or extract_site_specific_field('seriesTitle'),
-            'season_number': int_or_none(extract_site_specific_field('seasonNumber')),
-            'episode_number': int_or_none(extract_site_specific_field('episodeNumber')),
-            'media_type': extract_site_specific_field('programmingType') or extract_site_specific_field('type'),
-        }
+    @staticmethod
+    def _parse_theplatform_metadata(tp_metadata):
+        def site_specific_filter(*fields):
+            return lambda k, v: v and k.endswith(tuple(f'${f}' for f in fields))
+
+        info = traverse_obj(tp_metadata, {
+            'title': ('title', {str}),
+            'episode': ('title', {str}),
+            'description': ('description', {str}),
+            'thumbnail': ('defaultThumbnailUrl', {url_or_none}),
+            'duration': ('duration', {float_or_none(scale=1000)}),
+            'timestamp': ('pubDate', {float_or_none(scale=1000)}),
+            'uploader': ('billingCode', {str}),
+            'creators': ('author', {str}, filter, all, filter),
+            'categories': (
+                'categories', lambda _, v: v.get('label') in ['category', None],
+                'name', {str}, filter, all, filter),
+            'tags': ('keywords', {str}, filter, {lambda x: re.split(r'[;,]\s?', x)}, filter),
+            'age_limit': ('ratings', ..., 'rating', {parse_age_limit}, any),
+            'season_number': (site_specific_filter('seasonNumber'), {int_or_none}, any),
+            'episode_number': (site_specific_filter('episodeNumber', 'airOrder'), {int_or_none}, any),
+            'series': (site_specific_filter('show', 'seriesTitle', 'seriesShortTitle'), (None, ...), {str}, any),
+            'location': (site_specific_filter('region'), {str}, any),
+            'media_type': (site_specific_filter('programmingType', 'type'), {str}, any),
+        })
+
+        chapters = traverse_obj(tp_metadata, ('chapters', ..., {
+            'start_time': ('startTime', {float_or_none(scale=1000)}),
+            'end_time': ('endTime', {float_or_none(scale=1000)}),
+        }))
+        # Ignore pointless single chapters from short videos that span the entire video's duration
+        if len(chapters) > 1 or traverse_obj(chapters, (0, 'end_time')):
+            info['chapters'] = chapters
+
+        info['subtitles'] = {}
+        for caption in traverse_obj(tp_metadata, ('captions', lambda _, v: url_or_none(v['src']))):
+            info['subtitles'].setdefault(caption.get('lang') or 'en', []).append({
+                'url': caption['src'],
+                'ext': mimetype2ext(caption.get('type')),
+            })
+
+        return info

     def _extract_theplatform_metadata(self, path, video_id):
         info = self._download_theplatform_metadata(path, video_id)
         return self._parse_theplatform_metadata(info)


-class ThePlatformIE(ThePlatformBaseIE, AdobePassIE):
+class ThePlatformIE(ThePlatformBaseIE):
     _VALID_URL = r'''(?x)
         (?:https?://(?:link|player)\.theplatform\.com/[sp]/(?P<provider_id>[^/]+)/
            (?:(?:(?:[^/]+/)+select/)?(?P<media>media/(?:guid/\d+/)?)?|(?P<config>(?:[^/\?]+/(?:swf|config)|onsite)/select/))?
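For context on the '$'-suffix matching used by site_specific_filter: thePlatform feeds carry site-namespaced custom fields, and both the old and new code match on the suffix only. A runnable sketch against an invented payload, using the old helper's exact lookup:

from yt_dlp.utils import traverse_obj

tp_metadata = {'title': 'Example', 'cbc$seasonNumber': 4, 'cbc$show': 'Example Show'}

def site_specific_field(info, field):
    # Matches e.g. 'cbc$seasonNumber' regardless of the site prefix
    return traverse_obj(info, lambda k, v: v and k.endswith(f'${field}'), get_all=False)

print(site_specific_field(tp_metadata, 'seasonNumber'))  # 4
print(site_specific_field(tp_metadata, 'show'))  # Example Show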

yt_dlp/extractor/toutiao.py (new file, 121 lines)
View File

@@ -0,0 +1,121 @@
import json
import urllib.parse
from .common import InfoExtractor
from ..utils import (
float_or_none,
int_or_none,
str_or_none,
try_call,
url_or_none,
)
from ..utils.traversal import find_element, traverse_obj
class ToutiaoIE(InfoExtractor):
IE_NAME = 'toutiao'
IE_DESC = '今日头条'
_VALID_URL = r'https?://www\.toutiao\.com/video/(?P<id>\d+)/?(?:[?#]|$)'
_TESTS = [{
'url': 'https://www.toutiao.com/video/7505382061495176511/',
'info_dict': {
'id': '7505382061495176511',
'ext': 'mp4',
'title': '新疆多地现不明飞行物,目击者称和月亮一样亮,几秒内突然加速消失,气象部门回应',
'comment_count': int,
'duration': 9.753,
'like_count': int,
'release_date': '20250517',
'release_timestamp': 1747483344,
'thumbnail': r're:https?://p\d+-sign\.toutiaoimg\.com/.+$',
'uploader': '极目新闻',
'uploader_id': 'MS4wLjABAAAAeateBb9Su8I3MJOZozmvyzWktmba5LMlliRDz1KffnM',
'view_count': int,
},
}, {
'url': 'https://www.toutiao.com/video/7479446610359878153/',
'info_dict': {
'id': '7479446610359878153',
'ext': 'mp4',
'title': '小伙竟然利用两块磁铁制作成磁力减震器,简直太有创意了!',
'comment_count': int,
'duration': 118.374,
'like_count': int,
'release_date': '20250308',
'release_timestamp': 1741444368,
'thumbnail': r're:https?://p\d+-sign\.toutiaoimg\.com/.+$',
'uploader': '小莉创意发明',
'uploader_id': 'MS4wLjABAAAA4f7d4mwtApALtHIiq-QM20dwXqe32NUz0DeWF7wbHKw',
'view_count': int,
},
}]
def _real_initialize(self):
if self._get_cookies('https://www.toutiao.com').get('ttwid'):
return
urlh = self._request_webpage(
'https://ttwid.bytedance.com/ttwid/union/register/', None,
'Fetching ttwid', 'Unable to fetch ttwid', headers={
'Content-Type': 'application/json',
}, data=json.dumps({
'aid': 24,
'needFid': False,
'region': 'cn',
'service': 'www.toutiao.com',
'union': True,
}).encode(),
)
if ttwid := try_call(lambda: self._get_cookies(urlh.url)['ttwid'].value):
self._set_cookie('.toutiao.com', 'ttwid', ttwid)
return
self.raise_login_required()
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
video_data = traverse_obj(webpage, (
{find_element(tag='script', id='RENDER_DATA')},
{urllib.parse.unquote}, {json.loads}, 'data', 'initialVideo',
))
formats = []
for video in traverse_obj(video_data, (
'videoPlayInfo', 'video_list', lambda _, v: v['main_url'],
)):
formats.append({
'url': video['main_url'],
**traverse_obj(video, ('video_meta', {
'acodec': ('audio_profile', {str}),
'asr': ('audio_sample_rate', {int_or_none}),
'audio_channels': ('audio_channels', {float_or_none}, {int_or_none}),
'ext': ('vtype', {str}),
'filesize': ('size', {int_or_none}),
'format_id': ('definition', {str}),
'fps': ('fps', {int_or_none}),
'height': ('vheight', {int_or_none}),
'tbr': ('real_bitrate', {float_or_none(scale=1000)}),
'vcodec': ('codec_type', {str}),
'width': ('vwidth', {int_or_none}),
})),
})
return {
'id': video_id,
'formats': formats,
**traverse_obj(video_data, {
'comment_count': ('commentCount', {int_or_none}),
'duration': ('videoPlayInfo', 'video_duration', {float_or_none}),
'like_count': ('repinCount', {int_or_none}),
'release_timestamp': ('publishTime', {int_or_none}),
'thumbnail': (('poster', 'coverUrl'), {url_or_none}, any),
'title': ('title', {str}),
'uploader': ('userInfo', 'name', {str}),
'uploader_id': ('userInfo', 'userId', {str_or_none}),
'view_count': ('playCount', {int_or_none}),
'webpage_url': ('detailUrl', {url_or_none}),
}),
}
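The ttwid bootstrap above can be exercised standalone; a sketch with requests (a stand-in for yt-dlp's networking stack) against the same endpoint and payload:

import json
import requests

session = requests.Session()
session.post(
    'https://ttwid.bytedance.com/ttwid/union/register/',
    headers={'Content-Type': 'application/json'},
    data=json.dumps({
        'aid': 24,
        'needFid': False,
        'region': 'cn',
        'service': 'www.toutiao.com',
        'union': True,
    }).encode(),
)
# On success, the registration response sets a 'ttwid' cookie on the session
print('ttwid' in session.cookies)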

View File

@@ -20,6 +20,7 @@ class TruTVIE(TurnerBaseIE):
             'skip_download': True,
         },
     }
+    _SOFTWARE_STATEMENT = 'eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhYzQyOTkwMi0xMDYzLTQyNTQtYWJlYS1iZTY2ODM4MTVmZGIiLCJuYmYiOjE1MzcxOTA4NjgsImlzcyI6ImF1dGguYWRvYmUuY29tIiwiaWF0IjoxNTM3MTkwODY4fQ.ewXl5LDMDvvx3nDXV4jCdSwUq_sOluKoOVsIjznAo6Zo4zrGe9rjlZ9DOmQKW66g6VRMexJsJ5vM1EkY8TC5-YcQw_BclK1FPGO1rH3Wf7tX_l0b1BVbSJQKIj9UgqDp_QbGcBXz24kN4So3U22mhs6di9PYyyfG68ccKL2iRprcVKWCslIHwUF-T7FaEqb0K57auilxeW1PONG2m-lIAcZ62DUwqXDWvw0CRoWI08aVVqkkhnXaSsQfLs5Ph1Pfh9Oq3g_epUm9Ss45mq6XM7gbOb5omTcKLADRKK-PJVB_JXnZnlsXbG0ttKE1cTKJ738qu7j4aipYTf-W0nKF5Q'

     def _real_extract(self, url):
         series_slug, clip_slug, video_id = self._match_valid_url(url).groups()
@@ -39,7 +40,7 @@ class TruTVIE(TurnerBaseIE):
         title = video_data['title'].strip()
         info = self._extract_ngtv_info(
-            media_id, {}, {
+            media_id, {}, self._SOFTWARE_STATEMENT, {
                 'url': url,
                 'site_name': 'truTV',
                 'auth_required': video_data.get('isAuthRequired'),

View File

@@ -22,7 +22,7 @@ class TurnerBaseIE(AdobePassIE):
     def _extract_timestamp(self, video_data):
         return int_or_none(xpath_attr(video_data, 'dateCreated', 'uts'))

-    def _add_akamai_spe_token(self, tokenizer_src, video_url, content_id, ap_data, custom_tokenizer_query=None):
+    def _add_akamai_spe_token(self, tokenizer_src, video_url, content_id, ap_data, software_statement, custom_tokenizer_query=None):
         secure_path = self._search_regex(r'https?://[^/]+(.+/)', video_url, 'secure path') + '*'
         token = self._AKAMAI_SPE_TOKEN_CACHE.get(secure_path)
         if not token:
@@ -34,7 +34,8 @@ class TurnerBaseIE(AdobePassIE):
             else:
                 query['videoId'] = content_id
             if ap_data.get('auth_required'):
-                query['accessToken'] = self._extract_mvpd_auth(ap_data['url'], content_id, ap_data['site_name'], ap_data['site_name'])
+                query['accessToken'] = self._extract_mvpd_auth(
+                    ap_data['url'], content_id, ap_data['site_name'], ap_data['site_name'], software_statement)
             auth = self._download_xml(
                 tokenizer_src, content_id, query=query)
             error_msg = xpath_text(auth, 'error/msg')
@@ -46,7 +47,7 @@ class TurnerBaseIE(AdobePassIE):
             self._AKAMAI_SPE_TOKEN_CACHE[secure_path] = token
         return video_url + '?hdnea=' + token

-    def _extract_cvp_info(self, data_src, video_id, path_data={}, ap_data={}, fatal=False):
+    def _extract_cvp_info(self, data_src, video_id, software_statement, path_data={}, ap_data={}, fatal=False):
         video_data = self._download_xml(
             data_src, video_id,
             transform_source=lambda s: fix_xml_ampersands(s).strip(),
@@ -101,7 +102,7 @@ class TurnerBaseIE(AdobePassIE):
                     video_url = self._add_akamai_spe_token(
                         secure_path_data['tokenizer_src'],
                         secure_path_data['media_src'] + video_url,
-                        content_id, ap_data)
+                        content_id, ap_data, software_statement)
                 elif not re.match('https?://', video_url):
                     base_path_data = path_data.get(ext, path_data.get('default', {}))
                     media_src = base_path_data.get('media_src')
@@ -215,10 +216,12 @@ class TurnerBaseIE(AdobePassIE):
             'is_live': is_live,
         }

-    def _extract_ngtv_info(self, media_id, tokenizer_query, ap_data=None):
+    def _extract_ngtv_info(self, media_id, tokenizer_query, software_statement, ap_data=None):
+        if not isinstance(ap_data, dict):
+            ap_data = {}
         is_live = ap_data.get('is_live')
         streams_data = self._download_json(
-            f'http://medium.ngtv.io/media/{media_id}/tv',
+            f'https://medium.ngtv.io/media/{media_id}/tv',
             media_id)['media']['tv']
         duration = None
         chapters = []
@@ -230,8 +233,8 @@ class TurnerBaseIE(AdobePassIE):
                 continue
             if stream_data.get('playlistProtection') == 'spe':
                 m3u8_url = self._add_akamai_spe_token(
-                    'http://token.ngtv.io/token/token_spe',
-                    m3u8_url, media_id, ap_data or {}, tokenizer_query)
+                    'https://token.ngtv.io/token/token_spe',
+                    m3u8_url, media_id, ap_data, software_statement, tokenizer_query)
             formats.extend(self._extract_m3u8_formats(
                 m3u8_url, media_id, 'mp4', m3u8_id='hls', live=is_live, fatal=False))

View File

@@ -1,4 +1,5 @@
 import base64
+import hashlib
 import itertools
 import re

@@ -16,6 +17,7 @@ from ..utils import (
     str_to_int,
     try_get,
     unified_timestamp,
+    update_url_query,
     url_or_none,
     urlencode_postdata,
     urljoin,
@@ -171,6 +173,10 @@ class TwitCastingIE(InfoExtractor):
             'player': 'pc_web',
         })

+        password_params = {
+            'word': hashlib.md5(video_password.encode()).hexdigest(),
+        } if video_password else None
+
         formats = []
         # low: 640x360, medium: 1280x720, high: 1920x1080
         qq = qualities(['low', 'medium', 'high'])
@@ -178,7 +184,7 @@ class TwitCastingIE(InfoExtractor):
             'tc-hls', 'streams', {dict.items}, lambda _, v: url_or_none(v[1]),
         )):
             formats.append({
-                'url': m3u8_url,
+                'url': update_url_query(m3u8_url, password_params),
                 'format_id': f'hls-{quality}',
                 'ext': 'mp4',
                 'quality': qq(quality),
@@ -192,7 +198,7 @@ class TwitCastingIE(InfoExtractor):
             'llfmp4', 'streams', {dict.items}, lambda _, v: url_or_none(v[1]),
         )):
             formats.append({
-                'url': ws_url,
+                'url': update_url_query(ws_url, password_params),
                 'format_id': f'ws-{mode}',
                 'ext': 'mp4',
                 'quality': qq(mode),
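The stream password is not sent in the clear: it is md5-hashed and attached as a 'word' query parameter. A sketch of the same transformation (URL and password invented):

import hashlib
from yt_dlp.utils import update_url_query

video_password = 'hunter2'
m3u8_url = 'https://example.invalid/stream/playlist.m3u8'
print(update_url_query(m3u8_url, {
    'word': hashlib.md5(video_password.encode()).hexdigest(),
}))
# https://example.invalid/stream/playlist.m3u8?word=2ab96390c7dbe3439de74d0c9b0b1767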

View File

@@ -187,7 +187,7 @@ class TwitchBaseIE(InfoExtractor):
             'url': thumbnail,
         }] if thumbnail else None

-    def _extract_twitch_m3u8_formats(self, path, video_id, token, signature):
+    def _extract_twitch_m3u8_formats(self, path, video_id, token, signature, live_from_start=False):
         formats = self._extract_m3u8_formats(
             f'{self._USHER_BASE}/{path}/{video_id}.m3u8', video_id, 'mp4', query={
                 'allow_source': 'true',
@@ -204,7 +204,10 @@ class TwitchBaseIE(InfoExtractor):
         for fmt in formats:
             if fmt.get('vcodec') and fmt['vcodec'].startswith('av01'):
                 # mpegts does not yet have proper support for av1
-                fmt['downloader_options'] = {'ffmpeg_args_out': ['-f', 'mp4']}
+                fmt.setdefault('downloader_options', {}).update({'ffmpeg_args_out': ['-f', 'mp4']})
+            if live_from_start:
+                fmt.setdefault('downloader_options', {}).update({'ffmpeg_args': ['-live_start_index', '0']})
+                fmt['is_from_start'] = True

         return formats

@@ -550,7 +553,8 @@ class TwitchVodIE(TwitchBaseIE):
         access_token = self._download_access_token(vod_id, 'video', 'id')

         formats = self._extract_twitch_m3u8_formats(
-            'vod', vod_id, access_token['value'], access_token['signature'])
+            'vod', vod_id, access_token['value'], access_token['signature'],
+            live_from_start=self.get_param('live_from_start'))
         formats.extend(self._extract_storyboard(vod_id, video.get('storyboard'), info.get('duration')))

         self._prefer_source(formats)
@@ -633,6 +637,10 @@ class TwitchPlaylistBaseIE(TwitchBaseIE):
     _PAGE_LIMIT = 100

     def _entries(self, channel_name, *args):
+        """
+        Subclasses must define _make_variables() and _extract_entry(),
+        as well as set _OPERATION_NAME, _ENTRY_KIND, _EDGE_KIND, and _NODE_KIND
+        """
         cursor = None
         variables_common = self._make_variables(channel_name, *args)
         entries_key = f'{self._ENTRY_KIND}s'
@@ -672,7 +680,22 @@ class TwitchPlaylistBaseIE(TwitchBaseIE):
                 break


-class TwitchVideosIE(TwitchPlaylistBaseIE):
+class TwitchVideosBaseIE(TwitchPlaylistBaseIE):
+    _OPERATION_NAME = 'FilterableVideoTower_Videos'
+    _ENTRY_KIND = 'video'
+    _EDGE_KIND = 'VideoEdge'
+    _NODE_KIND = 'Video'
+
+    @staticmethod
+    def _make_variables(channel_name, broadcast_type, sort):
+        return {
+            'channelOwnerLogin': channel_name,
+            'broadcastType': broadcast_type,
+            'videoSort': sort.upper(),
+        }
+
+
+class TwitchVideosIE(TwitchVideosBaseIE):
     _VALID_URL = r'https?://(?:(?:www|go|m)\.)?twitch\.tv/(?P<id>[^/]+)/(?:videos|profile)'

     _TESTS = [{
@@ -751,11 +774,6 @@ class TwitchVideosIE(TwitchPlaylistBaseIE):
         'views': 'Popular',
     }

-    _OPERATION_NAME = 'FilterableVideoTower_Videos'
-    _ENTRY_KIND = 'video'
-    _EDGE_KIND = 'VideoEdge'
-    _NODE_KIND = 'Video'
-
     @classmethod
     def suitable(cls, url):
         return (False
@@ -764,14 +782,6 @@ class TwitchVideosIE(TwitchPlaylistBaseIE):
                      TwitchVideosCollectionsIE))
                 else super().suitable(url))

-    @staticmethod
-    def _make_variables(channel_name, broadcast_type, sort):
-        return {
-            'channelOwnerLogin': channel_name,
-            'broadcastType': broadcast_type,
-            'videoSort': sort.upper(),
-        }
-
     @staticmethod
     def _extract_entry(node):
         return _make_video_result(node)
@@ -919,7 +929,7 @@ class TwitchVideosCollectionsIE(TwitchPlaylistBaseIE):
             playlist_title=f'{channel_name} - Collections')


-class TwitchStreamIE(TwitchBaseIE):
+class TwitchStreamIE(TwitchVideosBaseIE):
     IE_NAME = 'twitch:stream'
     _VALID_URL = r'''(?x)
                     https?://
@@ -982,6 +992,7 @@ class TwitchStreamIE(TwitchBaseIE):
             'skip_download': 'Livestream',
         },
     }]
+    _PAGE_LIMIT = 1

     @classmethod
     def suitable(cls, url):
@@ -995,6 +1006,20 @@ class TwitchStreamIE(TwitchBaseIE):
                      TwitchClipsIE))
                 else super().suitable(url))

+    @staticmethod
+    def _extract_entry(node):
+        if not isinstance(node, dict) or not node.get('id'):
+            return None
+        video_id = node['id']
+        return {
+            '_type': 'url',
+            'ie_key': TwitchVodIE.ie_key(),
+            'id': 'v' + video_id,
+            'url': f'https://www.twitch.tv/videos/{video_id}',
+            'title': node.get('title'),
+            'timestamp': unified_timestamp(node.get('publishedAt')) or 0,
+        }
+
     def _real_extract(self, url):
         channel_name = self._match_id(url).lower()
@@ -1029,6 +1054,16 @@ class TwitchStreamIE(TwitchBaseIE):
         if not stream:
             raise UserNotLive(video_id=channel_name)

+        timestamp = unified_timestamp(stream.get('createdAt'))
+
+        if self.get_param('live_from_start'):
+            self.to_screen(f'{channel_name}: Extracting VOD to download live from start')
+            entry = next(self._entries(channel_name, None, 'time'), None)
+            if entry and entry.pop('timestamp') >= (timestamp or float('inf')):
+                return entry
+            self.report_warning(
+                'Unable to extract the VOD associated with this livestream', video_id=channel_name)
+
         access_token = self._download_access_token(
             channel_name, 'stream', 'channelName')

@@ -1038,7 +1073,6 @@ class TwitchStreamIE(TwitchBaseIE):
         self._prefer_source(formats)

         view_count = stream.get('viewers')
-        timestamp = unified_timestamp(stream.get('createdAt'))

         sq_user = try_get(gql, lambda x: x[1]['data']['user'], dict) or {}
         uploader = sq_user.get('displayName')
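The switch from plain assignment to setdefault().update() matters when a format already carries downloader_options: both the av1 workaround and the new live-from-start args can then coexist on one format dict. A minimal demonstration (sample dict invented):

fmt = {'format_id': 'av01-source'}
# av1-in-mpegts workaround
fmt.setdefault('downloader_options', {}).update({'ffmpeg_args_out': ['-f', 'mp4']})
# live-from-start args merge in without clobbering the earlier entry
fmt.setdefault('downloader_options', {}).update({'ffmpeg_args': ['-live_start_index', '0']})
assert fmt['downloader_options'] == {
    'ffmpeg_args_out': ['-f', 'mp4'],
    'ffmpeg_args': ['-live_start_index', '0'],
}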

View File

@@ -20,7 +20,6 @@ from ..utils import (
     remove_end,
     str_or_none,
     strip_or_none,
-    traverse_obj,
     truncate_string,
     try_call,
     try_get,
@@ -29,6 +28,7 @@ from ..utils import (
     url_or_none,
     xpath_text,
 )
+from ..utils.traversal import require, traverse_obj


 class TwitterBaseIE(InfoExtractor):
@@ -1342,7 +1342,7 @@ class TwitterIE(TwitterBaseIE):
                     'tweet_mode': 'extended',
                 })
             except ExtractorError as e:
-                if not isinstance(e.cause, HTTPError) or not e.cause.status == 429:
+                if not isinstance(e.cause, HTTPError) or e.cause.status != 429:
                     raise
                 self.report_warning('Rate-limit exceeded; falling back to syndication endpoint')
                 status = self._call_syndication_api(twid)
@@ -1596,8 +1596,8 @@ class TwitterAmplifyIE(TwitterBaseIE):

 class TwitterBroadcastIE(TwitterBaseIE, PeriscopeBaseIE):
     IE_NAME = 'twitter:broadcast'
-    _VALID_URL = TwitterBaseIE._BASE_REGEX + r'i/broadcasts/(?P<id>[0-9a-zA-Z]{13})'
+    _VALID_URL = TwitterBaseIE._BASE_REGEX + r'i/(?P<type>broadcasts|events)/(?P<id>\w+)'

     _TESTS = [{
         # untitled Periscope video
         'url': 'https://twitter.com/i/broadcasts/1yNGaQLWpejGj',
@@ -1605,6 +1605,7 @@ class TwitterBroadcastIE(TwitterBaseIE, PeriscopeBaseIE):
             'id': '1yNGaQLWpejGj',
             'ext': 'mp4',
             'title': 'Andrea May Sahouri - Periscope Broadcast',
+            'display_id': '1yNGaQLWpejGj',
             'uploader': 'Andrea May Sahouri',
             'uploader_id': 'andreamsahouri',
             'uploader_url': 'https://twitter.com/andreamsahouri',
@@ -1612,6 +1613,8 @@ class TwitterBroadcastIE(TwitterBaseIE, PeriscopeBaseIE):
             'upload_date': '20200601',
             'thumbnail': r're:^https?://[^?#]+\.jpg\?token=',
             'view_count': int,
+            'concurrent_view_count': int,
+            'live_status': 'was_live',
         },
     }, {
         'url': 'https://twitter.com/i/broadcasts/1ZkKzeyrPbaxv',
@@ -1619,6 +1622,7 @@ class TwitterBroadcastIE(TwitterBaseIE, PeriscopeBaseIE):
             'id': '1ZkKzeyrPbaxv',
             'ext': 'mp4',
             'title': 'Starship | SN10 | High-Altitude Flight Test',
+            'display_id': '1ZkKzeyrPbaxv',
             'uploader': 'SpaceX',
             'uploader_id': 'SpaceX',
             'uploader_url': 'https://twitter.com/SpaceX',
@@ -1626,6 +1630,8 @@ class TwitterBroadcastIE(TwitterBaseIE, PeriscopeBaseIE):
             'upload_date': '20210303',
             'thumbnail': r're:^https?://[^?#]+\.jpg\?token=',
             'view_count': int,
+            'concurrent_view_count': int,
+            'live_status': 'was_live',
         },
     }, {
         'url': 'https://twitter.com/i/broadcasts/1OyKAVQrgzwGb',
@@ -1633,6 +1639,7 @@ class TwitterBroadcastIE(TwitterBaseIE, PeriscopeBaseIE):
             'id': '1OyKAVQrgzwGb',
             'ext': 'mp4',
             'title': 'Starship Flight Test',
+            'display_id': '1OyKAVQrgzwGb',
             'uploader': 'SpaceX',
             'uploader_id': 'SpaceX',
             'uploader_url': 'https://twitter.com/SpaceX',
@@ -1640,21 +1647,58 @@ class TwitterBroadcastIE(TwitterBaseIE, PeriscopeBaseIE):
             'upload_date': '20230420',
             'thumbnail': r're:^https?://[^?#]+\.jpg\?token=',
             'view_count': int,
+            'concurrent_view_count': int,
+            'live_status': 'was_live',
+        },
+    }, {
+        'url': 'https://x.com/i/events/1910629646300762112',
+        'info_dict': {
+            'id': '1LyxBWDRNqyKN',
+            'ext': 'mp4',
+            'title': '#ガンニバル ウォッチパーティー',
+            'concurrent_view_count': int,
+            'display_id': '1910629646300762112',
+            'live_status': 'was_live',
+            'release_date': '20250423',
+            'release_timestamp': 1745409000,
+            'tags': ['ガンニバル'],
+            'thumbnail': r're:https?://[^?#]+\.jpg\?token=',
+            'timestamp': 1745403328,
+            'upload_date': '20250423',
+            'uploader': 'ディズニープラス公式',
+            'uploader_id': 'DisneyPlusJP',
+            'uploader_url': 'https://twitter.com/DisneyPlusJP',
+            'view_count': int,
         },
     }]

     def _real_extract(self, url):
-        broadcast_id = self._match_id(url)
+        broadcast_type, display_id = self._match_valid_url(url).group('type', 'id')
+
+        if broadcast_type == 'events':
+            timeline = self._call_api(
+                f'live_event/1/{display_id}/timeline.json', display_id)
+            broadcast_id = traverse_obj(timeline, (
+                'twitter_objects', 'broadcasts', ..., ('id', 'broadcast_id'),
+                {str}, any, {require('broadcast ID')}))
+        else:
+            broadcast_id = display_id
+
         broadcast = self._call_api(
             'broadcasts/show.json', broadcast_id,
             {'ids': broadcast_id})['broadcasts'][broadcast_id]
         if not broadcast:
             raise ExtractorError('Broadcast no longer exists', expected=True)
         info = self._parse_broadcast_data(broadcast, broadcast_id)
-        info['title'] = broadcast.get('status') or info.get('title')
-        info['uploader_id'] = broadcast.get('twitter_username') or info.get('uploader_id')
-        info['uploader_url'] = format_field(broadcast, 'twitter_username', 'https://twitter.com/%s', default=None)
+        info.update({
+            'display_id': display_id,
+            'title': broadcast.get('status') or info.get('title'),
+            'uploader_id': broadcast.get('twitter_username') or info.get('uploader_id'),
+            'uploader_url': format_field(
+                broadcast, 'twitter_username', 'https://twitter.com/%s', default=None),
+        })
         if info['live_status'] == 'is_upcoming':
+            self.raise_no_formats('This live broadcast has not yet started', expected=True)
             return info

         media_key = broadcast['media_key']
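How the new events branch reduces a timeline to one broadcast ID, runnable against a hand-written sample of the API shape (real responses may nest more data):

from yt_dlp.utils.traversal import require, traverse_obj

timeline = {'twitter_objects': {'broadcasts': {
    '1LyxBWDRNqyKN': {'broadcast_id': '1LyxBWDRNqyKN'},
}}}
# Take the first string found under either key; raise if none is present
print(traverse_obj(timeline, (
    'twitter_objects', 'broadcasts', ..., ('id', 'broadcast_id'),
    {str}, any, {require('broadcast ID')})))  # 1LyxBWDRNqyKN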

View File

@@ -1,98 +1,53 @@
 from .common import InfoExtractor
-from ..utils import (
-    int_or_none,
-    parse_filesize,
-    parse_iso8601,
-)
+from ..utils import clean_html
+from ..utils.traversal import find_element, traverse_obj


 class UMGDeIE(InfoExtractor):
-    _WORKING = False
     IE_NAME = 'umg:de'
     IE_DESC = 'Universal Music Deutschland'
-    _VALID_URL = r'https?://(?:www\.)?universal-music\.de/[^/]+/videos/[^/?#]+-(?P<id>\d+)'
-    _TEST = {
+    _VALID_URL = r'https?://(?:www\.)?universal-music\.de/[^/?#]+/videos/(?P<slug>[^/?#]+-(?P<id>\d+))'
+    _TESTS = [{
         'url': 'https://www.universal-music.de/sido/videos/jedes-wort-ist-gold-wert-457803',
-        'md5': 'ebd90f48c80dcc82f77251eb1902634f',
         'info_dict': {
             'id': '457803',
             'ext': 'mp4',
             'title': 'Jedes Wort ist Gold wert',
+            'artists': ['Sido'],
+            'description': 'md5:df2dbffcff1a74e0a7c9bef4b497aeec',
+            'display_id': 'jedes-wort-ist-gold-wert-457803',
+            'duration': 210.0,
+            'thumbnail': r're:https?://images\.universal-music\.de/img/assets/.+\.jpg',
             'timestamp': 1513591800,
             'upload_date': '20171218',
-            'view_count': int,
         },
-    }
+    }, {
+        'url': 'https://www.universal-music.de/alexander-eder/videos/der-doktor-hat-gesagt-609533',
+        'info_dict': {
+            'id': '609533',
+            'ext': 'mp4',
+            'title': 'Der Doktor hat gesagt',
+            'artists': ['Alexander Eder'],
+            'display_id': 'der-doktor-hat-gesagt-609533',
+            'duration': 146.0,
+            'thumbnail': r're:https?://images\.universal-music\.de/img/assets/.+\.jpg',
+            'timestamp': 1742982100,
+            'upload_date': '20250326',
+        },
+    }]

     def _real_extract(self, url):
-        video_id = self._match_id(url)
-        video_data = self._download_json(
-            'https://graphql.universal-music.de/',
-            video_id, query={
-                'query': '''{
-  universalMusic(channel:16) {
-    video(id:%s) {
-      headline
-      formats {
-        formatId
-        url
-        type
-        width
-        height
-        mimeType
-        fileSize
-      }
-      duration
-      createdDate
-    }
-  }
-}''' % video_id})['data']['universalMusic']['video']  # noqa: UP031
-        title = video_data['headline']
-        hls_url_template = 'http://mediadelivery.universal-music-services.de/vod/mp4:autofill/storage/' + '/'.join(list(video_id)) + '/content/%s/file/playlist.m3u8'
-        thumbnails = []
-        formats = []
-
-        def add_m3u8_format(format_id):
-            formats.extend(self._extract_m3u8_formats(
-                hls_url_template % format_id, video_id, 'mp4',
-                'm3u8_native', m3u8_id='hls', fatal=False))
-
-        for f in video_data.get('formats', []):
-            f_url = f.get('url')
-            mime_type = f.get('mimeType')
-            if not f_url or mime_type == 'application/mxf':
-                continue
-            fmt = {
-                'url': f_url,
-                'width': int_or_none(f.get('width')),
-                'height': int_or_none(f.get('height')),
-                'filesize': parse_filesize(f.get('fileSize')),
-            }
-            f_type = f.get('type')
-            if f_type == 'Image':
-                thumbnails.append(fmt)
-            elif f_type == 'Video':
-                format_id = f.get('formatId')
-                if format_id:
-                    fmt['format_id'] = format_id
-                    if mime_type == 'video/mp4':
-                        add_m3u8_format(format_id)
-                urlh = self._request_webpage(f_url, video_id, fatal=False)
-                if urlh:
-                    first_byte = urlh.read(1)
-                    if first_byte not in (b'F', b'\x00'):
-                        continue
-                    formats.append(fmt)
-        if not formats:
-            for format_id in (867, 836, 940):
-                add_m3u8_format(format_id)
+        display_id, video_id = self._match_valid_url(url).group('slug', 'id')
+        webpage = self._download_webpage(url, display_id)

         return {
+            **self._search_json_ld(webpage, display_id),
             'id': video_id,
-            'title': title,
-            'duration': int_or_none(video_data.get('duration')),
-            'timestamp': parse_iso8601(video_data.get('createdDate'), ' '),
-            'thumbnails': thumbnails,
-            'formats': formats,
+            'artists': traverse_obj(self._html_search_meta('umg-artist-screenname', webpage), (filter, all)),
+            # The JSON LD description duplicates the title
+            'description': traverse_obj(webpage, ({find_element(cls='_3Y0Lj')}, {clean_html})),
+            'display_id': display_id,
+            'formats': self._extract_m3u8_formats(
+                'https://hls.universal-music.de/get', display_id, 'mp4', query={'id': video_id}),
        }
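find_element plus clean_html is doing the scraping here; a self-contained sketch on a stub page (markup invented, class name taken from the diff):

from yt_dlp.utils import clean_html
from yt_dlp.utils.traversal import find_element, traverse_obj

webpage = '<div class="_3Y0Lj"><p>Ein <b>Video</b> von Sido</p></div>'
# find_element(cls=...) yields the element's inner HTML; clean_html strips the tags
print(traverse_obj(webpage, ({find_element(cls='_3Y0Lj')}, {clean_html})))
# Ein Video von Sido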

View File

@@ -32,6 +32,7 @@ class ViceBaseIE(InfoExtractor):

 class ViceIE(ViceBaseIE, AdobePassIE):
+    _WORKING = False
     IE_NAME = 'vice'
     _VALID_URL = r'https?://(?:(?:video|vms)\.vice|(?:www\.)?vice(?:land|tv))\.com/(?P<locale>[^/]+)/(?:video/[^/]+|embed)/(?P<id>[\da-f]{24})'
     _EMBED_REGEX = [r'<iframe\b[^>]+\bsrc=["\'](?P<url>(?:https?:)?//video\.vice\.com/[^/]+/embed/[\da-f]{24})']
@@ -99,6 +100,7 @@ class ViceIE(ViceBaseIE, AdobePassIE):
         'url': 'https://www.viceland.com/en_us/video/thursday-march-1-2018/5a8f2d7ff1cdb332dd446ec1',
         'only_matching': True,
     }]
+    _SOFTWARE_STATEMENT = 'eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiIwMTVjODBlZC04ZDcxLTQ4ZGEtOTZkZi00NzU5NjIwNzJlYTQiLCJuYmYiOjE2NjgwMTM0ODQsImlzcyI6ImF1dGguYWRvYmUuY29tIiwiaWF0IjoxNjY4MDEzNDg0fQ.CjhUnTrlh-bmYnEFHyC2Y4it5Y_Zfza1x66O4-ki5gBR7JT6aUunYI_YflXomQPACriMpObkITFz4grVaDwdd8Xp9hrQ2R0SwRBdaklkdy1_j68RqSP5PnexJIa0q_ThtOwfRBd5uGcb33nMJ9Qs92W4kVXuca0Ta-i7SJyWgXUaPDlRDdgyCL3hKj5wuM7qUIwrd9A5CMm-j3dMIBCDgw7X6TwRK65eUQe6gTWqcvL2yONHHTpmIfeOTUxGwwKFr29COOTBowm0VJ6HE08xjXCShP08Neusu-JsgkjzhkEbiDE2531EKgfAki_7WCd2JUZVsAsCusv4a1maokk6NA'

     def _real_extract(self, url):
         locale, video_id = self._match_valid_url(url).groups()
@@ -116,7 +118,7 @@ class ViceIE(ViceBaseIE, AdobePassIE):
             resource = self._get_mvpd_resource(
                 'VICELAND', title, video_id, rating)
             query['tvetoken'] = self._extract_mvpd_auth(
-                url, video_id, 'VICELAND', resource)
+                url, video_id, 'VICELAND', resource, self._SOFTWARE_STATEMENT)

         # signature generation algorithm is reverse engineered from signatureGenerator in
         # webpack:///../shared/~/vice-player/dist/js/vice-player.js in
@@ -181,6 +183,7 @@ class ViceIE(ViceBaseIE, AdobePassIE):

 class ViceShowIE(ViceBaseIE):
+    _WORKING = False
     IE_NAME = 'vice:show'
     _VALID_URL = r'https?://(?:video\.vice|(?:www\.)?vice(?:land|tv))\.com/(?P<locale>[^/]+)/show/(?P<id>[^/?#&]+)'
     _PAGE_SIZE = 25
@@ -221,6 +224,7 @@ class ViceShowIE(ViceBaseIE):

 class ViceArticleIE(ViceBaseIE):
+    _WORKING = False
     IE_NAME = 'vice:article'
     _VALID_URL = r'https?://(?:www\.)?vice\.com/(?P<locale>[^/]+)/article/(?:[0-9a-z]{6}/)?(?P<id>[^?#]+)'

View File

@@ -3,6 +3,7 @@ import functools
 import itertools
 import json
 import re
+import time
 import urllib.parse

 from .common import InfoExtractor
@@ -13,10 +14,12 @@ from ..utils import (
     OnDemandPagedList,
     clean_html,
     determine_ext,
+    filter_dict,
     get_element_by_class,
     int_or_none,
     join_nonempty,
     js_to_json,
+    jwt_decode_hs256,
     merge_dicts,
     parse_filesize,
     parse_iso8601,
@@ -39,6 +42,9 @@ class VimeoBaseInfoExtractor(InfoExtractor):
     _NETRC_MACHINE = 'vimeo'
     _LOGIN_REQUIRED = False
     _LOGIN_URL = 'https://vimeo.com/log_in'
+    _REFERER_HINT = (
+        'Cannot download embed-only video without embedding URL. Please call yt-dlp '
+        'with the URL of the page that embeds this video.')
     _IOS_CLIENT_AUTH = 'MTMxNzViY2Y0NDE0YTQ5YzhjZTc0YmU0NjVjNDQxYzNkYWVjOWRlOTpHKzRvMmgzVUh4UkxjdU5FRW80cDNDbDhDWGR5dVJLNUJZZ055dHBHTTB4V1VzaG41bEx1a2hiN0NWYWNUcldSSW53dzRUdFRYZlJEZmFoTTArOTBUZkJHS3R4V2llYU04Qnl1bERSWWxUdXRidjNqR2J4SHFpVmtFSUcyRktuQw=='
     _IOS_CLIENT_HEADERS = {
         'Accept': 'application/vnd.vimeo.*+json; version=3.4.10',
@@ -47,6 +53,7 @@ class VimeoBaseInfoExtractor(InfoExtractor):
     }
     _IOS_OAUTH_CACHE_KEY = 'oauth-token-ios'
     _ios_oauth_token = None
+    _viewer_info = None

     @staticmethod
     def _smuggle_referrer(url, referrer_url):
@@ -60,8 +67,21 @@ class VimeoBaseInfoExtractor(InfoExtractor):
             headers['Referer'] = data['referer']
         return url, data, headers

+    def _jwt_is_expired(self, token):
+        return jwt_decode_hs256(token)['exp'] - time.time() < 120
+
+    def _fetch_viewer_info(self, display_id=None, fatal=True):
+        if self._viewer_info and not self._jwt_is_expired(self._viewer_info['jwt']):
+            return self._viewer_info
+
+        self._viewer_info = self._download_json(
+            'https://vimeo.com/_next/viewer', display_id, 'Downloading web token info',
+            'Failed to download web token info', fatal=fatal, headers={'Accept': 'application/json'})
+
+        return self._viewer_info
+
     def _perform_login(self, username, password):
-        viewer = self._download_json('https://vimeo.com/_next/viewer', None, 'Downloading login token')
+        viewer = self._fetch_viewer_info()
         data = {
             'action': 'login',
             'email': username,
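The viewer-info cache is keyed on the JWT's exp claim with a two-minute safety margin; the check itself reduces to this (jwt_decode_hs256 only base64-decodes the payload, it does not verify the signature):

import time
from yt_dlp.utils import jwt_decode_hs256

def jwt_is_expired(token, leeway=120):
    # Treat a token as expired if it has less than `leeway` seconds of validity left
    return jwt_decode_hs256(token)['exp'] - time.time() < leeway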
@@ -96,11 +116,10 @@ class VimeoBaseInfoExtractor(InfoExtractor):
                 expected=True)
         return password

-    def _verify_video_password(self, video_id):
+    def _verify_video_password(self, video_id, path=None):
         video_password = self._get_video_password()
-        token = self._download_json(
-            'https://vimeo.com/_next/viewer', video_id, 'Downloading viewer info')['xsrft']
-        url = f'https://vimeo.com/{video_id}'
+        token = self._fetch_viewer_info(video_id)['xsrft']
+        url = join_nonempty('https://vimeo.com', path, video_id, delim='/')
         try:
             self._request_webpage(
                 f'{url}/password', video_id,
@@ -117,6 +136,10 @@ class VimeoBaseInfoExtractor(InfoExtractor):
                 raise ExtractorError('Wrong password', expected=True)
             raise

+    def _extract_config_url(self, webpage, **kwargs):
+        return self._html_search_regex(
+            r'\bdata-config-url="([^"]+)"', webpage, 'config URL', **kwargs)
+
     def _extract_vimeo_config(self, webpage, video_id, *args, **kwargs):
         vimeo_config = self._search_regex(
             r'vimeo\.config\s*=\s*(?:({.+?})|_extend\([^,]+,\s+({.+?})\));',
@@ -164,6 +187,7 @@ class VimeoBaseInfoExtractor(InfoExtractor):
         sep_pattern = r'/sep/video/'
         for files_type in ('hls', 'dash'):
             for cdn_name, cdn_data in (try_get(config_files, lambda x: x[files_type]['cdns']) or {}).items():
+                # TODO: Also extract 'avc_url'? Investigate if there are 'hevc_url', 'av1_url'?
                 manifest_url = cdn_data.get('url')
                 if not manifest_url:
                     continue
@@ -212,7 +236,7 @@ class VimeoBaseInfoExtractor(InfoExtractor):
         for tt in (request.get('text_tracks') or []):
             subtitles.setdefault(tt['lang'], []).append({
                 'ext': 'vtt',
-                'url': urljoin('https://vimeo.com', tt['url']),
+                'url': urljoin('https://player.vimeo.com/', tt['url']),
             })

         thumbnails = []
@@ -244,7 +268,10 @@ class VimeoBaseInfoExtractor(InfoExtractor):
             'formats': formats,
             'subtitles': subtitles,
             'live_status': live_status,
-            'release_timestamp': traverse_obj(live_event, ('ingest', 'scheduled_start_time', {parse_iso8601})),
+            'release_timestamp': traverse_obj(live_event, ('ingest', (
+                ('scheduled_start_time', {parse_iso8601}),
+                ('start_time', {int_or_none}),
+            ), any)),
             # Note: Bitrates are completely broken. Single m3u8 may contain entries in kbps and bps
             # at the same time without actual units specified.
             '_format_sort_fields': ('quality', 'res', 'fps', 'hdr:12', 'source'),
@@ -353,7 +380,7 @@ class VimeoIE(VimeoBaseInfoExtractor):
                         (?:
                             (?P<u>user)|
                             (?!(?:channels|album|showcase)/[^/?#]+/?(?:$|[?#])|[^/]+/review/|ondemand/)
-                            (?:.*?/)??
+                            (?:(?!event/).*?/)??
                             (?P<q>
                                 (?:
                                     play_redirect_hls|
@@ -933,8 +960,7 @@ class VimeoIE(VimeoBaseInfoExtractor):
             r'vimeo\.com/(?:album|showcase)/([^/]+)', url, 'album id', default=None)
         if not album_id:
             return
-        viewer = self._download_json(
-            'https://vimeo.com/_rv/viewer', album_id, fatal=False)
+        viewer = self._fetch_viewer_info(album_id, fatal=False)
         if not viewer:
             webpage = self._download_webpage(url, album_id)
             viewer = self._parse_json(self._search_regex(
@@ -992,9 +1018,7 @@ class VimeoIE(VimeoBaseInfoExtractor):
                 raise
             errmsg = error.cause.response.read()
             if b'Because of its privacy settings, this video cannot be played here' in errmsg:
-                raise ExtractorError(
-                    'Cannot download embed-only video without embedding URL. Please call yt-dlp '
-                    'with the URL of the page that embeds this video.', expected=True)
+                raise ExtractorError(self._REFERER_HINT, expected=True)
             # 403 == vimeo.com TLS fingerprint or DC IP block; 429 == player.vimeo.com TLS FP block
             status = error.cause.status
             dcip_msg = 'If you are using a data center IP or VPN/proxy, your IP may be blocked'
@@ -1039,8 +1063,7 @@ class VimeoIE(VimeoBaseInfoExtractor):
         channel_id = self._search_regex(
             r'vimeo\.com/channels/([^/]+)', url, 'channel id', default=None)
         if channel_id:
-            config_url = self._html_search_regex(
-                r'\bdata-config-url="([^"]+)"', webpage, 'config URL', default=None)
+            config_url = self._extract_config_url(webpage, default=None)
             video_description = clean_html(get_element_by_class('description', webpage))
             info_dict.update({
                 'channel_id': channel_id,
@@ -1333,8 +1356,7 @@ class VimeoAlbumIE(VimeoBaseInfoExtractor):

     def _real_extract(self, url):
         album_id = self._match_id(url)
-        viewer = self._download_json(
-            'https://vimeo.com/_rv/viewer', album_id, fatal=False)
+        viewer = self._fetch_viewer_info(album_id, fatal=False)
         if not viewer:
             webpage = self._download_webpage(url, album_id)
             viewer = self._parse_json(self._search_regex(
@@ -1626,3 +1648,377 @@ class VimeoProIE(VimeoBaseInfoExtractor):
         return self.url_result(vimeo_url, VimeoIE, video_id, url_transparent=True,
                                description=description)
class VimeoEventIE(VimeoBaseInfoExtractor):
IE_NAME = 'vimeo:event'
_VALID_URL = r'''(?x)
https?://(?:www\.)?vimeo\.com/event/(?P<id>\d+)(?:/
(?:
(?:embed/)?(?P<unlisted_hash>[\da-f]{10})|
videos/(?P<video_id>\d+)
)
)?'''
_EMBED_REGEX = [r'<iframe\b[^>]+\bsrc=["\'](?P<url>https?://vimeo\.com/event/\d+/embed(?:[/?][^"\']*)?)["\'][^>]*>']
_TESTS = [{
# stream_privacy.view: 'anybody'
'url': 'https://vimeo.com/event/5116195',
'info_dict': {
'id': '1082194134',
'ext': 'mp4',
'display_id': '5116195',
'title': 'Skidmore College Commencement 2025',
'description': 'md5:1902dd5165d21f98aa198297cc729d23',
'uploader': 'Skidmore College',
'uploader_id': 'user116066434',
'uploader_url': 'https://vimeo.com/user116066434',
'comment_count': int,
'like_count': int,
'duration': 9810,
'thumbnail': r're:https://i\.vimeocdn\.com/video/\d+-[\da-f]+-d',
'timestamp': 1747502974,
'upload_date': '20250517',
'release_timestamp': 1747502998,
'release_date': '20250517',
'live_status': 'was_live',
},
'params': {'skip_download': 'm3u8'},
'expected_warnings': ['Failed to parse XML: not well-formed'],
}, {
# stream_privacy.view: 'embed_only'
'url': 'https://vimeo.com/event/5034253/embed',
'info_dict': {
'id': '1071439154',
'ext': 'mp4',
'display_id': '5034253',
'title': 'Advancing Humans with AI',
'description': r're:AI is here to stay, but how do we ensure that people flourish in a world of pervasive AI use.{322}$',
'uploader': 'MIT Media Lab',
'uploader_id': 'mitmedialab',
'uploader_url': 'https://vimeo.com/mitmedialab',
'duration': 23235,
'thumbnail': r're:https://i\.vimeocdn\.com/video/\d+-[\da-f]+-d',
'chapters': 'count:37',
'release_timestamp': 1744290000,
'release_date': '20250410',
'live_status': 'was_live',
},
'params': {
'skip_download': 'm3u8',
'http_headers': {'Referer': 'https://www.media.mit.edu/events/aha-symposium/'},
},
'expected_warnings': ['Failed to parse XML: not well-formed'],
}, {
# Last entry on 2nd page of the 37 video playlist, but use clip_to_play_id API param shortcut
'url': 'https://vimeo.com/event/4753126/videos/1046153257',
'info_dict': {
'id': '1046153257',
'ext': 'mp4',
'display_id': '4753126',
'title': 'January 12, 2025 The True Vine (Pastor John Mindrup)',
'description': 'The True Vine (Pastor \tJohn Mindrup)',
'uploader': 'Salem United Church of Christ',
'uploader_id': 'user230181094',
'uploader_url': 'https://vimeo.com/user230181094',
'comment_count': int,
'like_count': int,
'duration': 4962,
'thumbnail': r're:https://i\.vimeocdn\.com/video/\d+-[\da-f]+-d',
'timestamp': 1736702464,
'upload_date': '20250112',
'release_timestamp': 1736702543,
'release_date': '20250112',
'live_status': 'was_live',
},
'params': {'skip_download': 'm3u8'},
'expected_warnings': ['Failed to parse XML: not well-formed'],
}, {
# "24/7" livestream
'url': 'https://vimeo.com/event/4768062',
'info_dict': {
'id': '1079901414',
'ext': 'mp4',
'display_id': '4768062',
'title': r're:GRACELAND CAM \d{4}-\d{2}-\d{2} \d{2}:\d{2}$',
'description': '24/7 camera at Graceland Mansion',
'uploader': 'Elvis Presley\'s Graceland',
'uploader_id': 'visitgraceland',
'uploader_url': 'https://vimeo.com/visitgraceland',
'release_timestamp': 1745975450,
'release_date': '20250430',
'live_status': 'is_live',
},
'params': {'skip_download': 'livestream'},
}, {
# stream_privacy.view: 'unlisted' with unlisted_hash in URL path (stream_privacy.embed: 'whitelist')
'url': 'https://vimeo.com/event/4259978/3db517c479',
'info_dict': {
'id': '939104114',
'ext': 'mp4',
'display_id': '4259978',
'title': 'Enhancing Credibility in Your Community Science Project',
'description': 'md5:eab953341168b9c146bc3cfe3f716070',
'uploader': 'NOAA Research',
'uploader_id': 'noaaresearch',
'uploader_url': 'https://vimeo.com/noaaresearch',
'comment_count': int,
'like_count': int,
'duration': 3961,
'thumbnail': r're:https://i\.vimeocdn\.com/video/\d+-[\da-f]+-d',
'timestamp': 1716408008,
'upload_date': '20240522',
'release_timestamp': 1716408062,
'release_date': '20240522',
'live_status': 'was_live',
},
'params': {'skip_download': 'm3u8'},
'expected_warnings': ['Failed to parse XML: not well-formed'],
}, {
# "done" event with video_id in URL and unlisted_hash in VimeoIE URL
'url': 'https://vimeo.com/event/595460/videos/498149131/',
'info_dict': {
'id': '498149131',
'ext': 'mp4',
'display_id': '595460',
'title': '2021 Eighth Annual John Cardinal Foley Lecture on Social Communications',
'description': 'Replay: https://vimeo.com/catholicphilly/review/498149131/544f26a12f',
'uploader': 'Kearns Media Consulting LLC',
'uploader_id': 'kearnsmediaconsulting',
'uploader_url': 'https://vimeo.com/kearnsmediaconsulting',
'comment_count': int,
'like_count': int,
'duration': 4466,
'thumbnail': r're:https://i\.vimeocdn\.com/video/\d+-[\da-f]+-d',
'timestamp': 1612228466,
'upload_date': '20210202',
'release_timestamp': 1612228538,
'release_date': '20210202',
'live_status': 'was_live',
},
'params': {'skip_download': 'm3u8'},
'expected_warnings': ['Failed to parse XML: not well-formed'],
}, {
# stream_privacy.view: 'password'; stream_privacy.embed: 'public'
'url': 'https://vimeo.com/event/4940578',
'info_dict': {
'id': '1059263570',
'ext': 'mp4',
'display_id': '4940578',
'title': 'TMAC AKC AGILITY 2-22-2025',
'uploader': 'Paws \'N Effect',
'uploader_id': 'pawsneffect',
'uploader_url': 'https://vimeo.com/pawsneffect',
'comment_count': int,
'like_count': int,
'duration': 33115,
'thumbnail': r're:https://i\.vimeocdn\.com/video/\d+-[\da-f]+-d',
'timestamp': 1740261836,
'upload_date': '20250222',
'release_timestamp': 1740261873,
'release_date': '20250222',
'live_status': 'was_live',
},
'params': {
'videopassword': '22',
'skip_download': 'm3u8',
},
'expected_warnings': ['Failed to parse XML: not well-formed'],
}, {
# API serves a playlist of 37 videos, but the site only streams the newest one (changes every Sunday)
'url': 'https://vimeo.com/event/4753126',
'only_matching': True,
}, {
# Scheduled for 2025.05.15 but never started; "unavailable"; stream_privacy.view: "anybody"
'url': 'https://vimeo.com/event/5120811/embed',
'only_matching': True,
}, {
'url': 'https://vimeo.com/event/5112969/embed?muted=1',
'only_matching': True,
}, {
'url': 'https://vimeo.com/event/5097437/embed/interaction?muted=1',
'only_matching': True,
}, {
'url': 'https://vimeo.com/event/5113032/embed?autoplay=1&muted=1',
'only_matching': True,
}, {
# Ended livestream with video_id
'url': 'https://vimeo.com/event/595460/videos/507329569/',
'only_matching': True,
}, {
# stream_privacy.view: 'unlisted' with unlisted_hash in URL path (stream_privacy.embed: 'public')
'url': 'https://vimeo.com/event/4606123/embed/358d60ce2e',
'only_matching': True,
}]
_WEBPAGE_TESTS = [{
# Same result as https://vimeo.com/event/5034253/embed
'url': 'https://www.media.mit.edu/events/aha-symposium/',
'info_dict': {
'id': '1071439154',
'ext': 'mp4',
'display_id': '5034253',
'title': 'Advancing Humans with AI',
'description': r're:AI is here to stay, but how do we ensure that people flourish in a world of pervasive AI use.{322}$',
'uploader': 'MIT Media Lab',
'uploader_id': 'mitmedialab',
'uploader_url': 'https://vimeo.com/mitmedialab',
'duration': 23235,
'thumbnail': r're:https://i\.vimeocdn\.com/video/\d+-[\da-f]+-d',
'chapters': 'count:37',
'release_timestamp': 1744290000,
'release_date': '20250410',
'live_status': 'was_live',
},
'params': {'skip_download': 'm3u8'},
'expected_warnings': ['Failed to parse XML: not well-formed'],
}]
_EVENT_FIELDS = (
'title', 'uri', 'schedule', 'stream_description', 'stream_privacy.embed', 'stream_privacy.view',
'clip_to_play.name', 'clip_to_play.uri', 'clip_to_play.config_url', 'clip_to_play.live.status',
'clip_to_play.privacy.embed', 'clip_to_play.privacy.view', 'clip_to_play.password',
'streamable_clip.name', 'streamable_clip.uri', 'streamable_clip.config_url', 'streamable_clip.live.status',
)
_VIDEOS_FIELDS = ('items', 'uri', 'name', 'config_url', 'duration', 'live.status')
def _call_events_api(
self, event_id, ep=None, unlisted_hash=None, note=None,
fields=(), referrer=None, query=None, headers=None,
):
resource = join_nonempty('event', ep, note, 'API JSON', delim=' ')
return self._download_json(
join_nonempty(
'https://api.vimeo.com/live_events',
join_nonempty(event_id, unlisted_hash, delim=':'), ep, delim='/'),
event_id, f'Downloading {resource}', f'Failed to download {resource}',
query=filter_dict({
'fields': ','.join(fields) or [],
# Correct spelling with 4 R's is deliberate
'referrer': referrer,
**(query or {}),
}), headers=filter_dict({
'Accept': 'application/json',
'Authorization': f'jwt {self._fetch_viewer_info(event_id)["jwt"]}',
'Referer': referrer,
**(headers or {}),
}))
@staticmethod
def _extract_video_id_and_unlisted_hash(video):
if not traverse_obj(video, ('uri', {lambda x: x.startswith('/videos/')})):
return None, None
video_id, _, unlisted_hash = video['uri'][8:].partition(':')
return video_id, unlisted_hash or None
def _vimeo_url_result(self, video_id, unlisted_hash=None, event_id=None):
# VimeoIE can extract more metadata and formats for was_live event videos
return self.url_result(
join_nonempty('https://vimeo.com', video_id, unlisted_hash, delim='/'), VimeoIE,
video_id, display_id=event_id, live_status='was_live', url_transparent=True)
@classmethod
def _extract_embed_urls(cls, url, webpage):
for embed_url in super()._extract_embed_urls(url, webpage):
yield cls._smuggle_referrer(embed_url, url)
def _real_extract(self, url):
url, _, headers = self._unsmuggle_headers(url)
# XXX: Keep key name in sync with _unsmuggle_headers
referrer = headers.get('Referer')
event_id, unlisted_hash, video_id = self._match_valid_url(url).group('id', 'unlisted_hash', 'video_id')
for retry in (False, True):
try:
live_event_data = self._call_events_api(
event_id, unlisted_hash=unlisted_hash, fields=self._EVENT_FIELDS,
referrer=referrer, query={'clip_to_play_id': video_id or '0'},
headers={'Accept': 'application/vnd.vimeo.*+json;version=3.4.9'})
break
except ExtractorError as e:
if retry or not isinstance(e.cause, HTTPError) or e.cause.status not in (400, 403):
raise
response = traverse_obj(e.cause.response.read(), ({json.loads}, {dict})) or {}
error_code = response.get('error_code')
if error_code == 2204:
self._verify_video_password(event_id, path='event')
continue
if error_code == 3200:
raise ExtractorError(self._REFERER_HINT, expected=True)
if error_msg := response.get('error'):
raise ExtractorError(f'Vimeo says: {error_msg}', expected=True)
raise
# stream_privacy.view can be: 'anybody', 'embed_only', 'nobody', 'password', 'unlisted'
view_policy = live_event_data['stream_privacy']['view']
if view_policy == 'nobody':
raise ExtractorError('This event has not been made available to anyone', expected=True)
clip_data = traverse_obj(live_event_data, ('clip_to_play', {dict})) or {}
# live.status can be: 'streaming' (is_live), 'done' (was_live), 'unavailable' (is_upcoming OR dead)
clip_status = traverse_obj(clip_data, ('live', 'status', {str}))
start_time = traverse_obj(live_event_data, ('schedule', 'start_time', {str}))
release_timestamp = parse_iso8601(start_time)
if clip_status == 'unavailable' and release_timestamp and release_timestamp > time.time():
self.raise_no_formats(f'This live event is scheduled for {start_time}', expected=True)
live_status = 'is_upcoming'
config_url = None
elif view_policy == 'embed_only':
webpage = self._download_webpage(
join_nonempty('https://vimeo.com/event', event_id, 'embed', unlisted_hash, delim='/'),
event_id, 'Downloading embed iframe webpage', impersonate=True, headers=headers)
# The _parse_config result will overwrite live_status w/ 'is_live' if livestream is active
live_status = 'was_live'
config_url = self._extract_config_url(webpage)
else: # view_policy in ('anybody', 'password', 'unlisted')
if video_id:
clip_id, clip_hash = self._extract_video_id_and_unlisted_hash(clip_data)
if video_id == clip_id and clip_status == 'done' and (clip_hash or view_policy != 'unlisted'):
return self._vimeo_url_result(clip_id, clip_hash, event_id)
video_filter = lambda _, v: self._extract_video_id_and_unlisted_hash(v)[0] == video_id
else:
video_filter = lambda _, v: v['live']['status'] in ('streaming', 'done')
for page in itertools.count(1):
videos_data = self._call_events_api(
event_id, 'videos', unlisted_hash=unlisted_hash, note=f'page {page}',
fields=self._VIDEOS_FIELDS, referrer=referrer, query={'page': page},
headers={'Accept': 'application/vnd.vimeo.*;version=3.4.1'})
video = traverse_obj(videos_data, ('data', video_filter, any))
if video or not traverse_obj(videos_data, ('paging', 'next', {str})):
break
live_status = {
'streaming': 'is_live',
'done': 'was_live',
}.get(traverse_obj(video, ('live', 'status', {str})))
if not live_status: # requested video_id is unavailable or no videos are available
raise ExtractorError('This event video is unavailable', expected=True)
elif live_status == 'was_live':
return self._vimeo_url_result(*self._extract_video_id_and_unlisted_hash(video), event_id)
config_url = video['config_url']
if config_url: # view_policy == 'embed_only' or live_status == 'is_live'
info = filter_dict(self._parse_config(
self._download_json(config_url, event_id, 'Downloading config JSON'), event_id))
else: # live_status == 'is_upcoming'
info = {'id': event_id}
if info.get('live_status') == 'post_live':
self.report_warning('This live event recently ended and some formats may not yet be available')
return {
**traverse_obj(live_event_data, {
'title': ('title', {str}),
'description': ('stream_description', {str}),
}),
'display_id': event_id,
'live_status': live_status,
'release_timestamp': release_timestamp,
**info,
}
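
The event API URL above is assembled with join_nonempty, which drops empty segments; a minimal sketch with illustrative IDs (not taken from the tests above):

    from yt_dlp.utils import join_nonempty

    # The unlisted hash is joined with ':' and the sub-endpoint with '/';
    # passing None for either simply shortens the URL
    print(join_nonempty(
        'https://api.vimeo.com/live_events',
        join_nonempty('4259978', '3db517c479', delim=':'), 'videos', delim='/'))
    # -> https://api.vimeo.com/live_events/4259978:3db517c479/videos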

View File

@@ -548,21 +548,21 @@ class VKIE(VKBaseIE):
             'formats': formats,
             'subtitles': subtitles,
             **traverse_obj(mv_data, {
-                'title': ('title', {unescapeHTML}),
+                'title': ('title', {str}, {unescapeHTML}),
                 'description': ('desc', {clean_html}, filter),
                 'duration': ('duration', {int_or_none}),
                 'like_count': ('likes', {int_or_none}),
                 'comment_count': ('commcount', {int_or_none}),
             }),
             **traverse_obj(data, {
-                'title': ('md_title', {unescapeHTML}),
+                'title': ('md_title', {str}, {unescapeHTML}),
                 'description': ('description', {clean_html}, filter),
                 'thumbnail': ('jpg', {url_or_none}),
-                'uploader': ('md_author', {unescapeHTML}),
+                'uploader': ('md_author', {str}, {unescapeHTML}),
                 'uploader_id': (('author_id', 'authorId'), {str_or_none}, any),
                 'duration': ('duration', {int_or_none}),
                 'chapters': ('time_codes', lambda _, v: isinstance(v['time'], int), {
-                    'title': ('text', {unescapeHTML}),
+                    'title': ('text', {str}, {unescapeHTML}),
                     'start_time': 'time',
                 }),
             }),
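
The inserted {str} step type-filters each value before {unescapeHTML} transforms it, so a non-string value from the API is dropped instead of crashing the traversal; a minimal sketch using yt-dlp's own helpers (the integer title is a hypothetical API quirk):

    from yt_dlp.utils import traverse_obj, unescapeHTML

    print(traverse_obj({'md_title': 'A &amp; B'}, ('md_title', {str}, {unescapeHTML})))  # -> 'A & B'
    print(traverse_obj({'md_title': 42}, ('md_title', {str}, {unescapeHTML})))           # -> None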

View File

@@ -1,7 +1,6 @@
 import urllib.parse
 
 from .common import InfoExtractor
-from .once import OnceIE
 from ..utils import (
     ExtractorError,
     int_or_none,
@@ -10,7 +9,7 @@ from ..utils import (
 )
 
 
-class VoxMediaVolumeIE(OnceIE):
+class VoxMediaVolumeIE(InfoExtractor):
     _VALID_URL = r'https?://volume\.vox-cdn\.com/embed/(?P<id>[0-9a-f]{9})'
 
     def _real_extract(self, url):
@@ -57,7 +56,8 @@ class VoxMediaVolumeIE(OnceIE):
             if not provider_video_id:
                 continue
             if provider_video_type == 'brightcove':
-                info['formats'] = self._extract_once_formats(provider_video_id)
+                # TODO: Find embed example or confirm that Vox has stopped using Brightcove
+                raise ExtractorError('Vox Brightcove embeds are currently unsupported')
             else:
                 info.update({
                     '_type': 'url_transparent',
@@ -155,20 +155,6 @@ class VoxMediaIE(InfoExtractor):
             },
         }],
         'skip': 'Page no longer contain videos',
-    }, {
-        # volume embed, Brightcove Once
-        'url': 'https://www.recode.net/2014/6/17/11628066/post-post-pc-ceo-the-full-code-conference-video-of-microsofts-satya',
-        'md5': '2dbc77b8b0bff1894c2fce16eded637d',
-        'info_dict': {
-            'id': '1231c973d',
-            'ext': 'mp4',
-            'title': 'Post-Post-PC CEO: The Full Code Conference Video of Microsoft\'s Satya Nadella',
-            'description': 'The longtime veteran was chosen earlier this year as the software giant\'s third leader in its history.',
-            'timestamp': 1402938000,
-            'upload_date': '20140616',
-            'duration': 4114,
-        },
-        'add_ie': ['VoxMediaVolume'],
     }]
 
     def _real_extract(self, url):

View File

@@ -2,9 +2,11 @@ from .common import InfoExtractor
 from ..utils import (
     ExtractorError,
     int_or_none,
+    join_nonempty,
     try_get,
     unified_strdate,
 )
+from ..utils.traversal import traverse_obj
 
 
 class WatIE(InfoExtractor):
@@ -70,8 +72,14 @@ class WatIE(InfoExtractor):
 
         error_desc = video_info.get('error_desc')
         if error_desc:
-            if video_info.get('error_code') == 'GEOBLOCKED':
+            error_code = video_info.get('error_code')
+            if error_code == 'GEOBLOCKED':
                 self.raise_geo_restricted(error_desc, video_info.get('geoList'))
+            elif error_code == 'DELIVERY_ERROR':
+                if traverse_obj(video_data, ('delivery', 'code')) == 500:
+                    self.report_drm(video_id)
+                error_desc = join_nonempty(
+                    error_desc, traverse_obj(video_data, ('delivery', 'error', {str})), delim=': ')
             raise ExtractorError(error_desc, expected=True)
 
         title = video_info['title']
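
join_nonempty appends the delivery detail only when the API actually returned one; a tiny sketch with made-up error strings:

    from yt_dlp.utils import join_nonempty

    print(join_nonempty('Video unavailable', 'stream expired', delim=': '))  # Video unavailable: stream expired
    print(join_nonempty('Video unavailable', None, delim=': '))              # Video unavailable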

View File

@@ -1,4 +1,5 @@
 import base64
+import functools
 import hashlib
 import hmac
 import itertools
@@ -17,99 +18,227 @@ from ..utils import (
     UserNotLive,
     float_or_none,
     int_or_none,
+    join_nonempty,
+    jwt_decode_hs256,
     str_or_none,
-    traverse_obj,
     try_call,
     update_url_query,
     url_or_none,
 )
+from ..utils.traversal import require, traverse_obj
 
 
 class WeverseBaseIE(InfoExtractor):
     _NETRC_MACHINE = 'weverse'
-    _ACCOUNT_API_BASE = 'https://accountapi.weverse.io/web/api'
+    _ACCOUNT_API_BASE = 'https://accountapi.weverse.io'
+    _CLIENT_PLATFORM = 'WEB'
+    _SIGNING_KEY = b'1b9cb6378d959b45714bec49971ade22e6e24e42'
+    _ACCESS_TOKEN_KEY = 'we2_access_token'
+    _REFRESH_TOKEN_KEY = 'we2_refresh_token'
+    _DEVICE_ID_KEY = 'we2_device_id'
     _API_HEADERS = {
         'Accept': 'application/json',
+        'Origin': 'https://weverse.io',
         'Referer': 'https://weverse.io/',
-        'WEV-device-Id': str(uuid.uuid4()),
     }
+    _LOGIN_HINT_TMPL = (
+        'You can log in using your refresh token with --username "{}" --password "REFRESH_TOKEN" '
+        '(replace REFRESH_TOKEN with the actual value of the "{}" cookie found in your web browser). '
+        'You can add an optional username suffix, e.g. --username "{}" , '
+        'if you need to manage multiple accounts. ')
+    _LOGIN_ERRORS_MAP = {
+        'login_required': 'This content is only available for logged-in users. ',
+        'invalid_username': '"{}" is not valid login username for this extractor. ',
+        'invalid_password': (
+            'Your password is not a valid refresh token. Make sure that '
+            'you are passing the refresh token, and NOT the access token. '),
+        'no_refresh_token': (
+            'Your access token has expired and there is no refresh token available. '
+            'Refresh your session/cookies in the web browser and try again. '),
+        'expired_refresh_token': (
+            'Your refresh token has expired. Log in to the site again using '
+            'your web browser to get a new refresh token or export fresh cookies. '),
+    }
+    _OAUTH_PREFIX = 'oauth'
+    _oauth_tokens = {}
+    _device_id = None
+
+    @property
+    def _oauth_headers(self):
+        return {
+            **self._API_HEADERS,
+            'X-ACC-APP-SECRET': '5419526f1c624b38b10787e5c10b2a7a',
+            'X-ACC-SERVICE-ID': 'weverse',
+            'X-ACC-TRACE-ID': str(uuid.uuid4()),
+        }
 
-    def _perform_login(self, username, password):
-        if self._API_HEADERS.get('Authorization'):
-            return
+    @functools.cached_property
+    def _oauth_cache_key(self):
+        username = self._get_login_info()[0]
+        if not username:
+            return 'cookies'
+        return join_nonempty(self._OAUTH_PREFIX, username.partition('+')[2])
 
-        headers = {
-            'x-acc-app-secret': '5419526f1c624b38b10787e5c10b2a7a',
-            'x-acc-app-version': '3.3.6',
-            'x-acc-language': 'en',
-            'x-acc-service-id': 'weverse',
-            'x-acc-trace-id': str(uuid.uuid4()),
-            'x-clog-user-device-id': str(uuid.uuid4()),
-        }
-        valid_username = traverse_obj(self._download_json(
-            f'{self._ACCOUNT_API_BASE}/v2/signup/email/status', None, note='Checking username',
-            query={'email': username}, headers=headers, expected_status=(400, 404)), 'hasPassword')
-        if not valid_username:
-            raise ExtractorError('Invalid username provided', expected=True)
+    @property
+    def _is_logged_in(self):
+        return bool(self._oauth_tokens.get(self._ACCESS_TOKEN_KEY))
+
+    def _access_token_is_valid(self):
+        response = self._download_json(
+            f'{self._ACCOUNT_API_BASE}/api/v1/token/validate', None,
+            'Validating access token', 'Unable to validate access token',
+            expected_status=401, headers={
+                **self._oauth_headers,
+                'Authorization': f'Bearer {self._oauth_tokens[self._ACCESS_TOKEN_KEY]}',
+            })
+        return traverse_obj(response, ('expiresIn', {int}), default=0) > 60
+
+    def _token_is_expired(self, key):
+        is_expired = jwt_decode_hs256(self._oauth_tokens[key])['exp'] - time.time() < 3600
+        if key == self._REFRESH_TOKEN_KEY or not is_expired:
+            return is_expired
+        return not self._access_token_is_valid()
+
+    def _refresh_access_token(self):
+        if not self._oauth_tokens.get(self._REFRESH_TOKEN_KEY):
+            self._report_login_error('no_refresh_token')
+        if self._token_is_expired(self._REFRESH_TOKEN_KEY):
+            self._report_login_error('expired_refresh_token')
+
+        headers = {'Content-Type': 'application/json'}
+        if self._is_logged_in:
+            headers['Authorization'] = f'Bearer {self._oauth_tokens[self._ACCESS_TOKEN_KEY]}'
 
-        headers['content-type'] = 'application/json'
         try:
-            auth = self._download_json(
-                f'{self._ACCOUNT_API_BASE}/v3/auth/token/by-credentials', None, data=json.dumps({
-                    'email': username,
-                    'otpSessionId': 'BY_PASS',
-                    'password': password,
-                }, separators=(',', ':')).encode(), headers=headers, note='Logging in')
+            response = self._download_json(
+                f'{self._ACCOUNT_API_BASE}/api/v1/token/refresh', None,
+                'Refreshing access token', 'Unable to refresh access token',
+                headers={**self._oauth_headers, **headers},
+                data=json.dumps({
+                    'refreshToken': self._oauth_tokens[self._REFRESH_TOKEN_KEY],
+                }, separators=(',', ':')).encode())
         except ExtractorError as e:
             if isinstance(e.cause, HTTPError) and e.cause.status == 401:
-                raise ExtractorError('Invalid password provided', expected=True)
+                self._oauth_tokens.clear()
+                if self._oauth_cache_key == 'cookies':
+                    self.cookiejar.clear(domain='.weverse.io', path='/', name=self._ACCESS_TOKEN_KEY)
+                    self.cookiejar.clear(domain='.weverse.io', path='/', name=self._REFRESH_TOKEN_KEY)
+                else:
+                    self.cache.store(self._NETRC_MACHINE, self._oauth_cache_key, self._oauth_tokens)
+                self._report_login_error('expired_refresh_token')
             raise
 
-        WeverseBaseIE._API_HEADERS['Authorization'] = f'Bearer {auth["accessToken"]}'
+        self._oauth_tokens.update(traverse_obj(response, {
+            self._ACCESS_TOKEN_KEY: ('accessToken', {str}, {require('access token')}),
+            self._REFRESH_TOKEN_KEY: ('refreshToken', {str}, {require('refresh token')}),
+        }))
+        if self._oauth_cache_key == 'cookies':
+            self._set_cookie('.weverse.io', self._ACCESS_TOKEN_KEY, self._oauth_tokens[self._ACCESS_TOKEN_KEY])
+            self._set_cookie('.weverse.io', self._REFRESH_TOKEN_KEY, self._oauth_tokens[self._REFRESH_TOKEN_KEY])
+        else:
+            self.cache.store(self._NETRC_MACHINE, self._oauth_cache_key, self._oauth_tokens)
+
+    def _get_authorization_header(self):
+        if not self._is_logged_in:
+            return {}
+        if self._token_is_expired(self._ACCESS_TOKEN_KEY):
+            self._refresh_access_token()
+        return {'Authorization': f'Bearer {self._oauth_tokens[self._ACCESS_TOKEN_KEY]}'}
+
+    def _report_login_error(self, error_id):
+        error_msg = self._LOGIN_ERRORS_MAP[error_id]
+        username = self._get_login_info()[0]
+
+        if error_id == 'invalid_username':
+            error_msg = error_msg.format(username)
+            username = f'{self._OAUTH_PREFIX}+{username}'
+        elif not username:
+            username = f'{self._OAUTH_PREFIX}+USERNAME'
+
+        raise ExtractorError(join_nonempty(
+            error_msg, self._LOGIN_HINT_TMPL.format(self._OAUTH_PREFIX, self._REFRESH_TOKEN_KEY, username),
+            'Or else you can u', self._login_hint(method='session_cookies')[1:], delim=''), expected=True)
+
+    def _perform_login(self, username, password):
+        if self._is_logged_in:
+            return
 
-    def _real_initialize(self):
-        if self._API_HEADERS.get('Authorization'):
+        if username.partition('+')[0] != self._OAUTH_PREFIX:
+            self._report_login_error('invalid_username')
+
+        self._oauth_tokens.update(self.cache.load(self._NETRC_MACHINE, self._oauth_cache_key, default={}))
+        if self._is_logged_in and self._access_token_is_valid():
             return
 
-        token = try_call(lambda: self._get_cookies('https://weverse.io/')['we2_access_token'].value)
-        if token:
-            WeverseBaseIE._API_HEADERS['Authorization'] = f'Bearer {token}'
+        rt_key = self._REFRESH_TOKEN_KEY
+        if not self._oauth_tokens.get(rt_key) or self._token_is_expired(rt_key):
+            if try_call(lambda: jwt_decode_hs256(password)['scope']) != 'refresh':
+                self._report_login_error('invalid_password')
+            self._oauth_tokens[rt_key] = password
+
+        self._refresh_access_token()
+
+    def _real_initialize(self):
+        cookies = self._get_cookies('https://weverse.io/')
+
+        if not self._device_id:
+            self._device_id = traverse_obj(cookies, (self._DEVICE_ID_KEY, 'value')) or str(uuid.uuid4())
+
+        if self._is_logged_in:
+            return
+
+        self._oauth_tokens.update(traverse_obj(cookies, {
+            self._ACCESS_TOKEN_KEY: (self._ACCESS_TOKEN_KEY, 'value'),
+            self._REFRESH_TOKEN_KEY: (self._REFRESH_TOKEN_KEY, 'value'),
+        }))
+        if self._is_logged_in and not self._access_token_is_valid():
+            self._refresh_access_token()
 
     def _call_api(self, ep, video_id, data=None, note='Downloading API JSON'):
         # Ref: https://ssl.pstatic.net/static/wevweb/2_3_2_11101725/public/static/js/2488.a09b41ff.chunk.js
         # From https://ssl.pstatic.net/static/wevweb/2_3_2_11101725/public/static/js/main.e206f7c1.js:
-        key = b'1b9cb6378d959b45714bec49971ade22e6e24e42'
         api_path = update_url_query(ep, {
             # 'gcc': 'US',
             'appId': 'be4d79eb8fc7bd008ee82c8ec4ff6fd4',
             'language': 'en',
-            'os': 'WEB',
-            'platform': 'WEB',
+            'os': self._CLIENT_PLATFORM,
+            'platform': self._CLIENT_PLATFORM,
             'wpf': 'pc',
         })
-        wmsgpad = int(time.time() * 1000)
-        wmd = base64.b64encode(hmac.HMAC(
-            key, f'{api_path[:255]}{wmsgpad}'.encode(), digestmod=hashlib.sha1).digest()).decode()
-        headers = {'Content-Type': 'application/json'} if data else {}
-        try:
-            return self._download_json(
-                f'https://global.apis.naver.com/weverse/wevweb{api_path}', video_id, note=note,
-                data=data, headers={**self._API_HEADERS, **headers}, query={
-                    'wmsgpad': wmsgpad,
-                    'wmd': wmd,
-                })
-        except ExtractorError as e:
-            if isinstance(e.cause, HTTPError) and e.cause.status == 401:
-                self.raise_login_required(
-                    'Session token has expired. Log in again or refresh cookies in browser')
-            elif isinstance(e.cause, HTTPError) and e.cause.status == 403:
-                if 'Authorization' in self._API_HEADERS:
-                    raise ExtractorError('Your account does not have access to this content', expected=True)
-                self.raise_login_required()
-            raise
+        for is_retry in (False, True):
+            wmsgpad = int(time.time() * 1000)
+            wmd = base64.b64encode(hmac.HMAC(
+                self._SIGNING_KEY, f'{api_path[:255]}{wmsgpad}'.encode(),
+                digestmod=hashlib.sha1).digest()).decode()
+            try:
+                return self._download_json(
+                    f'https://global.apis.naver.com/weverse/wevweb{api_path}', video_id, note=note,
+                    data=data, headers={
+                        **self._API_HEADERS,
+                        **self._get_authorization_header(),
+                        **({'Content-Type': 'application/json'} if data else {}),
+                        'WEV-device-Id': self._device_id,
+                    }, query={
+                        'wmsgpad': wmsgpad,
+                        'wmd': wmd,
+                    })
+            except ExtractorError as e:
+                if is_retry or not isinstance(e.cause, HTTPError):
+                    raise
+                elif self._is_logged_in and e.cause.status == 401:
+                    self._refresh_access_token()
+                    continue
+                elif e.cause.status == 403:
+                    if self._is_logged_in:
+                        raise ExtractorError(
+                            'Your account does not have access to this content', expected=True)
+                    self._report_login_error('login_required')
+                raise
 
     def _call_post_api(self, video_id):
-        path = '' if 'Authorization' in self._API_HEADERS else '/preview'
+        path = '' if self._is_logged_in else '/preview'
         return self._call_api(f'/post/v1.0/post-{video_id}{path}?fieldSet=postV1', video_id)
 
     def _get_community_id(self, channel):
@@ -290,12 +419,14 @@ class WeverseIE(WeverseBaseIE):
         elif live_status == 'is_live':
             video_info = self._call_api(
-                f'/video/v1.2/lives/{api_video_id}/playInfo?preview.format=json&preview.version=v2',
+                f'/video/v1.3/lives/{api_video_id}/playInfo?preview.format=json&preview.version=v2',
                 video_id, note='Downloading live JSON')
             playback = self._parse_json(video_info['lipPlayback'], video_id)
             m3u8_url = traverse_obj(playback, (
                 'media', lambda _, v: v['protocol'] == 'HLS', 'path', {url_or_none}), get_all=False)
-            formats = self._extract_m3u8_formats(m3u8_url, video_id, 'mp4', m3u8_id='hls', live=True)
+            # Live subtitles are not downloadable, but extract to silence "ignoring subs" warning
+            formats, _ = self._extract_m3u8_formats_and_subtitles(
+                m3u8_url, video_id, 'mp4', m3u8_id='hls', live=True)
 
         elif live_status == 'post_live':
             if availability in ('premium_only', 'subscriber_only'):
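
The wmsgpad/wmd request signing in _call_api can be sketched standalone; this assumes only the static signing key and the 255-character truncation shown in the diff, and real requests also need the other query parameters and headers that _call_api supplies:

    import base64
    import hashlib
    import hmac
    import time

    SIGNING_KEY = b'1b9cb6378d959b45714bec49971ade22e6e24e42'  # static key from the diff above

    def sign_api_path(api_path):
        # A millisecond timestamp is mixed into an HMAC-SHA1 over the
        # (truncated) API path; both values go into the query string
        wmsgpad = int(time.time() * 1000)
        wmd = base64.b64encode(hmac.HMAC(
            SIGNING_KEY, f'{api_path[:255]}{wmsgpad}'.encode(),
            digestmod=hashlib.sha1).digest()).decode()
        return {'wmsgpad': wmsgpad, 'wmd': wmd}

    print(sign_api_path('/post/v1.0/post-1-234567?fieldSet=postV1'))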

View File

@@ -45,7 +45,7 @@ class XinpianchangIE(InfoExtractor):
 
     def _real_extract(self, url):
         video_id = self._match_id(url)
-        webpage = self._download_webpage(url, video_id=video_id)
+        webpage = self._download_webpage(url, video_id=video_id, headers={'Referer': url})
         video_data = self._search_nextjs_data(webpage, video_id)['props']['pageProps']['detail']['video']
 
         data = self._download_json(

View File

@@ -35,6 +35,7 @@ from ...utils import (
 class _PoTokenContext(enum.Enum):
     PLAYER = 'player'
     GVS = 'gvs'
+    SUBS = 'subs'
 
 
 # any clients starting with _ cannot be explicitly requested by the user
@@ -174,6 +175,15 @@ INNERTUBE_CLIENTS = {
         'INNERTUBE_CONTEXT_CLIENT_NAME': 7,
         'SUPPORTS_COOKIES': True,
     },
+    'tv_simply': {
+        'INNERTUBE_CONTEXT': {
+            'client': {
+                'clientName': 'TVHTML5_SIMPLY',
+                'clientVersion': '1.0',
+            },
+        },
+        'INNERTUBE_CONTEXT_CLIENT_NAME': 75,
+    },
     # This client now requires sign-in for every video
     # It was previously an age-gate workaround for videos that were `playable_in_embed`
     # It may still be useful if signed into an EU account that is not age-verified
@@ -787,6 +797,7 @@ class YoutubeBaseInfoExtractor(InfoExtractor):
     def _download_ytcfg(self, client, video_id):
         url = {
+            'mweb': 'https://m.youtube.com',
            'web': 'https://www.youtube.com',
            'web_music': 'https://music.youtube.com',
            'web_embedded': f'https://www.youtube.com/embed/{video_id}?html5=1',
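
The new tv_simply client can be requested like any other innertube client via the player_client extractor argument; a hedged usage sketch through the Python API (the URL is an arbitrary example):

    import yt_dlp

    # Equivalent to: yt-dlp --extractor-args "youtube:player_client=tv_simply" URL
    opts = {'extractor_args': {'youtube': {'player_client': ['tv_simply']}}}
    with yt_dlp.YoutubeDL(opts) as ydl:
        info = ydl.extract_info('https://www.youtube.com/watch?v=BaW_jenozKc', download=False)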

View File

@@ -37,6 +37,7 @@ class YoutubeClipIE(YoutubeTabBaseInfoExtractor):
             'chapters': 'count:20',
             'comment_count': int,
             'heatmap': 'count:100',
+            'media_type': 'clip',
         },
     }]
 
@@ -59,6 +60,7 @@ class YoutubeClipIE(YoutubeTabBaseInfoExtractor):
             'url': f'https://www.youtube.com/watch?v={video_id}',
             'ie_key': YoutubeIE.ie_key(),
             'id': clip_id,
+            'media_type': 'clip',
             'section_start': int(clip_data['startTimeMs']) / 1000,
             'section_end': int(clip_data['endTimeMs']) / 1000,
             '_format_sort_fields': (  # https protocol is prioritized for ffmpeg compatibility

View File

@@ -35,6 +35,7 @@ class YoutubeYtBeIE(YoutubeBaseInfoExtractor):
             'duration': 59,
             'comment_count': int,
             'channel_follower_count': int,
+            'media_type': 'short',
         },
         'params': {
             'noplaylist': True,

View File

@@ -23,6 +23,8 @@ from ._base import (
     _split_innertube_client,
     short_client_name,
 )
+from .pot._director import initialize_pot_director
+from .pot.provider import PoTokenContext, PoTokenRequest
 from ..openload import PhantomJSwrapper
 from ...jsinterp import JSInterpreter
 from ...networking.exceptions import HTTPError
@@ -66,9 +68,13 @@ from ...utils import (
     urljoin,
     variadic,
 )
+from ...utils.networking import clean_headers, clean_proxies, select_proxy
 
 STREAMING_DATA_CLIENT_NAME = '__yt_dlp_client'
 STREAMING_DATA_INITIAL_PO_TOKEN = '__yt_dlp_po_token'
+STREAMING_DATA_FETCH_SUBS_PO_TOKEN = '__yt_dlp_fetch_subs_po_token'
+STREAMING_DATA_INNERTUBE_CONTEXT = '__yt_dlp_innertube_context'
 
 PO_TOKEN_GUIDE_URL = 'https://github.com/yt-dlp/yt-dlp/wiki/PO-Token-Guide'
@@ -244,7 +250,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
         '400': {'ext': 'mp4', 'height': 1440, 'format_note': 'DASH video', 'vcodec': 'av01.0.12M.08'},
         '401': {'ext': 'mp4', 'height': 2160, 'format_note': 'DASH video', 'vcodec': 'av01.0.12M.08'},
     }
-    _SUBTITLE_FORMATS = ('json3', 'srv1', 'srv2', 'srv3', 'ttml', 'vtt')
+    _SUBTITLE_FORMATS = ('json3', 'srv1', 'srv2', 'srv3', 'ttml', 'srt', 'vtt')
 
     _DEFAULT_CLIENTS = ('tv', 'ios', 'web')
     _DEFAULT_AUTHED_CLIENTS = ('tv', 'web')
@@ -376,6 +382,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'uploader': 'Afrojack',
                 'uploader_url': 'https://www.youtube.com/@Afrojack',
                 'uploader_id': '@Afrojack',
+                'media_type': 'video',
             },
             'params': {
                 'youtube_include_dash_manifest': True,
@@ -413,10 +420,11 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'channel_is_verified': True,
                 'heatmap': 'count:100',
                 'timestamp': 1401991663,
+                'media_type': 'video',
             },
         },
         {
-            'note': 'Age-gate video with embed allowed in public site',
+            'note': 'Formerly an age-gate video with embed allowed in public site',
             'url': 'https://youtube.com/watch?v=HsUATh_Nc2U',
             'info_dict': {
                 'id': 'HsUATh_Nc2U',
@@ -424,8 +432,8 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'title': 'Godzilla 2 (Official Video)',
                 'description': 'md5:bf77e03fcae5529475e500129b05668a',
                 'upload_date': '20200408',
-                'age_limit': 18,
-                'availability': 'needs_auth',
+                'age_limit': 0,
+                'availability': 'public',
                 'channel_id': 'UCYQT13AtrJC0gsM1far_zJg',
                 'channel': 'FlyingKitty',
                 'channel_url': 'https://www.youtube.com/channel/UCYQT13AtrJC0gsM1far_zJg',
@@ -443,8 +451,8 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'uploader_id': '@FlyingKitty900',
                 'comment_count': int,
                 'channel_is_verified': True,
+                'media_type': 'video',
             },
-            'skip': 'Age-restricted; requires authentication',
         },
         {
             'note': 'Age-gate video embedable only with clientScreen=EMBED',
@@ -507,6 +515,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'uploader': 'Herr Lurik',
                 'uploader_url': 'https://www.youtube.com/@HerrLurik',
                 'uploader_id': '@HerrLurik',
+                'media_type': 'video',
             },
         },
         {
@@ -546,6 +555,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'uploader': 'deadmau5',
                 'uploader_url': 'https://www.youtube.com/@deadmau5',
                 'uploader_id': '@deadmau5',
+                'media_type': 'video',
             },
             'expected_warnings': [
                 'DASH manifest missing',
@@ -581,6 +591,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'uploader_id': '@Olympics',
                 'channel_is_verified': True,
                 'timestamp': 1440707674,
+                'media_type': 'livestream',
             },
             'params': {
                 'skip_download': 'requires avconv',
@@ -615,6 +626,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'uploader_url': 'https://www.youtube.com/@AllenMeow',
                 'uploader_id': '@AllenMeow',
                 'timestamp': 1299776999,
+                'media_type': 'video',
             },
         },
         # url_encoded_fmt_stream_map is empty string
@@ -809,6 +821,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'like_count': int,
                 'age_limit': 0,
                 'channel_follower_count': int,
+                'media_type': 'video',
             },
             'params': {
                 'skip_download': True,
@@ -868,6 +881,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'uploader_id': '@BKCHarvard',
                 'uploader_url': 'https://www.youtube.com/@BKCHarvard',
                 'timestamp': 1422422076,
+                'media_type': 'video',
             },
             'params': {
                 'skip_download': True,
@@ -904,6 +918,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'channel_is_verified': True,
                 'heatmap': 'count:100',
                 'timestamp': 1447987198,
+                'media_type': 'video',
             },
             'params': {
                 'skip_download': True,
@@ -968,6 +983,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'comment_count': int,
                 'channel_is_verified': True,
                 'timestamp': 1484761047,
+                'media_type': 'video',
             },
             'params': {
                 'skip_download': True,
@@ -1070,6 +1086,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'tags': 'count:11',
                 'live_status': 'not_live',
                 'channel_follower_count': int,
+                'media_type': 'video',
             },
             'params': {
                 'skip_download': True,
@@ -1124,6 +1141,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'uploader_url': 'https://www.youtube.com/@ElevageOrVert',
                 'uploader_id': '@ElevageOrVert',
                 'timestamp': 1497343210,
+                'media_type': 'video',
             },
             'params': {
                 'skip_download': True,
@@ -1163,6 +1181,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'channel_is_verified': True,
                 'heatmap': 'count:100',
                 'timestamp': 1377976349,
+                'media_type': 'video',
             },
             'params': {
                 'skip_download': True,
@@ -1207,6 +1226,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'channel_follower_count': int,
                 'uploader': 'The Cinematic Orchestra',
                 'comment_count': int,
+                'media_type': 'video',
             },
             'params': {
                 'skip_download': True,
@@ -1275,6 +1295,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'uploader_url': 'https://www.youtube.com/@walkaroundjapan7124',
                 'uploader_id': '@walkaroundjapan7124',
                 'timestamp': 1605884416,
+                'media_type': 'video',
             },
             'params': {
                 'skip_download': True,
@@ -1371,6 +1392,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'channel_is_verified': True,
                 'heatmap': 'count:100',
                 'timestamp': 1395685455,
+                'media_type': 'video',
             }, 'params': {'format': 'mhtml', 'skip_download': True},
         }, {
             # Ensure video upload_date is in UTC timezone (video was uploaded 1641170939)
@@ -1401,6 +1423,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'uploader_id': '@LeonNguyen',
                 'heatmap': 'count:100',
                 'timestamp': 1641170939,
+                'media_type': 'video',
             },
         }, {
             # date text is premiered video, ensure upload date in UTC (published 1641172509)
@@ -1434,6 +1457,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'channel_is_verified': True,
                 'heatmap': 'count:100',
                 'timestamp': 1641172509,
+                'media_type': 'video',
             },
         },
         {  # continuous livestream.
@@ -1495,6 +1519,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'uploader': 'Lesmiscore',
                 'uploader_url': 'https://www.youtube.com/@lesmiscore',
                 'timestamp': 1648005313,
+                'media_type': 'short',
             },
         }, {
             # Prefer primary title+description language metadata by default
@@ -1523,6 +1548,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'uploader_id': '@coletdjnz',
                 'uploader': 'cole-dlp-test-acc',
                 'timestamp': 1662677394,
+                'media_type': 'video',
             },
             'params': {'skip_download': True},
         }, {
@@ -1551,6 +1577,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'uploader': 'cole-dlp-test-acc',
                 'timestamp': 1659073275,
                 'like_count': int,
+                'media_type': 'video',
             },
             'params': {'skip_download': True, 'extractor_args': {'youtube': {'lang': ['fr']}}},
             'expected_warnings': [r'Preferring "fr" translated fields'],
@@ -1587,6 +1614,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'comment_count': int,
                 'channel_is_verified': True,
                 'heatmap': 'count:100',
+                'media_type': 'video',
             },
             'params': {'extractor_args': {'youtube': {'player_client': ['ios']}}, 'format': '233-1'},
         }, {
@@ -1687,6 +1715,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'comment_count': int,
                 'channel_is_verified': True,
                 'heatmap': 'count:100',
+                'media_type': 'video',
             },
             'params': {
                 'extractor_args': {'youtube': {'player_client': ['ios'], 'player_skip': ['webpage']}},
@@ -1719,6 +1748,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'channel_follower_count': int,
                 'categories': ['People & Blogs'],
                 'tags': [],
+                'media_type': 'short',
             },
         },
     ]
@@ -1754,6 +1784,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'uploader_id': '@ChristopherSykesDocumentaries',
                 'heatmap': 'count:100',
                 'timestamp': 1211825920,
+                'media_type': 'video',
             },
             'params': {
                 'skip_download': True,
@@ -1784,6 +1815,11 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
         super().__init__(*args, **kwargs)
         self._code_cache = {}
         self._player_cache = {}
+        self._pot_director = None
+
+    def _real_initialize(self):
+        super()._real_initialize()
+        self._pot_director = initialize_pot_director(self)
 
     def _prepare_live_from_start_formats(self, formats, video_id, live_start_time, url, webpage_url, smuggled_data, is_live):
         lock = threading.Lock()
@@ -1819,6 +1855,12 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                     else:
                         retry.error = f'Cannot find refreshed manifest for format {format_id}{bug_reports_message()}'
                         continue
+
+                # Formats from ended premieres will be missing a manifest_url
+                # See https://github.com/yt-dlp/yt-dlp/issues/8543
+                if not f.get('manifest_url'):
+                    break
+
                 return f['manifest_url'], f['manifest_stream_number'], is_live
             return None
@@ -2186,21 +2228,21 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
     def _extract_n_function_name(self, jscode, player_url=None):
         varname, global_list = self._interpret_player_js_global_var(jscode, player_url)
-        if debug_str := traverse_obj(global_list, (lambda _, v: v.endswith('_w8_'), any)):
-            funcname = self._search_regex(
-                r'''(?xs)
-                    [;\n](?:
-                        (?P<f>function\s+)|
-                        (?:var\s+)?
-                    )(?P<funcname>[a-zA-Z0-9_$]+)\s*(?(f)|=\s*function\s*)
-                    \((?P<argname>[a-zA-Z0-9_$]+)\)\s*\{
-                    (?:(?!\}[;\n]).)+
-                    \}\s*catch\(\s*[a-zA-Z0-9_$]+\s*\)\s*
-                    \{\s*return\s+%s\[%d\]\s*\+\s*(?P=argname)\s*\}\s*return\s+[^}]+\}[;\n]
-                ''' % (re.escape(varname), global_list.index(debug_str)),
-                jscode, 'nsig function name', group='funcname', default=None)
-            if funcname:
-                return funcname
+        if debug_str := traverse_obj(global_list, (lambda _, v: v.endswith('-_w8_'), any)):
+            pattern = r'''(?x)
+                \{\s*return\s+%s\[%d\]\s*\+\s*(?P<argname>[a-zA-Z0-9_$]+)\s*\}
+            ''' % (re.escape(varname), global_list.index(debug_str))
+            if match := re.search(pattern, jscode):
+                pattern = r'''(?x)
+                    \{\s*\)%s\(\s*
+                    (?:
+                        (?P<funcname_a>[a-zA-Z0-9_$]+)\s*noitcnuf\s*
+                        |noitcnuf\s*=\s*(?P<funcname_b>[a-zA-Z0-9_$]+)(?:\s+rav)?
+                    )[;\n]
+                ''' % re.escape(match.group('argname')[::-1])
+                if match := re.search(pattern, jscode[match.start()::-1]):
+                    a, b = match.group('funcname_a', 'funcname_b')
+                    return (a or b)[::-1]
         self.write_debug(join_nonempty(
             'Initial search was unable to find nsig function name',
             player_url and f' player = {player_url}', delim='\n'), only_once=True)
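
The rework finds the function name by searching the player JS backwards from a known anchor: both the pattern and the source are reversed, which is why the new regex matches 'noitcnuf' ('function' reversed). An illustrative toy example of the same trick, not the actual player code:

    import re

    jscode = 'var abc=function(d){try{x()}catch(e){return W[0]+d}return d};'
    # Locate the catch-block anchor first (W[0] stands in for the global array lookup)
    anchor = re.search(r'\{\s*return\s+W\[0\]\s*\+\s*(?P<argname>[a-zA-Z0-9_$]+)\s*\}', jscode)
    reversed_head = jscode[anchor.start()::-1]  # everything up to the anchor, reversed
    m = re.search(r'noitcnuf\s*=\s*(?P<funcname>[a-zA-Z0-9_$]+)(?:\s+rav)?', reversed_head)
    print(m.group('funcname')[::-1])  # -> 'abc'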
@@ -2247,8 +2289,8 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
             rf'var {re.escape(funcname)}\s*=\s*(\[.+?\])\s*[,;]', jscode,
             f'Initial JS player n function list ({funcname}.{idx})')))[int(idx)]
 
-    def _extract_player_js_global_var(self, jscode, player_url):
-        """Returns tuple of strings: variable assignment code, variable name, variable value code"""
+    def _interpret_player_js_global_var(self, jscode, player_url):
+        """Returns tuple of: variable name string, variable value list"""
         extract_global_var = self._cached(self._search_regex, 'js global array', player_url)
         varcode, varname, varvalue = extract_global_var(
             r'''(?x)
@@ -2266,27 +2308,23 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
             self.write_debug(join_nonempty(
                 'No global array variable found in player JS',
                 player_url and f' player = {player_url}', delim='\n'), only_once=True)
-            return varcode, varname, varvalue
+            return None, None
 
-    def _interpret_player_js_global_var(self, jscode, player_url):
-        """Returns tuple of: variable name string, variable value list"""
-        _, varname, array_code = self._extract_player_js_global_var(jscode, player_url)
-        jsi = JSInterpreter(array_code)
+        jsi = JSInterpreter(varcode)
         interpret_global_var = self._cached(jsi.interpret_expression, 'js global list', player_url)
-        return varname, interpret_global_var(array_code, {}, allow_recursion=10)
+        return varname, interpret_global_var(varvalue, {}, allow_recursion=10)
 
     def _fixup_n_function_code(self, argnames, nsig_code, jscode, player_url):
-        varcode, varname, _ = self._extract_player_js_global_var(jscode, player_url)
-        if varcode and varname:
-            nsig_code = varcode + '; ' + nsig_code
-            _, global_list = self._interpret_player_js_global_var(jscode, player_url)
+        varname, global_list = self._interpret_player_js_global_var(jscode, player_url)
+        if varname and global_list:
+            nsig_code = f'var {varname}={json.dumps(global_list)}; {nsig_code}'
         else:
             varname = 'dlp_wins'
             global_list = []
 
         undefined_idx = global_list.index('undefined') if 'undefined' in global_list else r'\d+'
         fixed_code = re.sub(
-            rf'''(?x)
+            fr'''(?x)
                 ;\s*if\s*\(\s*typeof\s+[a-zA-Z0-9_$]+\s*===?\s*(?:
                     (["\'])undefined\1|
                     {re.escape(varname)}\[{undefined_idx}\]
@@ -2360,6 +2398,11 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
         return sts
 
     def _mark_watched(self, video_id, player_responses):
+        # cpn generation algorithm is reverse engineered from base.js.
+        # In fact it works even with dummy cpn.
+        CPN_ALPHABET = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_'
+        cpn = ''.join(CPN_ALPHABET[random.randint(0, 256) & 63] for _ in range(16))
+
         for is_full, key in enumerate(('videostatsPlaybackUrl', 'videostatsWatchtimeUrl')):
             label = 'fully ' if is_full else ''
             url = get_first(player_responses, ('playbackTracking', key, 'baseUrl'),
@@ -2370,11 +2413,6 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
             parsed_url = urllib.parse.urlparse(url)
             qs = urllib.parse.parse_qs(parsed_url.query)
 
-            # cpn generation algorithm is reverse engineered from base.js.
-            # In fact it works even with dummy cpn.
-            CPN_ALPHABET = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_'
-            cpn = ''.join(CPN_ALPHABET[random.randint(0, 256) & 63] for _ in range(16))
-
             # # more consistent results setting it to right before the end
             video_length = [str(float((qs.get('len') or ['1.5'])[0]) - 1)]
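
The relocated cpn generation (now computed once per call instead of once per tracking URL) runs standalone; a sketch of exactly the expression shown above:

    import random

    CPN_ALPHABET = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_'
    # 16 characters, each random draw masked to 6 bits to index the 64-character alphabet
    cpn = ''.join(CPN_ALPHABET[random.randint(0, 256) & 63] for _ in range(16))
    print(cpn)  # per the comment above, even a dummy value is accepted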
@@ -2824,7 +2862,8 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 continue
 
     def fetch_po_token(self, client='web', context=_PoTokenContext.GVS, ytcfg=None, visitor_data=None,
-                       data_sync_id=None, session_index=None, player_url=None, video_id=None, **kwargs):
+                       data_sync_id=None, session_index=None, player_url=None, video_id=None, webpage=None,
+                       required=False, **kwargs):
         """
         Fetch a PO Token for a given client and context. This function will validate required parameters for a given context and client.
@@ -2838,10 +2877,15 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
         @param session_index: session index.
         @param player_url: player URL.
         @param video_id: video ID.
+        @param webpage: video webpage.
+        @param required: Whether the PO Token is required (i.e. try to fetch unless policy is "never").
         @param kwargs: Additional arguments to pass down. May be more added in the future.
         @return: The fetched PO Token. None if it could not be fetched.
         """
 
+        # TODO(future): This validation should be moved into pot framework.
+        # Some sort of middleware or validation provider perhaps?
+
         # GVS WebPO Token is bound to visitor_data / Visitor ID when logged out.
         # Must have visitor_data for it to function.
         if player_url and context == _PoTokenContext.GVS and not visitor_data and not self.is_authenticated:
@@ -2863,6 +2907,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                     f'Got a GVS PO Token for {client} client, but missing Data Sync ID for account. Formats may not work.'
                     f'You may need to pass a Data Sync ID with --extractor-args "youtube:data_sync_id=XXX"')
 
+            self.write_debug(f'{video_id}: Retrieved a {context.value} PO Token for {client} client from config')
             return config_po_token
 
         # Require GVS WebPO Token if logged in for external fetching
@@ -2872,7 +2917,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 f'You may need to pass a Data Sync ID with --extractor-args "youtube:data_sync_id=XXX"')
             return
 
-        return self._fetch_po_token(
+        po_token = self._fetch_po_token(
             client=client,
             context=context.value,
             ytcfg=ytcfg,
@@ -2881,11 +2926,68 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
             session_index=session_index,
             player_url=player_url,
             video_id=video_id,
+            video_webpage=webpage,
+            required=required,
             **kwargs,
         )
+        if po_token:
+            self.write_debug(f'{video_id}: Retrieved a {context.value} PO Token for {client} client')
+        return po_token
 
     def _fetch_po_token(self, client, **kwargs):
-        """(Unstable) External PO Token fetch stub"""
+        context = kwargs.get('context')
+
+        # Avoid fetching PO Tokens when not required
+        fetch_pot_policy = self._configuration_arg('fetch_pot', [''], ie_key=YoutubeIE)[0]
+        if fetch_pot_policy not in ('never', 'auto', 'always'):
+            fetch_pot_policy = 'auto'
+        if (
+            fetch_pot_policy == 'never'
+            or (
+                fetch_pot_policy == 'auto'
+                and _PoTokenContext(context) not in self._get_default_ytcfg(client)['PO_TOKEN_REQUIRED_CONTEXTS']
+                and not kwargs.get('required', False)
+            )
+        ):
+            return None
+
+        headers = self.get_param('http_headers').copy()
+        proxies = self._downloader.proxies.copy()
+        clean_headers(headers)
+        clean_proxies(proxies, headers)
+        innertube_host = self._select_api_hostname(None, default_client=client)
+
+        pot_request = PoTokenRequest(
+            context=PoTokenContext(context),
+            innertube_context=traverse_obj(kwargs, ('ytcfg', 'INNERTUBE_CONTEXT')),
+            innertube_host=innertube_host,
+            internal_client_name=client,
+            session_index=kwargs.get('session_index'),
+            player_url=kwargs.get('player_url'),
+            video_webpage=kwargs.get('video_webpage'),
+            is_authenticated=self.is_authenticated,
+            visitor_data=kwargs.get('visitor_data'),
+            data_sync_id=kwargs.get('data_sync_id'),
+            video_id=kwargs.get('video_id'),
+            request_cookiejar=self._downloader.cookiejar,
+
+            # All requests that would need to be proxied should be in the
+            # context of www.youtube.com or the innertube host
+            request_proxy=(
+                select_proxy('https://www.youtube.com', proxies)
+                or select_proxy(f'https://{innertube_host}', proxies)
+            ),
+            request_headers=headers,
+            request_timeout=self.get_param('socket_timeout'),
+            request_verify_tls=not self.get_param('nocheckcertificate'),
+            request_source_address=self.get_param('source_address'),
+
+            bypass_cache=False,
+        )
+
+        return self._pot_director.get_po_token(pot_request)
 
     @staticmethod
     def _is_agegated(player_response):
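
The gating added to _fetch_po_token can be distilled into a small pure function; a sketch under the same rules (names are illustrative, not part of the codebase):

    def resolve_fetch_pot_policy(arg_value, context_required, explicitly_required):
        # Mirrors the logic above: 'never' always skips, 'always' always
        # fetches, and 'auto' (the default) fetches only when the client's
        # context requires a PO Token or the caller marked it required
        policy = arg_value if arg_value in ('never', 'auto', 'always') else 'auto'
        if policy == 'never':
            return False
        if policy == 'auto' and not context_required and not explicitly_required:
            return False
        return True

    assert resolve_fetch_pot_policy('', context_required=False, explicitly_required=False) is False
    assert resolve_fetch_pot_policy('always', False, False) is True
    assert resolve_fetch_pot_policy('auto', True, False) is True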
@ -3034,6 +3136,8 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
player_url = self._download_player_url(video_id) player_url = self._download_player_url(video_id)
tried_iframe_fallback = True tried_iframe_fallback = True
pr = initial_pr if client == 'web' else None
visitor_data = visitor_data or self._extract_visitor_data(master_ytcfg, initial_pr, player_ytcfg) visitor_data = visitor_data or self._extract_visitor_data(master_ytcfg, initial_pr, player_ytcfg)
data_sync_id = data_sync_id or self._extract_data_sync_id(master_ytcfg, initial_pr, player_ytcfg) data_sync_id = data_sync_id or self._extract_data_sync_id(master_ytcfg, initial_pr, player_ytcfg)
@@ -3043,16 +3147,24 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                'video_id': video_id,
                'data_sync_id': data_sync_id if self.is_authenticated else None,
                'player_url': player_url if require_js_player else None,
                'webpage': webpage,
                'session_index': self._extract_session_index(master_ytcfg, player_ytcfg),
                'ytcfg': player_ytcfg or self._get_default_ytcfg(client),
            }

            # Don't need a player PO token for WEB if using player response from webpage
            player_po_token = None if pr else self.fetch_po_token(
                context=_PoTokenContext.PLAYER, **fetch_po_token_args)
            gvs_po_token = self.fetch_po_token(
                context=_PoTokenContext.GVS, **fetch_po_token_args)
            fetch_subs_po_token_func = functools.partial(
                self.fetch_po_token,
                context=_PoTokenContext.SUBS,
                **fetch_po_token_args,
            )

            required_pot_contexts = self._get_default_ytcfg(client)['PO_TOKEN_REQUIRED_CONTEXTS']

            if (
@@ -3079,7 +3191,6 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                    only_once=True)
                deprioritize_pr = True

            try:
                pr = pr or self._extract_player_response(
                    client, video_id,
@@ -3097,10 +3208,13 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
            if pr_id := self._invalid_player_response(pr, video_id):
                skipped_clients[client] = pr_id
            elif pr:
                # Save client details for introspection later
                innertube_context = traverse_obj(player_ytcfg or self._get_default_ytcfg(client), 'INNERTUBE_CONTEXT')
                sd = pr.setdefault('streamingData', {})
                sd[STREAMING_DATA_CLIENT_NAME] = client
                sd[STREAMING_DATA_INITIAL_PO_TOKEN] = gvs_po_token
                sd[STREAMING_DATA_INNERTUBE_CONTEXT] = innertube_context
                sd[STREAMING_DATA_FETCH_SUBS_PO_TOKEN] = fetch_subs_po_token_func
                for f in traverse_obj(sd, (('formats', 'adaptiveFormats'), ..., {dict})):
                    f[STREAMING_DATA_CLIENT_NAME] = client
                    f[STREAMING_DATA_INITIAL_PO_TOKEN] = gvs_po_token
@@ -3109,9 +3223,19 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
            else:
                prs.append(pr)

            # web_embedded can work around age-gate and age-verification for some embeddable videos
            if self._is_agegated(pr) and variant != 'web_embedded':
                append_client(f'web_embedded.{base_client}')
            # Unauthenticated users will only get web_embedded client formats if age-gated
            if self._is_agegated(pr) and not self.is_authenticated:
                self.to_screen(
                    f'{video_id}: This video is age-restricted; some formats may be missing '
                    f'without authentication. {self._youtube_login_hint}', only_once=True)

            # EU countries require age-verification for accounts to access age-restricted videos
            # If account is not age-verified, _is_agegated() will be truthy for non-embedded clients
            embedding_is_disabled = variant == 'web_embedded' and self._is_unplayable(pr)
            if self.is_authenticated and (self._is_agegated(pr) or embedding_is_disabled):
                self.to_screen(
                    f'{video_id}: This video is age-restricted and YouTube is requiring '
                    'account age-verification; some formats may be missing', only_once=True)
@@ -3152,6 +3276,25 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
        else:
            self.report_warning(msg, only_once=True)

    def _report_pot_subtitles_skipped(self, video_id, client_name, msg=None):
        msg = msg or (
            f'{video_id}: Some {client_name} client subtitles require a PO Token which was not provided. '
            'They will be discarded since they are not downloadable as-is. '
            f'You can manually pass a Subtitles PO Token for this client with '
            f'--extractor-args "youtube:po_token={client_name}.subs+XXX" . '
            f'For more information, refer to {PO_TOKEN_GUIDE_URL}')

        subs_wanted = any((
            self.get_param('writesubtitles'),
            self.get_param('writeautomaticsub'),
            self.get_param('listsubtitles')))

        # Only raise a warning for non-default clients, to not confuse users.
        if not subs_wanted or client_name in (*self._DEFAULT_CLIENTS, *self._DEFAULT_AUTHED_CLIENTS):
            self.write_debug(msg, only_once=True)
        else:
            self.report_warning(msg, only_once=True)

    def _extract_formats_and_subtitles(self, streaming_data, video_id, player_url, live_status, duration):
        CHUNK_SIZE = 10 << 20
        PREFERRED_LANG_VALUE = 10
@@ -3255,8 +3398,15 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                        self._decrypt_signature(encrypted_sig, video_id, player_url),
                    )
                except ExtractorError as e:
                    self.report_warning(
                        f'Signature extraction failed: Some formats may be missing\n'
                        f'         player = {player_url}\n'
                        f'         {bug_reports_message(before="")}',
                        video_id=video_id, only_once=True)
                    self.write_debug(
                        f'{video_id}: Signature extraction failure info:\n'
                        f'         encrypted sig = {encrypted_sig}\n'
                        f'         player = {player_url}')
                    self.write_debug(e, only_once=True)
                    continue
@@ -3443,6 +3593,9 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                    hls_manifest_url = hls_manifest_url.rstrip('/') + f'/pot/{po_token}'
                fmts, subs = self._extract_m3u8_formats_and_subtitles(
                    hls_manifest_url, video_id, 'mp4', fatal=False, live=live_status == 'is_live')
                for sub in traverse_obj(subs, (..., ..., {dict})):
                    # HLS subs (m3u8) do not need a PO token; save client name for debugging
                    sub[STREAMING_DATA_CLIENT_NAME] = client_name
                subtitles = self._merge_subtitles(subs, subtitles)
                for f in fmts:
                    if process_manifest_format(f, 'hls', client_name, self._search_regex(
@@ -3454,6 +3607,9 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                if po_token:
                    dash_manifest_url = dash_manifest_url.rstrip('/') + f'/pot/{po_token}'
                formats, subs = self._extract_mpd_formats_and_subtitles(dash_manifest_url, video_id, fatal=False)
                for sub in traverse_obj(subs, (..., ..., {dict})):
                    # TODO: Investigate if DASH subs ever need a PO token; save client name for debugging
                    sub[STREAMING_DATA_CLIENT_NAME] = client_name
                subtitles = self._merge_subtitles(subs, subtitles)  # Prioritize HLS subs over DASH
                for f in formats:
                    if process_manifest_format(f, 'dash', client_name, f['format_id'], po_token):
@@ -3645,7 +3801,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
            reason = self._get_text(pemr, 'reason') or get_first(playability_statuses, 'reason')
            subreason = clean_html(self._get_text(pemr, 'subreason') or '')
            if subreason:
                if subreason.startswith('The uploader has not made this video available in your country'):
                    countries = get_first(microformats, 'availableCountries')
                    if not countries:
                        regions_allowed = search_meta('regionsAllowed')
@@ -3771,53 +3927,94 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
            'tags': keywords,
            'playable_in_embed': get_first(playability_statuses, 'playableInEmbed'),
            'live_status': live_status,
            'media_type': (
                'livestream' if get_first(video_details, 'isLiveContent')
                else 'short' if get_first(microformats, 'isShortsEligible')
                else 'video'),
            'release_timestamp': live_start_time,
            '_format_sort_fields': (  # source_preference is lower for potentially damaged formats
                'quality', 'res', 'fps', 'hdr:12', 'source', 'vcodec', 'channels', 'acodec', 'lang', 'proto'),
        }

        def get_lang_code(track):
            return (remove_start(track.get('vssId') or '', '.').replace('.', '-')
                    or track.get('languageCode'))

        def process_language(container, base_url, lang_code, sub_name, client_name, query):
            lang_subs = container.setdefault(lang_code, [])
            for fmt in self._SUBTITLE_FORMATS:
                query = {**query, 'fmt': fmt}
                lang_subs.append({
                    'ext': fmt,
                    'url': urljoin('https://www.youtube.com', update_url_query(base_url, query)),
                    'name': sub_name,
                    STREAMING_DATA_CLIENT_NAME: client_name,
                })

        subtitles = {}
        skipped_subs_clients = set()

        # Only web/mweb clients provide translationLanguages, so include initial_pr in the traversal
        translation_languages = {
            lang['languageCode']: self._get_text(lang['languageName'], max_runs=1)
            for lang in traverse_obj(player_responses, (
                ..., 'captions', 'playerCaptionsTracklistRenderer', 'translationLanguages',
                lambda _, v: v['languageCode'] and v['languageName']))
        }

        # NB: Constructing the full subtitle dictionary is slow
        get_translated_subs = 'translated_subs' not in self._configuration_arg('skip') and (
            self.get_param('writeautomaticsub', False) or self.get_param('listsubtitles'))

        # Filter out initial_pr which does not have streamingData (smuggled client context)
        prs = traverse_obj(player_responses, (
            lambda _, v: v['streamingData'] and v['captions']['playerCaptionsTracklistRenderer']))
        all_captions = traverse_obj(prs, (
            ..., 'captions', 'playerCaptionsTracklistRenderer', 'captionTracks', ..., {dict}))
        need_subs_langs = {get_lang_code(sub) for sub in all_captions if sub.get('kind') != 'asr'}
        need_caps_langs = {
            remove_start(get_lang_code(sub), 'a-')
            for sub in all_captions if sub.get('kind') == 'asr'}

        for pr in prs:
            pctr = pr['captions']['playerCaptionsTracklistRenderer']
            client_name = pr['streamingData'][STREAMING_DATA_CLIENT_NAME]
            innertube_client_name = pr['streamingData'][STREAMING_DATA_INNERTUBE_CONTEXT]['client']['clientName']
            required_contexts = self._get_default_ytcfg(client_name)['PO_TOKEN_REQUIRED_CONTEXTS']
            fetch_subs_po_token_func = pr['streamingData'][STREAMING_DATA_FETCH_SUBS_PO_TOKEN]

            pot_params = {}
            already_fetched_pot = False

            for caption_track in traverse_obj(pctr, ('captionTracks', lambda _, v: v['baseUrl'])):
                base_url = caption_track['baseUrl']
                qs = parse_qs(base_url)
                lang_code = get_lang_code(caption_track)
                requires_pot = (
                    # We can detect the experiment for now
                    any(e in traverse_obj(qs, ('exp', ...)) for e in ('xpe', 'xpv'))
                    or _PoTokenContext.SUBS in required_contexts)

                if not already_fetched_pot:
                    already_fetched_pot = True
                    if subs_po_token := fetch_subs_po_token_func(required=requires_pot):
                        pot_params.update({
                            'pot': subs_po_token,
                            'potc': '1',
                            'c': innertube_client_name,
                        })

                if not pot_params and requires_pot:
                    skipped_subs_clients.add(client_name)
                    self._report_pot_subtitles_skipped(video_id, client_name)
                    break

                orig_lang = qs.get('lang', [None])[-1]
                lang_name = self._get_text(caption_track, 'name', max_runs=1)
                if caption_track.get('kind') != 'asr':
                    if not lang_code:
                        continue
                    process_language(
                        subtitles, base_url, lang_code, lang_name, client_name, pot_params)
                    if not caption_track.get('isTranslatable'):
                        continue
                for trans_code, trans_name in translation_languages.items():
@@ -3837,10 +4034,25 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                        # Add an "-orig" label to the original language so that it can be distinguished.
                        # The subs are returned without "-orig" as well for compatibility
                        process_language(
                            automatic_captions, base_url, f'{trans_code}-orig',
                            f'{trans_name} (Original)', client_name, pot_params)
                    # Setting tlang=lang returns damaged subtitles.
                    process_language(
                        automatic_captions, base_url, trans_code, trans_name, client_name,
                        pot_params if orig_lang == orig_trans_code else {'tlang': trans_code, **pot_params})

            # Avoid duplication if we've already got everything we need
            need_subs_langs.difference_update(subtitles)
            need_caps_langs.difference_update(automatic_captions)
            if not (need_subs_langs or need_caps_langs):
                break

        if skipped_subs_clients and (need_subs_langs or need_caps_langs):
            self._report_pot_subtitles_skipped(video_id, True, msg=join_nonempty(
                f'{video_id}: There are missing subtitles languages because a PO token was not provided.',
                need_subs_langs and f'Subtitles for these languages are missing: {", ".join(need_subs_langs)}.',
                need_caps_langs and f'Automatic captions for {len(need_caps_langs)} languages are missing.',
                delim=' '))

        info['automatic_captions'] = automatic_captions
        info['subtitles'] = subtitles


@@ -0,0 +1,309 @@
# YoutubeIE PO Token Provider Framework
As part of the YouTube extractor, we have a framework for providing PO Tokens programmatically. This can be used by plugins.
Refer to the [PO Token Guide](https://github.com/yt-dlp/yt-dlp/wiki/PO-Token-Guide) for more information on PO Tokens.
> [!TIP]
> If publishing a PO Token Provider plugin to GitHub, add the [yt-dlp-pot-provider](https://github.com/topics/yt-dlp-pot-provider) topic to your repository to help users find it.
## Public APIs
- `yt_dlp.extractor.youtube.pot.cache`
- `yt_dlp.extractor.youtube.pot.provider`
- `yt_dlp.extractor.youtube.pot.utils`
Everything else is internal-only and no guarantees are made about the API stability.
> [!WARNING]
> We will try our best to maintain stability with the public APIs.
> However, due to the nature of extractors and YouTube, we may need to remove or change APIs in the future.
> If you are using these APIs outside yt-dlp plugins, please account for this by importing them safely.
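For example, a minimal sketch (not an official pattern) of guarding the imports so code outside the plugin system degrades gracefully instead of crashing if these modules ever move:

```python
# Minimal sketch of a defensive import; the fallback behavior is up to you
try:
    from yt_dlp.extractor.youtube.pot.provider import PoTokenProvider, register_provider
except ImportError:
    # The internal layout changed, or yt-dlp is too old; disable this integration
    PoTokenProvider = None
    register_provider = None

if PoTokenProvider is not None:
    ...  # define and register the provider here
```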
## PO Token Provider
`yt_dlp.extractor.youtube.pot.provider`
```python
from yt_dlp.extractor.youtube.pot.provider import (
    PoTokenRequest,
    PoTokenContext,
    PoTokenProvider,
    PoTokenResponse,
    PoTokenProviderError,
    PoTokenProviderRejectedRequest,
    register_provider,
    register_preference,
    ExternalRequestFeature,
)

from yt_dlp.networking.common import Request
from yt_dlp.extractor.youtube.pot.utils import get_webpo_content_binding
from yt_dlp.utils import traverse_obj
from yt_dlp.networking.exceptions import RequestError
import json


@register_provider
class MyPoTokenProviderPTP(PoTokenProvider):  # Provider class name must end with "PTP"
    PROVIDER_VERSION = '0.2.1'
    # Define a unique display name for the provider
    PROVIDER_NAME = 'my-provider'
    BUG_REPORT_LOCATION = 'https://issues.example.com/report'

    # -- Validation shortcuts. Set these to None to disable. --

    # Innertube Client Name.
    # For example, "WEB", "ANDROID", "TVHTML5".
    # For a list of WebPO client names,
    # see yt_dlp.extractor.youtube.pot.utils.WEBPO_CLIENTS.
    # Also see yt_dlp.extractor.youtube._base.INNERTUBE_CLIENTS
    # for a list of client names currently supported by the YouTube extractor.
    _SUPPORTED_CLIENTS = ('WEB', 'TVHTML5')

    _SUPPORTED_CONTEXTS = (
        PoTokenContext.GVS,
    )

    # If your provider makes external requests to websites (e.g. to youtube.com)
    # using another library or service (i.e., not _request_webpage),
    # set the request features that are supported here.
    # If only using _request_webpage to make external requests, set this to None.
    _SUPPORTED_EXTERNAL_REQUEST_FEATURES = (
        ExternalRequestFeature.PROXY_SCHEME_HTTP,
        ExternalRequestFeature.SOURCE_ADDRESS,
        ExternalRequestFeature.DISABLE_TLS_VERIFICATION,
    )

    def is_available(self) -> bool:
        """
        Check if the provider is available (e.g. all required dependencies are available)
        This is used to determine if the provider should be used and to provide debug information.

        IMPORTANT: This method SHOULD NOT make any network requests or perform any expensive operations.

        Since this is called multiple times, we recommend caching the result.
        """
        return True

    def close(self):
        # Optional close hook, called when YoutubeDL is closed.
        pass

    def _real_request_pot(self, request: PoTokenRequest) -> PoTokenResponse:
        # If you need to validate the request before making the request to the external source.
        # Raise yt_dlp.extractor.youtube.pot.provider.PoTokenProviderRejectedRequest if the request is not supported.
        if request.is_authenticated:
            raise PoTokenProviderRejectedRequest(
                'This provider does not support authenticated requests'
            )

        # Settings are pulled from extractor args passed to yt-dlp with the key `youtubepot-<PROVIDER_KEY>`.
        # For this example, the extractor arg would be:
        # `--extractor-args "youtubepot-mypotokenprovider:url=https://custom.example.com/get_pot"`
        external_provider_url = self._configuration_arg(
            'url', default=['https://provider.example.com/get_pot'])[0]

        # See below for logging guidelines
        self.logger.trace(f'Using external provider URL: {external_provider_url}')

        # You should use the internal HTTP client to make requests where possible,
        # as it will handle cookies and other networking settings passed to yt-dlp.
        try:
            # See docstring in _request_webpage method for request tips
            response = self._request_webpage(
                Request(external_provider_url, data=json.dumps({
                    'content_binding': get_webpo_content_binding(request),
                    'proxy': request.request_proxy,
                    'headers': request.request_headers,
                    'source_address': request.request_source_address,
                    'verify_tls': request.request_verify_tls,
                    # Important: If your provider has its own caching, please respect `bypass_cache`.
                    # This may be used in the future to request a fresh PO Token if required.
                    'do_not_cache': request.bypass_cache,
                }).encode(), proxies={'all': None}),
                pot_request=request,
                note=(
                    f'Requesting {request.context.value} PO Token '
                    f'for {request.internal_client_name} client from external provider'),
            )
        except RequestError as e:
            # If there is an error, raise PoTokenProviderError.
            # You can specify whether it is expected or not. If it is unexpected,
            # the log will include a link to the bug report location (BUG_REPORT_LOCATION).
            raise PoTokenProviderError(
                'Networking error while fetching PO Token from external provider',
                expected=True
            ) from e

        # Note: PO Token is expected to be base64url encoded
        po_token = traverse_obj(response, 'po_token')
        if not po_token:
            raise PoTokenProviderError(
                'Bad PO Token Response from external provider',
                expected=False
            )

        return PoTokenResponse(
            po_token=po_token,
            # Optional, add a custom expiration timestamp for the token. Use for caching.
            # By default, yt-dlp will use the default ttl from a registered cache spec (see below)
            # Set to 0 or -1 to not cache this response.
            expires_at=None,
        )


# If there are multiple PO Token Providers that can handle the same PoTokenRequest,
# you can define a preference function to increase/decrease the priority of providers.
@register_preference(MyPoTokenProviderPTP)
def my_provider_preference(provider: PoTokenProvider, request: PoTokenRequest) -> int:
    return 50
```
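With the example above installed as a plugin, its `url` setting would be passed on the command line as `--extractor-args "youtubepot-mypotokenprovider:url=https://custom.example.com/get_pot"` (the key is `youtubepot-` followed by the provider key, as noted in the comments above).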
## Logging Guidelines
- Use the `self.logger` object to log messages; a short sketch of the levels follows this list.
- When making HTTP requests or any other expensive operation, use `self.logger.info` to log a message to standard non-verbose output.
  - This lets users know what is happening when a time-expensive operation is taking place.
  - It is recommended to include the PO Token context and internal client name in the message if possible.
  - For example, `self.logger.info(f'Requesting {request.context.value} PO Token for {request.internal_client_name} client from external provider')`.
- Use `self.logger.debug` to log a message to the verbose output (`--verbose`).
  - For debugging information visible to users posting verbose logs.
  - Try not to log too much; prefer trace logging for detailed debug messages.
- Use `self.logger.trace` to log a message to the PO Token debug output (`--extractor-args "youtube:pot_trace=true"`).
  - Log as much as you like here as needed for debugging your provider.
- Avoid logging PO Tokens or any sensitive information to debug or info output.
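Putting the three levels together, a `_real_request_pot` body might log roughly like this (a sketch; `request` is the `PoTokenRequest` argument from the provider example above):

```python
# info: user-visible note before an expensive network operation
self.logger.info(
    f'Requesting {request.context.value} PO Token '
    f'for {request.internal_client_name} client from external provider')

# debug: goes to --verbose output; keep it brief
self.logger.debug('Using the external provider HTTP API')

# trace: only shown with --extractor-args "youtube:pot_trace=true"
self.logger.trace(f'Request settings: proxy={request.request_proxy!r}')

# Never log the PO Token itself at info or debug level
```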
## Debugging
- Use `-v --extractor-args "youtube:pot_trace=true"` to enable PO Token debug output.
## Caching
> [!WARNING]
> The following describes more advanced features that most users/developers will not need to use.
> [!IMPORTANT]
> yt-dlp currently has a built-in LRU Memory Cache Provider and a cache spec provider for WebPO Tokens.
> You should only need to implement cache providers if you want an external cache, or a cache spec if you are handling non-WebPO Tokens.
### Cache Providers
`yt_dlp.extractor.youtube.pot.cache`
```python
from yt_dlp.extractor.youtube.pot.cache import (
    PoTokenCacheProvider,
    register_preference,
    register_provider,
)

from yt_dlp.extractor.youtube.pot.provider import PoTokenRequest


@register_provider
class MyCacheProviderPCP(PoTokenCacheProvider):  # Provider class name must end with "PCP"
    PROVIDER_VERSION = '0.1.0'
    # Define a unique display name for the provider
    PROVIDER_NAME = 'my-cache-provider'
    BUG_REPORT_LOCATION = 'https://issues.example.com/report'

    def is_available(self) -> bool:
        """
        Check if the provider is available (e.g. all required dependencies are available)
        This is used to determine if the provider should be used and to provide debug information.

        IMPORTANT: This method SHOULD NOT make any network requests or perform any expensive operations.

        Since this is called multiple times, we recommend caching the result.
        """
        return True

    def get(self, key: str):
        # Similar to PO Token Providers, Cache Providers and Cache Spec Providers
        # are passed down extractor args matching key youtubepot-<PROVIDER_KEY>.
        some_setting = self._configuration_arg('some_setting', default=['default_value'])[0]
        return self.my_cache.get(key)

    def store(self, key: str, value: str, expires_at: int):
        # ⚠ expires_at MUST be respected.
        # Cache entries should not be returned if they have expired.
        self.my_cache.store(key, value, expires_at)

    def delete(self, key: str):
        self.my_cache.delete(key)

    def close(self):
        # Optional close hook, called when the YoutubeDL instance is closed.
        pass


# If there are multiple PO Token Cache Providers available, you can
# define a preference function to increase/decrease the priority of providers.

# IMPORTANT: Providers should be preferred in order of cache lookup speed.
# For example, a memory cache should have a higher preference than a disk cache.

# VERY IMPORTANT: yt-dlp has a built-in memory cache with a priority of 10000.
# Your cache provider should be lower than this.
@register_preference(MyCacheProviderPCP)
def my_cache_preference(provider: PoTokenCacheProvider, request: PoTokenRequest) -> int:
    return 50
```
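`self.my_cache` above is a placeholder for whatever backing store you use, not part of the framework. A minimal in-memory stand-in that honors `expires_at` might look like this (a hypothetical sketch for illustration only; yt-dlp's actual built-in memory cache appears later in this diff):

```python
import time


class MyCache:
    """Hypothetical dict-backed store; entries expire at the given UNIX timestamp."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at < int(time.time()):
            # Expired entries must never be returned
            self._data.pop(key, None)
            return None
        return value

    def store(self, key, value, expires_at):
        self._data[key] = (value, expires_at)

    def delete(self, key):
        self._data.pop(key, None)
```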
### Cache Specs
`yt_dlp.extractor.youtube.pot.cache`
These are used to provide information on how to cache a particular PO Token Request.
You might have a different cache spec for different kinds of PO Tokens.
```python
from yt_dlp.extractor.youtube.pot.cache import (
    PoTokenCacheSpec,
    PoTokenCacheSpecProvider,
    CacheProviderWritePolicy,
    register_spec,
)

from yt_dlp.utils import traverse_obj
from yt_dlp.extractor.youtube.pot.provider import PoTokenRequest


@register_spec
class MyCacheSpecProviderPCSP(PoTokenCacheSpecProvider):  # Provider class name must end with "PCSP"
    PROVIDER_VERSION = '0.1.0'
    # Define a unique display name for the provider
    PROVIDER_NAME = 'mycachespec'
    BUG_REPORT_LOCATION = 'https://issues.example.com/report'

    def generate_cache_spec(self, request: PoTokenRequest):
        client_name = traverse_obj(request.innertube_context, ('client', 'clientName'))
        if client_name != 'ANDROID':
            # If the request is not supported by the cache spec, return None
            return None

        # Generate a cache spec for the request
        return PoTokenCacheSpec(
            # Key bindings to uniquely identify the request. These are used to generate a cache key.
            key_bindings={
                'client_name': client_name,
                'content_binding': 'unique_content_binding',
                'ip': traverse_obj(request.innertube_context, ('client', 'remoteHost')),
                'source_address': request.request_source_address,
                'proxy': request.request_proxy,
            },
            # Default Cache TTL in seconds
            default_ttl=21600,
            # Optional: Specify a write policy.
            # WRITE_FIRST will write to the highest priority provider only,
            # whereas WRITE_ALL will write to all providers.
            # WRITE_FIRST may be useful if the PO Token is short-lived
            # and there is no use writing to all providers.
            write_policy=CacheProviderWritePolicy.WRITE_ALL,
        )
```
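Note that `default_ttl` only applies when the `PoTokenResponse` returned by a provider does not set its own `expires_at` (see the comment in the PO Token Provider example above).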


@@ -0,0 +1,3 @@
# Trigger import of built-in providers
from ._builtin.memory_cache import MemoryLRUPCP as _MemoryLRUPCP  # noqa: F401
from ._builtin.webpo_cachespec import WebPoPCSP as _WebPoPCSP  # noqa: F401


@@ -0,0 +1,78 @@
from __future__ import annotations

import datetime as dt
import typing
from threading import Lock

from yt_dlp.extractor.youtube.pot._provider import BuiltinIEContentProvider
from yt_dlp.extractor.youtube.pot._registry import _pot_memory_cache
from yt_dlp.extractor.youtube.pot.cache import (
    PoTokenCacheProvider,
    register_preference,
    register_provider,
)


def initialize_global_cache(max_size: int):
    if _pot_memory_cache.value.get('cache') is None:
        _pot_memory_cache.value['cache'] = {}
        _pot_memory_cache.value['lock'] = Lock()
        _pot_memory_cache.value['max_size'] = max_size

    if _pot_memory_cache.value['max_size'] != max_size:
        raise ValueError('Cannot change max_size of initialized global memory cache')

    return (
        _pot_memory_cache.value['cache'],
        _pot_memory_cache.value['lock'],
        _pot_memory_cache.value['max_size'],
    )


@register_provider
class MemoryLRUPCP(PoTokenCacheProvider, BuiltinIEContentProvider):
    PROVIDER_NAME = 'memory'
    DEFAULT_CACHE_SIZE = 25

    def __init__(
        self,
        *args,
        initialize_cache: typing.Callable[[int], tuple[dict[str, tuple[str, int]], Lock, int]] = initialize_global_cache,
        **kwargs,
    ):
        super().__init__(*args, **kwargs)
        self.cache, self.lock, self.max_size = initialize_cache(self.DEFAULT_CACHE_SIZE)

    def is_available(self) -> bool:
        return True

    def get(self, key: str) -> str | None:
        with self.lock:
            if key not in self.cache:
                return None
            value, expires_at = self.cache.pop(key)
            if expires_at < int(dt.datetime.now(dt.timezone.utc).timestamp()):
                return None
            # Re-insert to move the key to the end of the dict (most recently used)
            self.cache[key] = (value, expires_at)
            return value

    def store(self, key: str, value: str, expires_at: int):
        with self.lock:
            if expires_at < int(dt.datetime.now(dt.timezone.utc).timestamp()):
                return
            if key in self.cache:
                self.cache.pop(key)
            self.cache[key] = (value, expires_at)
            if len(self.cache) > self.max_size:
                # Evict the least recently used entry (first key in insertion order)
                oldest_key = next(iter(self.cache))
                self.cache.pop(oldest_key)

    def delete(self, key: str):
        with self.lock:
            self.cache.pop(key, None)


@register_preference(MemoryLRUPCP)
def memorylru_preference(*_, **__):
    # Memory LRU Cache SHOULD be the highest priority
    return 10000


@@ -0,0 +1,48 @@
from __future__ import annotations

from yt_dlp.extractor.youtube.pot._provider import BuiltinIEContentProvider
from yt_dlp.extractor.youtube.pot.cache import (
    CacheProviderWritePolicy,
    PoTokenCacheSpec,
    PoTokenCacheSpecProvider,
    register_spec,
)
from yt_dlp.extractor.youtube.pot.provider import (
    PoTokenRequest,
)
from yt_dlp.extractor.youtube.pot.utils import ContentBindingType, get_webpo_content_binding
from yt_dlp.utils import traverse_obj


@register_spec
class WebPoPCSP(PoTokenCacheSpecProvider, BuiltinIEContentProvider):
    PROVIDER_NAME = 'webpo'

    def generate_cache_spec(self, request: PoTokenRequest) -> PoTokenCacheSpec | None:
        bind_to_visitor_id = self._configuration_arg(
            'bind_to_visitor_id', default=['true'])[0] == 'true'
        content_binding, content_binding_type = get_webpo_content_binding(
            request, bind_to_visitor_id=bind_to_visitor_id)
        if not content_binding or not content_binding_type:
            return None

        write_policy = CacheProviderWritePolicy.WRITE_ALL
        if content_binding_type == ContentBindingType.VIDEO_ID:
            # Video-bound tokens are short-lived; writing to every provider is wasted effort
            write_policy = CacheProviderWritePolicy.WRITE_FIRST

        return PoTokenCacheSpec(
            key_bindings={
                't': 'webpo',
                'cb': content_binding,
                'cbt': content_binding_type.value,
                'ip': traverse_obj(request.innertube_context, ('client', 'remoteHost')),
                'sa': request.request_source_address,
                'px': request.request_proxy,
            },
            # Integrity token response usually states it has a ttl of 12 hours (43200 seconds).
            # We will default to 6 hours to be safe.
            default_ttl=21600,
            write_policy=write_policy,
        )
