I attempted a lot of different things in the beginning, trying to figure out what actually prevented it from working. The first thing I found was that Jellyfin is really made to work with complete video files, which is a bit of an issue when you are trying to watch a livestream. Jellyfin didn't have a problem with opening a video file that was still being written to. The problem was that it only recognised whatever video data was there when the file was opened, and there didn't seem to be an obvious way to get Jellyfin to read the newly written data at the end without restarting playback. Then I thought: what if we could trick Jellyfin into thinking the video is longer than it actually is? So I had a look at different media container formats.
With the assumption that the video needed to be played in a browser I needed something widely supported. The first stop was MP4.
I tried a few different command-line flags in FFmpeg, but I didn't manage to create an MP4 file that claimed to be longer than it actually was, so I quickly moved on.
MPEG-TS is the format Jellyfin uses when it transcodes video and is built around broadcasting, so on the surface it isn't a bad choice. I did also manage to create a video file that reported itself as being longer than it actually was, but I ran into the original issue once again. The video stopped playing when it had played whatever video was there when the file was initially loaded.
MKV is a great container format, but there is no support for playing it directly in browsers, so the video has to be at least remuxed first. Luckily, remuxing is a lot lighter than transcoding, so as long as the browser supports the video and audio codecs inside the MKV file, it can work even on low-end hardware. With that in mind I thought: let's try.
It pretty much worked on the first try and the thing I thought of as a negative, the remuxing, was actually the thing that made the whole thing work.
Let's say we have a livestream that we are currently downloading using Streamlink, putting it into an MKV container that we pretend is 1 hour long, and storing it on disk. Then let's play the file using Jellyfin. Let's say there is 1 minute worth of video available when we press play. When we press play, Jellyfin starts an FFmpeg process to remux the video, as your browser can't handle the MKV container. FFmpeg quickly goes through the data and exits at the 1-minute mark, as there is no more video data. Jellyfin just keeps playing until we hit the 1-minute mark, and when it does, it restarts the FFmpeg process. Jellyfin thinks the video is 1 hour long, and because the MKV file can't be played natively, Jellyfin restarts the FFmpeg process in order to continue playing.
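To make the remux step concrete, here is a rough sketch of the kind of FFmpeg invocation involved. This is illustrative only: Jellyfin's real command line has many more options, and the filenames here are hypothetical.

```shell
# -ss seeks to the current playback position, -c copy remuxes without
# re-encoding, and the output is cut into segments for the player to
# fetch. When FFmpeg hits the end of the written data it simply exits.
ffmpeg -ss 00:01:00 -i livestream.mkv \
    -c copy \
    -f hls -hls_time 6 \
    -hls_segment_filename 'segment%d.ts' \
    playlist.m3u8
```

Because the output is stream-copied rather than re-encoded, each restart is cheap on CPU; the cost that grows over time is the seek, as discussed below.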
Now this setup isn't perfect. It works great for short watch times, but you start to run into problems the longer you watch. Jellyfin only starts the FFmpeg process when it needs the next video segment, so there is no buffering. That isn't really a problem when you initially start watching, but when the FFmpeg process starts it needs to seek through the file until it gets to where you are before it can continue remuxing. The longer you watch, the longer it takes to seek, and the bigger the pauses get when it needs to restart FFmpeg.
To get around the seeking issues, and to avoid possibly having to babysit FFmpeg while watching, I ended up choosing to have FFmpeg read the input file in real time. That meant the stream download and the remux could go along at a constant rate. Great! Or at least somewhat: the way I initially did it caused a few problems. I had everybody join a Syncplay room, and for each person I manually started an FFmpeg process that mimicked the one Jellyfin started, except mine ran in real time. It worked, but it meant that if somebody dropped out for some reason, we would either have to start from the beginning or wait for the real-time remux to catch up again. This is because when you press play, an ID is generated based on a few parameters, including the time, which means it will always be different. So you can't reuse the previous remux if, for example, your browser crashes.
To work around that I made some small modifications to Jellyfin, created a little helper script, and made a Docker container for it, so it would be easy to go back and forth between my custom version and the official one. First, I modified the code to create a predictable name for the transcode cache files. This does a few things: I can start the transcode on the command line before even pressing play in Jellyfin, and everybody now shares the same transcode cache files, so only one remux needs to run and if somebody crashes they can jump right back in. Lastly, to make it work properly, I disabled the code that deletes the transcode cache when somebody leaves the playback session; since everyone shares the same files, they would be deleted for everyone. Instead, I made the little helper script used to start the transcode clean up the cache files when the FFmpeg process is stopped.
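As a rough illustration of what such a helper script boils down to, here is a minimal sketch. This is not the actual script: the cache file naming and the HLS output format are assumptions, and the environment variable default mirrors the one described later.

```shell
#!/bin/sh
# Hypothetical sketch of a start-transcode helper: run the remux in
# real time into the shared transcode cache, and clean the cache up
# when the user stops it with Ctrl+C.
INPUT="$1"
CACHE_DIR="${JELLYFIN_LIVESTREAM_TRANSCODE_DIR:-/config/transcodes}"

cleanup() {
    # Remove this stream's cache files once FFmpeg has been stopped.
    rm -f "$CACHE_DIR"/livestream*.ts "$CACHE_DIR"/livestream*.m3u8
}
trap cleanup INT TERM

# -re reads the input at its native frame rate, so the remux advances
# in real time alongside the ongoing download.
ffmpeg -re -i "$INPUT" -c copy \
    -f hls -hls_segment_filename "$CACHE_DIR/livestream%d.ts" \
    "$CACHE_DIR/livestream.m3u8"
```

The trap is what replaces Jellyfin's own cache deletion: cleanup happens when the operator stops the process, not when a viewer leaves.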
If you would like to try it out, it's luckily pretty easy, especially if you already have Jellyfin set up with the official Docker container. The container is intended to be a drop-in replacement for the official one, with one required and one optional config option. If you're unsure how to get started with the official Docker container, have a look at the instructions in the Jellyfin wiki.
The two options are configured through environment variables and are JELLYFIN_LIVESTREAM_TRANSCODE_DIR and JELLYFIN_LIVESTREAM_DEFAULT_MEDIA_PATH.
JELLYFIN_LIVESTREAM_TRANSCODE_DIR has to be set to the transcode directory used by Jellyfin, or it won't work properly. By default, it is set to /config/transcodes.
JELLYFIN_LIVESTREAM_DEFAULT_MEDIA_PATH is a quality-of-life option that lets you set the path to the media library that will contain the livestreams. This allows you to run the helper script with just the name of the file instead of the absolute path. Remember, it should be the path to the library inside the container.
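As a sketch of how this might look in practice, here is a Compose-style service definition. The image name, volume paths, and library path are placeholders to adapt to your own setup; the port and /config layout follow the official Jellyfin container.

```yaml
services:
  jellyfin:
    image: my-custom-jellyfin   # placeholder: the custom image from Docker Hub
    environment:
      # Required: must match Jellyfin's transcode directory inside the container
      - JELLYFIN_LIVESTREAM_TRANSCODE_DIR=/config/transcodes
      # Optional: the library path (inside the container) holding livestreams
      - JELLYFIN_LIVESTREAM_DEFAULT_MEDIA_PATH=/media/livestreams
    volumes:
      - ./config:/config
      - ./media:/media
    ports:
      - "8096:8096"
```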
When you have the container up and running and a livestream download going you can start the transcode using the below command:
docker exec -it CONTAINER_NAME start-transcode livestream.mkv
If you didn't specify a default media path, use the absolute path to livestream.mkv inside the container instead. To stop the transcode, just press Ctrl+C; that will stop the FFmpeg process and clean up the transcode cache.
To get a livestream to watch with some friends, I would recommend Streamlink. It's a great program and very easy to use. Most, if not all, livestreams you download will be stored in an MP4 container, where we want an MKV container. To get that you have two options.
The first option is to pipe the stream directly to FFmpeg and have FFmpeg remux it into an MKV container with an arbitrarily long length. In this example I use 6 hours.
streamlink -O "LIVESTREAM_LINK" best | ffmpeg -i pipe: -codec copy -t 06:00:00 livestream.mkv
Or you can save the .mp4 and then have FFmpeg read from that file in real-time.
streamlink -o temp.mp4 "LIVESTREAM_LINK" best
And then
ffmpeg -re -i temp.mp4 -codec copy -t 06:00:00 livestream.mkv
The first option is nice because you don't have to store the video twice, but I have experienced issues with the audio being delayed when using that method. If you run into that as well, try the second method.
To manage it all I can definitely recommend having a look at tmux or screen to make everything easier.
You can find a prebuilt docker container of the custom Jellyfin version on Docker Hub.
You can find the source on my Gitea instance, and I also have a mirror on GitHub. I'm active in both places, so you are welcome to open an issue on either platform if you have bugs, feature requests, questions, or anything else that comes to mind.
Shadowsocks is a SOCKS5 proxy with the main purpose of bypassing internet censorship. Shadowsocks was originally written in Python, but since the original release many different implementations have been made. I'll be showing you how to install and set up Shadowsocks-libev, an implementation written in C. It's very light on resources and can run on very low-end hardware.
To get the latest version of Shadowsocks-libev we first need to enable the backports repository for Debian Stretch
echo "deb http://ftp.debian.org/debian stretch-backports main" | sudo tee /etc/apt/sources.list.d/backports.list
Followed by
sudo apt update
Now we need to install the shadowsocks-libev package from the backports repository instead of the default repository.
sudo apt -t stretch-backports install shadowsocks-libev
There is a systemd service file included with shadowsocks-libev, so it can be completely managed by systemd.
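For example, assuming the stock unit name from the package, you can start it and enable it at boot like this:

```shell
# Start shadowsocks-libev now and have it start automatically at boot
sudo systemctl enable --now shadowsocks-libev
# Verify it is running
systemctl status shadowsocks-libev
```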
You should now be up and running with a fairly recent version of shadowsocks-libev, so let's take a look at some configuration.
Let's start by opening up the shadowsocks configuration file. You'll find that located in /etc/shadowsocks-libev/config.json
sudo nano /etc/shadowsocks-libev/config.json
Now let's go over some of the many configuration options you have.
| Option | Description |
|---|---|
| "server": "" | The IP address or hostname of the shadowsocks server |
| "server_port": "" | The server port to use |
| "local_address": "" | The local listening address |
| "local_port": "" | The local port to use |
| "password": "" | The password for the shadowsocks server |
| "method": "" | The encryption method to use |
| "timeout": "" | Timeout in seconds |
| "fast_open": true/false | Whether to enable TCP fast open |
| "nameserver": "" | Choose a different nameserver than the server's default |
| "mode": "" | Choose whether to use TCP ("tcp_only"), UDP ("udp_only"), or both ("tcp_and_udp"). Default is TCP only. |
The local address and port are only relevant for shadowsocks configurations on your client machines.
For encryption methods, I recommend using the default as it is very secure and also quite fast.
For best performance it is generally a good idea to set TCP fast open to true, but to use it you also need to enable it on the system, which I'll show in the next section of the guide.
Here is an example Shadowsocks server configuration:
{
"server": "0.0.0.0",
"server_port": 8388,
"password": "thisisapassword",
"timeout": 60,
"method": "chacha20-ietf-poly1305"
}
After you have set up shadowsocks to your liking, remember to restart it
sudo systemctl restart shadowsocks-libev
There are some changes you can make to your system to optimize shadowsocks. If you are interested in a bit more info on these optimizations, have a look at shadowsocks.org
First open up the following file
sudo nano /etc/security/limits.conf
And add the following 2 lines
* soft nofile 51200
* hard nofile 51200
Then before starting shadowsocks run
ulimit -n 51200
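Note that a ulimit set in your shell only affects processes started from that shell. Since the package runs shadowsocks under systemd, you may also want to raise the limit in a unit drop-in; the override path below follows the standard systemd convention and is an assumption on my part, not something the package ships.

```ini
# /etc/systemd/system/shadowsocks-libev.service.d/override.conf
[Service]
LimitNOFILE=51200
```

After creating the file, run sudo systemctl daemon-reload and restart the service for it to take effect.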
Open up the following
sudo nano /etc/sysctl.conf
And add the following
fs.file-max = 51200
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.core.netdev_max_backlog = 250000
net.core.somaxconn = 4096
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.ip_local_port_range = 10000 65000
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_fastopen = 3
net.ipv4.tcp_mem = 25600 51200 102400
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
net.ipv4.tcp_mtu_probing = 1
net.ipv4.tcp_congestion_control = hybla
When you have done that, run the following to apply the changes
sudo sysctl -p
Among the changes are TCP fast open which I talked about earlier. So you can now proceed with enabling TCP fast open in your shadowsocks config by adding the following to the config file
"fast_open": true
If you have kernel 4.9 or newer you can use TCP BBR for congestion control. It should give a noticeable improvement to performance. Debian 9 ships with kernel 4.9 by default. If you are unsure which kernel version you have, you can run uname -r in your terminal to check.
To enable TCP BBR open up the sysctl.conf
sudo nano /etc/sysctl.conf
And add the following 2 lines to it
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
And afterwards run the following to apply the changes
sudo sysctl -p
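You can confirm that the new settings took effect by reading them back:

```shell
# Should report "bbr" and "fq" respectively after applying the changes
sysctl net.ipv4.tcp_congestion_control
sysctl net.core.default_qdisc
```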
To use shadowsocks on Android you can download the official app from either the Play Store or directly from GitHub
The setup should be quite simple: just input the information from the server config file into the app and it should be ready to go. The shadowsocks app works like a VPN on Android, so everything gets routed through it and no special configuration should be needed.
To use shadowsocks on Linux you can follow the same installation instructions as for the server, as the shadowsocks-libev package contains both the server and client components. To run shadowsocks you can either create a config file or just use command-line options. I recommend creating a config file somewhere in your home directory. Below you can see an example config file that works with the server config I showed previously. This config assumes you have enabled TCP fast open; if not, please delete the last line.
{
"server": "SERVER_IP",
"server_port": 8388,
"method": "chacha20-ietf-poly1305",
"password": "thisisapassword",
"local_address": "127.0.0.1",
"local_port": 1080,
"timeout": 60,
"fast_open": true
}
Let's assume you name the file config.json. To start shadowsocks you simply type the following in a terminal window.
ss-local -c /path/to/file/config.json
Now to actually use shadowsocks you need to tell the programs you use to connect through the proxy. For example with Firefox you need to go into your network settings and tell it to use the proxy.
Depending on your reasons for using shadowsocks, it might be a good idea to have DNS queries go through it as well. If you're not quite sure, I would recommend enabling it.
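A quick way to check that the proxy works, and that DNS is resolved through it, is curl's --socks5-hostname option, which sends the hostname to the proxy instead of resolving it locally. Adjust the address and port to match your local_address and local_port.

```shell
# Fetch a page through the local shadowsocks client; the hostname is
# resolved by the proxy, not by your local resolver.
curl --socks5-hostname 127.0.0.1:1080 https://example.com
```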
Depending on your usage scenario it might be a good idea to use obfuscation. Luckily there is a nice plugin made for shadowsocks called simple-obfs.
For Debian 9, there are 2 ways to install simple-obfs. The first is to install it from the stretch-backports repository, the same way we installed shadowsocks-libev.
sudo apt -t stretch-backports install simple-obfs
The second way is to compile it from source. This method works on any distro, but there are slight differences in which packages you need to install beforehand, and what they are called. Below I'll show you how to do it on Debian 9; if you're using another distro I recommend checking out the simple-obfs GitHub repository
First we need to install the following packages
sudo apt install --no-install-recommends build-essential autoconf libtool libssl-dev libpcre3-dev libev-dev asciidoc xmlto automake git
Next we need to clone the GitHub repository and compile simple-obfs using the following commands.
git clone https://github.com/shadowsocks/simple-obfs.git
cd simple-obfs
git submodule update --init --recursive
./autogen.sh
./configure && make
sudo make install
Next we need to configure shadowsocks to actually use simple-obfs. To do so add the following to the bottom of your shadowsocks config file.
"plugin": "obfs-server",
"plugin_opts": "obfs=tls;fast-open"
The next step is to enable it in your clients.
On Android you need to download the simple-obfs app. You can get it from the Play Store or directly from GitHub
After you have installed the app, head back to the shadowsocks app. At the bottom of the settings for an individual server you'll find the plugin options, where you can enable simple-obfs
You can then change the settings for the simple-obfs plugin. If you're following this guide, I recommend using TLS rather than HTTP. The domain used for disguising the traffic can be anything you like, but bing.com is a good default.
The install process for the client on Linux is the same as for the server, as the package includes both the server and client components. To enable the use of simple-obfs, add the following lines to your shadowsocks config.
"plugin": "obfs-local",
"plugin_opts": "obfs=tls;obfs-host=www.bing.com"