Technology

Making a Mess of an E-mail Host

I made an attempt at running my own e-mail server. I was talked into it by Qorg, who reminded me of the original emailwiz script by Luke Smith. Since I run an Arch Linux server rather than a Debian derivative, I set out to port it.

I intended to finish this completely, if only for the sake of credibility when sharing my findings on setting up an e-mail server, but in the end I have left it 98% finished. For reasons I'll get into, it wasn't quite worth completing for me. Nonetheless, I hope what I learned can be of use to someone.

How would this in theory be done?

I have made a port of the script, to Arch Linux, which you can find here. If you use a Debian-derivative you can make use of the original script, but for the other handful of weirdos out there that use Arch as a server OS, this may help you. I've taken the time to test most aspects of the script and iron out bugs caused by the peculiarities between Debian and Arch. The README details much of the prerequisites, and the process involved in set-up.

As I said, I got 98% of the way there before throwing in the towel, not 100%. I am confident this script will get you most of the way. It will not get you all of the way - there will be debugging to do - but it should make your set-up substantially simpler than it would otherwise have been.

Is this a good idea to try?

Eh, maybe. Here are some criteria that may help you decide whether to try it yourself, if you are interested.

You must be able to set up a PTR record (reverse DNS) for your e-mail host. You may need to negotiate this with your ISP or server provider, and it typically requires a static IP. I suspect this was the stumbling block that made my set-up so much more difficult; without it, your e-mail server will be far less useful, as large providers will filter you.
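To check whether a PTR record is in place, you can query the reverse zone directly. A minimal sketch of what a PTR lookup actually asks for (the IP address below is a documentation example, not a real mail host):

```shell
# Build the reverse-DNS (in-addr.arpa) name for an IPv4 address -
# this is the name a PTR lookup queries behind the scenes.
ptr_name() {
    echo "$1" | awk -F. '{ printf "%s.%s.%s.%s.in-addr.arpa\n", $4, $3, $2, $1 }'
}

ptr_name 203.0.113.57   # prints: 57.113.0.203.in-addr.arpa

# With dig installed, both of these should return your mail host's name:
# dig -x 203.0.113.57 +short
# dig PTR 57.113.0.203.in-addr.arpa +short
```

If the lookup returns nothing, or a generic ISP hostname, large providers are likely to treat your mail as spam.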

You should meet more than half of these criteria:

Debugging is your friend

You will in all likelihood run into issues. Here are some good things to try:

Thoughts on the pros and cons of self-hosting mail

Privacy

Naturally, self-hosting will always be better from a privacy standpoint, as the number of components you need to trust is smaller. Any other mail provider is able to read your e-mails if they want to. Even if they offer server-side encryption as protection against a possible attack, that doesn't mean they can't decrypt the mail themselves.

Client-side encryption levels the playing field here to a degree. With it, any mail you send is illegible to anyone bar the sender and the recipient - provided your keys are not compromised - no matter which e-mail provider you use. Of course, if your provider holds things like your real name and phone number as part of registration, you are screwed anyway.

Security

This one will probably go to the larger e-mail providers rather than to small individual projects - unless you really do know security and administration better than teams of people serving thousands (if you do, I have no idea how you got here or why you're reading this). In most cases you will need to thoroughly educate yourself to achieve a comparable level of security - although it can be argued that a mail server serving a single individual or a few people is also a smaller, less enticing target for attacks.

Reliability

This could go either way. If you can guarantee the physical safety of your server, and it sees little network congestion, you may find it more reliable and faster than a large provider. Or you may not. It really depends on your server, which you should know better than anyone.

Why have I left the e-mail server unfinished?

Mainly because it would not be worth it to me. As I mentioned, I cannot get a PTR record because I do not have a static IP address, so no reverse DNS lookups. That would make my e-mail much less useful, as mail I sent to large providers would be silently dropped and never reach its destination. I had planned to finish this largely for credibility's sake, but I decided to cut it short, because I'm building up a backlog of other things I'd like to do and would prefer to talk about.

My thoughts on some e-mail providers

If you've read all of the above, and would prefer to find a better, more privacy-respecting provider, but without going as far as making your own server, I have experience with some.

Cock.li

Their privacy policy is not bad - you can see for yourself. You can use your own clients. You can sign up via Tor (Cock.li additionally has a Tor address), there were no Captchas last I checked, no personal information is required, and there are no mandatory fees.

Note however that Cock.li is fairly honest about the fact that it's entirely possible to read your e-mails. They are transparent about this on their home page:

Cock.li doesn't parse your E-mail to provide you with targeted ads, nor does cock.li read E-mail contents unless it's for a legal court order. However, it is 100% possible for me to read E-mail, and IMAP/SMTP doesn't provide user-side/client-side encryption, so you're just going to have to take my word for it. Any encryption implementation would still technically allow me to read E-mail, too. This was true for Lavabit as well -- while your E-mail was stored encrypted (only if you were a paid member, which most people forget), E-mail could still technically be intercepted while being received / sent (SMTP), or while being read by your mail client (IMAP). For privacy, we recommend encrypting your E-mails using PGP using a mail client add-on like Enigmail, or downloading your mail locally with POP and regularly deleting your mail from our server.

With the lack of sign-up requirements and the ability to use your own clients - combined with client-side encryption - I find Cock.li works well for my use case. It sells itself as a joke service, but there are a few serious-sounding domains. People complain that it goes down, but in my experience that has happened only a handful of times in the last year, for a few hours at most.

Dismail.de

Dismail supports alternative mail clients, you may sign up over Tor without filling in a Captcha, and no personal information is required. Signing up involves responding to an automated message over XMPP, which in my case meant waiting an hour or so.

Their terms of service do not sound ideal, and give the impression there are many ways in which you could be banned. One example is below:

sending of messages with the aim of harming or destroying, violate privacy, infringe the intellectual property, to issue statements offensive, fraudulent, obscene, racist, xenophobic, discriminatory, or any other form of content prohibited by law.

Why not Disroot/RiseUp/other privacy-respecting mail service?

As I said, I have tried many, and I found trying to apply to them impractical. RiseUp is an insider-invite-only service, and Disroot has a large list of potential offences you could fall under. Additionally, when signing up to Disroot, I was told to wait because it was the weekend. I came back during the week, and I was told to wait because it was the weekend. I try to balance maximising my privacy with ensuring actual practicality.

2020.11.07


YouTube Ecosystem Wholly in a Terminal

There's a reason I suddenly got really into RSS as a format. It all started quite recently, when the 'primary' web mirror for the software Invidious - invidio.us, which now just hosts a list of other instances - was going down. Of course, this didn't mean the other instances would go down too, so I thought little of it. I preferred to use Invidious rather than the normal YouTube front-end, for various reasons related to internet privacy:

That's alongside the fact that the default YouTube front-end is an unnecessarily bloated CPU-eater. I believe they have removed the legacy front-end now, though I've not checked in a long time.

Invidious was convenient: rather than using a Google account for subscriptions, it is effectively just a GUI front-end for an RSS feed. It can also play videos without the need for JavaScript.

But Invidious has its own problems. Hosting tends to be highly unreliable. I can't tell how easy or difficult it is to host an Invidious instance, as I've never done it, but the offshoot instances in particular tend to break often, and are sometimes inaccessible altogether.

Returning to the Original Story

So, before long, invidio.us went 'down', as expected. But I noticed something strange begin to happen: as I used other instances instead, it seemed like they were breaking even more often. If that were true, it would be highly conspicuous - it would imply that these offshoot instances somehow depended on the primary instance, and were now running into problems. That's quite troublesome design. Of course, I can't prove it without taking a serious look at the codebase, which I don't plan to - it could just be cognitive bias. But it got the ball rolling, and made me consider my alternatives.

YouTube offers an RSS feed for every channel by default. You can in fact subscribe to YouTube channels in an RSS feed reader - this is exactly how Invidious works. There are RSS feed readers - such as newsboat - that run in a terminal. Put two and two together: I can follow my YouTube subscriptions from the terminal.
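Concretely, every channel's feed lives at a predictable URL keyed on its channel ID, so subscribing is just one line in newsboat's urls file. A small sketch (the channel ID below is a made-up placeholder):

```shell
# YouTube exposes an Atom feed per channel at this endpoint:
feed_url() {
    echo "https://www.youtube.com/feeds/videos.xml?channel_id=$1"
}

# Print the feed URL for a (placeholder) channel ID;
# append a line like this to ~/.newsboat/urls to subscribe.
feed_url "UCxxxxxxxxxxxxxxxxxxxxxx"
```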

So I went ahead with that

It all worked perfectly well. I added a few macros - run via ",<key>" in newsboat - to play audio, play video, download audio, download video, and open in Qutebrowser. You can see what I've done below:

macro v set browser "mpv --ytdl-format=22\/18 --ytdl %u" ; open-in-browser ; set browser "lynx %u"
macro V set browser "youtube-dl --add-metadata -f 22/18 %u" ; open-in-browser ; set browser "lynx %u"
macro a set browser "mpv --ytdl-format bestaudio --ytdl %u" ; open-in-browser ; set browser "lynx %u"
macro A set browser "youtube-dl --add-metadata -x --audio-format mp3 %u" ; open-in-browser ; set browser "lynx %u"
macro q set browser "qutebrowser %u" ; open-in-browser ; set browser "lynx %u"

This was great. And then I wanted to search for a song I didn't have a copy of, and didn't quite remember the name of. And then I thought, "Wait, how am I supposed to search?"

A TUI client for YouTube

The first thing I did, rather than reinvent the wheel, was to look to see what else others had done. But it was all pretty poor.

Generally they were written in Python, and almost all of them didn't work. The few that did 'work' packaged a bunch of things I didn't need - like a video player, when I already had mpv. I just needed the ability to query.

And so, I wrote my own

The program, 'ytsearch', offers you this functionality in selecting queried videos:

Data shown in search queries:

Actions on selected videos:

Preferences which can be set as parameters or in a config file:

Beyond a standard GNU/Linux system with coreutils - this program depends on mpv, youtube-dl, and to a minor extent on xclip (which makes video sharing more convenient - no functionality is broken by editing it out).

At first, I tried to use the search functionality packaged with youtube-dl - 'ytsearch' - which can be used to get one result, five (ytsearch5), or as many as it can manage (ytsearchall). But I found this an extremely slow process that was difficult to format and work with. I also think it is 'unfriendly' to YouTube, as it makes many video requests to format a single list of search results. This led to me getting banned very quickly.

So I pretty quickly ditched that idea. I couldn't curl YouTube either, though - its results don't appear without JavaScript enabled. So I settled for curling Invidious instances for links (their format is fairly universal across instances; I've not noticed any problems yet), and then passing those links on to download or stream from YouTube itself. This method is much 'friendlier', and far faster to query - even for playlists with hundreds of videos (displayed in pages you can scroll through).
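As a rough illustration of the approach (not the actual ytsearch code): most instances expose the common Invidious JSON API, from which video IDs can be pulled with ordinary text tools. The instance hostname below is a placeholder.

```shell
# Build a search-API URL against a (placeholder) Invidious instance.
search_url() {
    query=$(echo "$1" | sed 's/ /+/g')    # crude URL encoding: spaces only
    echo "https://invidious.example/api/v1/search?q=${query}"
}

# Pull the videoId fields out of the JSON the API returns.
extract_ids() {
    grep -o '"videoId":"[^"]*"' | cut -d'"' -f4
}

# In practice: curl -s "$(search_url 'some song')" | extract_ids
# Demonstrated here on a canned response (prints the two IDs, one per line):
echo '[{"videoId":"dQw4w9WgXcQ"},{"videoId":"abc123defGH"}]' | extract_ids
```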

Drawbacks and Concessions

It has some drawbacks, of course. It is very scrappy scripting - this program evolved as I needed it to - and as such the code is an absolute shambles. It at least does work. Naturally, it's not even close to POSIX compliant, and depends heavily on Bash. I could likely have made a better program if I bothered learning ncurses, but I don't use YouTube often enough to justify that kind of effort.

It also only works as well as your Invidious instance works - if the instance has problems, so will the script. To mitigate this, I've made it very easy to test and change instances via parameters and config files. It reports reasonably well on what the issue could be if it fails to fetch videos.

Companion Programs

'ytsearch' has two companion programs - 'ytformat' and 'ytchannel'.

'ytformat' just takes links pointing to an Invidious instance and turns them into YouTube links, if you are looking to share them as such. Fairly straightforward.
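The core of such a conversion is just a host swap; a minimal sketch (not the actual ytformat source, and the instance hostname is a placeholder):

```shell
# Rewrite an Invidious watch link to its youtube.com equivalent.
to_youtube() {
    echo "$1" | sed 's|https*://[^/]*/watch|https://www.youtube.com/watch|'
}

to_youtube "https://invidious.example/watch?v=dQw4w9WgXcQ"
# prints: https://www.youtube.com/watch?v=dQw4w9WgXcQ
```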

'ytchannel' is a fairly hacky script that downloads and deletes a low-quality audio copy of a video, in order to discover the YouTube channel it belongs to, and gives it to you in an RSS-feed-compatible format. Thus, with the three programs together, it is entirely possible to run the full YouTube 'ecosystem' from a terminal - advanced search, channel subscriptions and all - without ever needing to leave it.

Getting the Scripts

You can find all of the aforementioned scripts on my Git server, in a repository of their own, here. I hope they work for you. I'd also love to see improvements, if anyone is interested - probably written in something better than Bash.

2020.09.25


Why I Like Qutebrowser

I've used Qutebrowser for quite a long time now - get it here. This will be a post from the perspective of a fan, rather than about something I've worked on myself, so I've given it a new section of its own. Generally, I don't like to repeat myself, or repeat what others have already said. That's why, although there are plenty of programs I love working with, I may not talk about them at great length when the reasons I like them are much the same as those already stated by others elsewhere.

But I feel like Qutebrowser, as a web browser, is an extremely undersold program, and although its claim to fame (Vim-like keybindings) is deserved, there's so much more to it that I've never heard nor seen anyone speak about. As someone that has had exposure to many browsers over many years (including more obscure ones like Waterfox, IceCat, and Pale Moon - I still have a little fondness for the latter two), I couldn't imagine using anything else at this point. So, I thought I'd do some missionary work.

Firstly: Vim-like Keybindings

This is the main focus of anyone speaking about it, so I suppose I have to speak about it too. And it is a great feature, but, as aforementioned, just not all there is to this browser.

Why would someone want to use a browser structured like Vim?

There's the obvious speed benefit involved. This is the first thing that comes to mind for anyone approaching Qutebrowser for the first time. Having your movement keys as h, j, k, and l on the home row makes for far faster movement once you get used to it (that presumes a QWERTY keyboard - if you use, say, Dvorak, you can absolutely change these bindings). Of course, there's nothing to stop you using the arrow keys, but that's for plebs, right?

Then, of course, you can use J and K to move between tabs, gl and gr to move tabs, and so on. Every action you can think of making on any standard, fully-featured web browser is available here, largely on the home row. But "you can do most things from the home row" massively undersells the 'point' of Vim, and likewise, of Qutebrowser.

Bulk actions and macros

This is where the power of Vim, and of Qutebrowser, actually begins to show. Naturally there are many more keybindings in Qutebrowser, but for the sake of example, I'll use the "gl" command from earlier - this moves your current tab one position to the left among the other tabs. But it can also be run as a bulk action: say you have 5 tabs open, and your highlighted tab is at position 5. Run "g3l" and your tab moves to position 2, while the tabs at positions 2 to 4 each shift along by one. These bulk actions can be applied to most commands where doing something multiple times would make sense. To get a feeling for the amount of power this gives you in complex actions, take a look below:

Of course, you don't need to remember all of those, so don't feel intimidated. You'll quickly and naturally find yourself able to recall those that are of use to you often. But for these complex bulk actions, perhaps there's a very complex one particular to your exact use case, that's going to be hard to remember. Like Vim, this is where inbuilt macros come in.

These can be accessed within Qutebrowser's 'command line' with :record-macro and :run-macro. More conveniently, by default they can be reached with q and @, much like Vim. Press q to begin a macro, press a key to bind it to, and then perform whatever action you want, however long and complex. Hit q again to finish recording; then play it back with @. These features make Qutebrowser more powerful and convenient than any other browser. Your extremely weird, specific command, particular to you and only you, can happen instantly with something like @k - two key presses.

Hints and Follow

A key component of how Qutebrowser functions as an almost entirely keyboard-based browser (you can use clicks, if you really want to) is 'hints' and 'follow'. The 'f' key is used to follow links - because, of course, the point of hypertext is to be able to quickly follow 'links', jumping from one information source to the next. But how does it cope if there are, say, 200 links on the screen? Do you just type 'f157', and are you expected to know which link is the 157th on the page?

On the contrary, Qutebrowser handles links fairly intelligently - this is where 'hints' come in. Rather than '157': if the page has 9 links or fewer, they are designated by home-row letters - 'asdfghjkl'. So you can type 'f s' to go to link 's'. If there are dozens of links, they may be designated by a second 'digit' - you might type 'f jk'. If there are hundreds, a third 'digit': 'f skl'. You never leave the home row (and again, you can change the default keys), and the enumeration of links never gets out of hand. But how do you know which link is which?

Hints are an essential part of Qutebrowser's GUI. Upon typing 'f', you are in 'follow hint' mode. All links within the window will, by default, appear with a highlighted letter or letters beside them. As you type, in the case of multiple letters, the hints on screen are narrowed down (e.g. upon typing 'f g', only links labelled 'g*' will remain). It makes following links easier than doing so with the mouse. I'll show an example below - bearing in mind I've slightly configured the appearance of my Qutebrowser install:

Incidentally, I've actually altered some of the quirks on my website to work well with Qutebrowser. Browsing this site on Qutebrowser is a dream, it's great.

And of course, as an aside mention, Qutebrowser has history, tab-completion, and bookmarks, like any other good browser. It also has 'quickmarks', allowing you to bind a URL to a shorter term to 'o'pen more quickly. It also has a console, inspect element, and most things you would generally like to see.

Minimalist GUI saves space on your screen and your disk

Qutebrowser's minimalist GUI takes up very little real estate on your screen, which is another bonus, especially if like mine your screen is about 11 inches in size. Upon 'o'pening a URL, loading a quickmark, so on, Qutebrowser will open a completion window, which takes up half the screen - this takes the appearance of a simple list, and can be configured to be smaller - I prefer to keep it at about a third of the screen. Essentially, everything is only as complex as it needs to be, and is generally out of the way.

This has another benefit: Qutebrowser is far more lightweight than many other browsers (despite being Chromium-based, via QtWebEngine), and has a small install size - if you forgive the dependencies on Qt and Python, which you probably already have installed for something else anyway (the web engine itself lives in those Qt packages, which is how the number below stays so small). Let me show you:

~ $ pacman -Qi firefox | grep 'Installed Size'
Installed Size  : 205.47 MiB
~ $ pacman -Qi chromium | grep 'Installed Size'
Installed Size  : 201.36 MiB
~ $ pacman -Qi qutebrowser | grep 'Installed Size'
Installed Size  : 7.58 MiB

That is such a stark difference I wouldn't blame you if you didn't believe me at first. And despite its small size, if you want JavaScript, WebRTC, and whatever else supported, it's there.

Configurability

Another great point of Qutebrowser, whether you're some kind of 'ricer' or just extremely particular, is its configurability - there are almost definitely more options to configure the look, feel, and keyboard interaction with Qutebrowser than you could ever possibly need. There are a variety of places it can be set.

'ss'

Type 'ss' and you will immediately open the :set command at the prompt, and you'll be given a well thought-out and organised alphabetical list of everything you can change within Qutebrowser. Start typing, and you can narrow it down into sections and further subsections. When you've tab-completed or moved to an option, e.g. "colors.completion.fg", it will bring up a prompt to enter a new value, including listing its default value to switch back to. The best part about this is that everything changes instantaneously at runtime, and you can see the effects of your changes immediately.

qute://settings/

This is a GUI menu, in plain HTML, of all of the settings listed above. I consider it inferior to the previous method, but it's there if you want it, or just to get a feel for how huge the list of options is.

autoconfig.yml and config.py

Somewhere on your machine, Qutebrowser keeps a configuration file controlled by the GUI setting editor. This is called autoconfig.yml, and for me it can be found under .config/qutebrowser/ in my $HOME. Additionally, you can override these settings by creating a file in that directory called 'config.py'. Importantly, make sure to call config.load_autoconfig() on its first line; otherwise Qutebrowser will not load the settings stored in autoconfig.yml, and config.py will simply replace them all.

The settings you change in the GUI are already permanent thanks to autoconfig.yml - so that isn't really the purpose of config.py. There is something it is incredibly useful for, however - scripting. The contents of config.py can be controlled by various scripts, to achieve different functionalities. And by running :config-source, you can see your changes take effect immediately.

To give some examples, I have two main scripts handling config.py for me. One sets 'browser modes'. Say for example, I want to have my browser completely locked down - proxy over Tor, no JS, no WebRTC, no nothing. I can do that immediately, by the script writing to config.py, and sourcing config.py. And the same script can go and revert all those changes again, instantly. Much less arduous than writing out dozens of lines for different settings. And for the ricers, I have another script handling colour schemes - it can read from .Xresources, and change the colours of various GUI elements to match, as just one example.
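To give a flavour of the idea (this is not my actual script, and the settings chosen and config path are assumptions), a mode switcher can be as small as rewriting a few lines of config.py, then running :config-source inside the browser:

```shell
#!/bin/bash
# Toggle a hypothetical "locked-down" browser mode by rewriting part of
# config.py. CONFIG defaults to the usual location but can be overridden.
CONFIG="${CONFIG:-$HOME/.config/qutebrowser/config.py}"

write_mode() {
    cat > "$CONFIG" <<EOF
config.load_autoconfig()
c.content.javascript.enabled = $1
c.content.proxy = "$2"
EOF
}

case "${1:-}" in
    lockdown) write_mode False "socks://localhost:9050/" ;;  # JS off, via Tor
    normal)   write_mode True  "system" ;;                   # back to defaults
esac
# Then run :config-source in qutebrowser to apply the change immediately.
```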

Qutebrowser also sets a variety of helpful environment variables for use in userscripts (generally found in $HOME/.local/share/qutebrowser/userscripts, and made executable), such as writing commands straight to Qutebrowser's FIFO (the completion prompt), parsing the HTML page, the current URL, the URL selected via 'hints', and so on.
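A trivial userscript sketch to show the mechanism: qutebrowser exports QUTE_FIFO (a pipe that accepts commands) and QUTE_URL (the current page) into the userscript's environment, so driving the browser from a script is one line of shell. The '/feed' path here is a hypothetical example.

```shell
#!/bin/sh
# Write a single command back to qutebrowser's FIFO.
send_cmd() {
    echo "$1" >> "$QUTE_FIFO"
}

# In a userscript you might, say, open a hypothetical feed path of the
# current site in a new tab:
#   send_cmd "open -t ${QUTE_URL%/}/feed"
```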

Long story short - the thing I appreciate most about Qutebrowser isn't the Vim keybindings - it's scripting.

Privacy - Pros and Cons

Because Qutebrowser isn't a direct copy of one of the "big two" - Chrome and Firefox - it lacks an add-on system. It also doesn't have the graphical components needed to display menu-based add-ons. For the many people who use a variety of privacy-enhancing add-ons, this is a downgrade and an inconvenience. Although you can implement some of the same functions via userscripts, it's going to be less convenient. It's something to be aware of going into this browser. Now, back to the positives.

Ad-blocking

You may be under the impression that without add-ons, Qutebrowser can't block ads. In fact, ad-blocking is inbuilt, regularly updated, and uses an /etc/hosts-style format, which runs much faster than a standard bloated ad-blocker. I have not once seen an ad while using this browser. Just run :adblock-update.

Selectively disabling JavaScript

JS can be handled on a host-by-host basis, thanks to config.py. If you would like to keep it off as a general default, but need it in a few select places, you can add URLs to your config.py like so:

config.set('content.javascript.enabled', True, '*://jisho.org/*')
Be sure to turn off JavaScript on my website ;3c (I have none).

Handling WebRTC

By default, WebRTC works the same as it does in any browser. If you'd rather you didn't leak your IP address over Tor or a VPN though, completely neutering it is a fairly straightforward process.

The below can be set in the GUI (via 'qt.args' and 'content.webrtc...'), or alternatively in config.py like so:

c.qt.args = ["force-webrtc-ip-handling-policy=disable_non_proxied_udp"]
c.content.webrtc_ip_handling_policy = 'disable-non-proxied-udp'

Proxying over Tor via SOCKS

I mentioned earlier that I use Qutebrowser over Tor; this is also a surprisingly straightforward process, built into the browser. All you need to do is :set content.proxy to socks://localhost:9050/ (Tor's default SOCKS port).

Integrating hints with scripting

As I've mentioned both hints and scripting, another useful thing to note is that the two can be integrated. The most famous example is 'hint links spawn mpv --ytdl {hint-url}' - this enters 'follow hint' mode, and spawns mpv with the URL you choose (the only real way to watch videos). Likewise, you can simply use 'spawn mpv --ytdl {url}' to spawn mpv on the current page without entering follow mode. This goes far beyond mpv, however - via the bindings.commands setting, you can bind keys that pass URLs to your own scripts in your $PATH.

Perhaps you want to keep a list of certain URLs, and add to it conveniently without copying URLs back and forth to a text file. Or you're on an onion mirror (as I often am) and would like to paste a link to someone not using Tor: pass it to a script, format the URL to its clearnet mirror, and xclip the output. For a great range of use cases, you can integrate following hints with your own scripts, with only a few key presses, without ever leaving Qutebrowser.
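The onion-to-clearnet case, for instance, boils down to another host swap plus a clipboard pipe. A hedged sketch - both hostnames below are placeholders, and the xclip step is optional:

```shell
# Rewrite a (placeholder) onion-mirror URL to its clearnet equivalent.
clearnet() {
    echo "$1" | sed 's|http://[a-z2-7]*\.onion|https://example.com|'
}

clearnet "http://exampleonionaddr.onion/some/page"
# prints: https://example.com/some/page

# Bound to a hint, the script body might be:
#   clearnet "$1" | xclip -selection clipboard
```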

And done

Another long rambling article. In short: I like Qutebrowser because it's highly configurable, and highly scriptable. Vim keybindings are just a bonus. For a better reference of information, go to the source, here. Hope you enjoyed whatever this was.

2020.09.02


A Public Git Server People can Clone From

But not necessarily edit - without your permission.

Starting set-up

Obviously, you need to have Git installed, and you need SSH access to your server. I'll let you work that out.

If you already have a domain with web content on it, you'll want to give your Git server a separate domain and root directory - see your web server software's documentation. Subdomains, e.g. git.*.*, are generally free.

I'm going to make an effort not to repeat the words of other people and other articles, because it's just filler. For setting up a basic Git server for your own use, refer here.

You basically need three things at this point: your personal, public SSH key; a repository on your machine; and a "bare" repository directory on the remote server. Run ssh-keygen if you don't already have a key - it's fine (I prefer) to leave the passphrase blank.
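Sketched out, with placeholder names (git.example.com, /srv/git, myproject) standing in for your own:

```shell
# 1. A key pair, if you don't already have one (accept the defaults):
#    ssh-keygen -t ed25519

# 2. A bare repository on the server (simulated in a temp dir here,
#    standing in for e.g. /srv/git):
server_repos=$(mktemp -d)
git init --bare "$server_repos/myproject.git"

# 3. On your own machine, point your repository at it and push:
#    git remote add origin git@git.example.com:/srv/git/myproject.git
#    git push -u origin master
```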

You may like to add a directory called "git-shell-commands" to the home directory of your Git user; this allows for a Git prompt when logging in as the Git user.

If you haven't done so already (this is a system-wide procedure), you may also want to disable password SSH logins, and rely purely on your key. See "/etc/ssh/ssh_config" and "/etc/ssh/sshd_config", or your system's equivalent.
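The relevant directives, server-side in sshd_config (restart sshd afterwards, and only after confirming your key works):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin prohibit-password
```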

Having Completed the Above

You should be able to push and pull to your repository with your SSH key. You can add collaborators to "authorized_keys" also.

However this isn't particularly useful for a public Git server - in order to clone from your repository, users would need to have an authorized SSH key.

SSH by design only allows for authenticated logins. So, you need to depend on HTTP.

A Git Server with HTTP Read (but not Write) Access

First of all, just make sure you have an open HTTP (80) or HTTPS (443) port for your web server. (The Git daemon set up below additionally speaks its own git:// protocol, which listens on port 9418.)

Two things are needed:

Firstly, you'll need to set up the Git daemon, to allow exporting of given repositories. This is done on a per-repository basis.

It's simple enough to just run the command and go. My Git repositories are stored under "/usr/share/nginx/git", which is reflected here.

git daemon --reuseaddr --base-path=/usr/share/nginx/git/ /usr/share/nginx/git/

But it is far more sensible to include it as part of an init script, with whatever init system you happen to be using. Systemd shown below:

[Unit]
Description=Start Git Daemon

[Service]
ExecStart=/usr/bin/git daemon --reuseaddr --base-path=/usr/share/nginx/git/ /usr/share/nginx/git/
Restart=always
RestartSec=500ms
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=git-daemon
User=git
Group=git

[Install]
WantedBy=multi-user.target

Adjust as you need, move to the relevant location, and enable it.

With the daemon enabled, repositories to be shared over HTTP should have an empty file called "git-daemon-export-ok" in the repository directory. This takes effect instantly. However, it won't work without the second part:

Secondly: Hooks

On your server-side repository, you will find a directory called "hooks". The hook "post-update" must be enabled - this is as simple as moving "hooks/post-update.sample" to "hooks/post-update".

For this to take effect, run "git update-server-info".

You should now have a working HTTP Git server. You can run, for example, "git clone https://git.concealed.world/website", and get my "website" repository. Full paths are not needed - the repository is taken from the root of the web server. No one has write access, nor can they SSH into your server without the correct authentication.

How is anyone going to see or find this, though?

That's where stagit comes in, a web front-end for Git. Clone and compile it on your server.

Repository pages are created by entering each repository directory and running "stagit ./". A listing of all repositories is created by running "stagit-index dir1/ dir2/ dir3/ > index.html" once, naming every repository directory. To get your web front-end to reflect the most recent changes, you'll need to run stagit again.

Stagit formats the pages with 4 files: "style.css", "logo.png", "favicon.png", and "url".

"url" contains the URL to be displayed as the "git clone" link; the rest are fairly self-explanatory.

You can set these to different values for every repository directory - but you likely want them to be consistent. Rather than maintain many copies of them, you can simply symlink them into each repository with "ln -s ../style.css", etc.

That's way too much to maintain for a single 'git push'

You are absolutely right.

Which is why I have a script that does all of the above for me automatically (did you guess?). Heavy use of "find", an underappreciated GNU program.

#!/bin/bash
# Regenerate stagit pages and ensure every repository is exportable.
[[ "$EUID" -ne 0 ]] && echo "This script must be run as root." && exit 1
cd /usr/share/nginx/git/ || exit 1

# Rebuild the repository listing (index.html).
sudo -u git bash -c 'stagit-index $(find -maxdepth 1 -type d | grep /) > index.html'

# Visit each repository, create any missing files, then regenerate its pages.
sudo -u git bash -c 'for dir in $(find -maxdepth 1 -type d | grep /); do
    cd "${dir}"
    find style.css; [[ $? -eq 1 ]] && ln -s ../style.css
    find favicon.png; [[ $? -eq 1 ]] && ln -s ../favicon.png
    find logo.png; [[ $? -eq 1 ]] && ln -s ../logo.png
    find url; [[ $? -eq 1 ]] && formatdir=$(echo ${dir} | cut -d / -f 2) && echo "git://git.concealed.world/${formatdir}" > url
    find git-daemon-export-ok; [[ $? -eq 1 ]] && touch git-daemon-export-ok
    find hooks/post-update; [[ $? -eq 1 ]] && mv hooks/post-update.sample hooks/post-update && git update-server-info
    stagit ./
    cd ..
done'
cd

It checks that it is running as root, then moves into the correct directory (more on this later).

It then finds every subdirectory of the current directory. All of them are passed to "stagit-index", whose output becomes "index.html" (git.concealed.world's index file).

Next, it enters each directory one by one and checks whether "style.css", "favicon.png", "logo.png", "url", "git-daemon-export-ok" and "hooks/post-update" exist, creating or symlinking any that are missing, before regenerating the pages with stagit.

The above assumes all repositories are desired to be public.

You can use hooks to detect a 'git push', or just run the above script periodically using cron. Because a script operates relative to the directory it runs in, I put the "cd" at the start and end for convenience. I'm using cron, as my server isn't very high-maintenance.

That should be about it. Following this, you have a web server serving Git over HTTP/HTTPS which people can readily browse through a web interface and clone from, but only you can SSH in and push changes. So, anyway. Enjoy.

2020.07.24


Scripting Essentials

I thought, before I throw up any more of my hacked-together scripts across the front page of this site, I should talk a bit more about scripting in general - referring in particular to shell scripting on GNU/Linux systems.

Why would anyone care?

At its simplest, scripting lets you get far more out of the tools you already have, combining them into larger meta-tools without writing anything out all over again. Since nearly everything on *nix is a text file, and most tasks can be achieved by editing text and text streams, it's fantastic for creating incredibly specific, precise tools suited to exactly what you need, while taking out the manual element. There is no need to write anything twice.

For example, if you...

I can think of ways in which all of the above can be relatively easily achieved. These are just very specific examples, to illustrate that the key here is to achieve very specific things with the minimal amount of effort.

I also think it's a pretty strong selling point for the GNU toolkit and those like it on Unix-like systems, although to be fair such things may be provided in similar depth in other operating systems - I've not checked. I'd love for you to prove me wrong.

With this post, I aim to give a reasonably comprehensive shorthand reference to scripting on Unix-like systems, for anyone with a rough idea of something they'd like to script. It's not at all complete, but it should be a good start: something you can read through relatively quickly that gives you an idea of what options are available, and where a good starting point to learn would be.

Some helpful concepts

stdin - Input from the terminal/console, or input piped from another program.

stdout - The result output from the program, either to the display (shown as text on screen), or piped into the next program.

variables - Variables can be used in the same way as any other language, to refer to a value. Typically this is altered over time. Below shows the syntax of a variable being saved, and output to the screen:

var="Output"
echo $var

piping - As above, this takes the output of one program and feeds it in as the input of another. This is done with the "|" character. See:

echo "Hello world" | awk '{print $2}'

This will print the second word of "Hello world", as echo pipes its output ("Hello world") to awk as awk's input, and awk outputs the second word of its input to the screen - awk is the last program in the chain.

Other characters to chain together programs:

"&&" - run the next command only if the previous one succeeded.
"||" - run the next command only if the previous one failed.
";" - run the next command regardless of what came before.

The above can be used to create long chains of programs, with a planned output on failure. For example:

echo Return this && echo and this && echo and this && echo and this || echo If any in the chain fail, return this.

Beyond a certain point of complexity, you may prefer to use fully fledged if-elif-else statements. For example:

if [[ -z $(ls ~/Documents/) ]]; then
	echo "Your documents are empty."
fi

fi ends if statements; likewise, done ends loops, and esac ends case statements. Case statements and for/while/until loops exist here much as they do in other languages.
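A minimal sketch of both at once - a for loop wrapping a case statement (not from any of this site's scripts, purely an illustration):

```shell
# Classify each word in turn; "esac" closes the case, "done" closes the loop.
for item in apple 42 banana; do
    case "$item" in
        [0-9]*) echo "$item looks like a number" ;;
        *)      echo "$item looks like a word" ;;
    esac
done
```

This prints "apple looks like a word", "42 looks like a number", then "banana looks like a word".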

However, the if statement above can also be shortened, using the "&&" from earlier:

[[ -z $(ls ~/Documents/) ]] && echo "Your documents are empty."

You can also see my use of "-z". "-z" here tests for an empty string, and likewise "-n" tests for a non-empty one. "-lt" means less than, "-ge" means greater than or equal to, etc. Running "man test" on your Unix-like system will give you a longer list of the conditional expressions available to you, which I won't detail further here. Depending on what shell you are using (if you don't know, likely bash), running "man bash" may give you even more depth.
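A few of these expressions in action (bash syntax - the values are arbitrary examples, not from anything above):

```shell
n=5
[[ -z "" ]]      && echo "empty string"       # -z: true for an empty string
[[ -n "hello" ]] && echo "non-empty string"   # -n: true for a non-empty string
[[ $n -lt 10 ]]  && echo "$n is less than 10"
[[ $n -ge 5 ]]   && echo "$n is at least 5"
```

All four conditions hold, so all four lines print.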

And now, the actual tools

This intends to be a list of useful programs to understand, or at least be aware of, on a GNU system. Awareness of the tools at your disposal lends itself to coming up with more ideas of how to use them in conjunction with each other. I suggest these because they tend to make up 90% of anything I happen to be using in any script I write.

There are many more programs you could use in your shell script - anything you can run in a terminal likely counts (which is part of why anyone sensible will tell you terminal programs are, in almost all cases, more powerful - elitism factor aside), although the above will likely account for a large bulk of it.

Some honourable mentions

cron - cron jobs are used to automate the running of programs, and I find myself using them often. Their format allows you to run scripts at given minutes and hours of the day, days of the week, months of the year, or at any specific minute in time, possibly weeks or months away. By creating a script, you now have a very specific, complex program - which you can run at whatever times you choose. The amount of power this offers should be pretty plain to see. cron can even be set up to mail you if an error occurs - very useful if an important job on your server doesn't go through.
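For illustration, here is a crontab entry (added via "crontab -e") that would run a script every hour, on the hour - the script name and path are hypothetical, so adjust to wherever yours lives:

```
# minute  hour  day-of-month  month  day-of-week  command
0         *     *             *      *            /usr/local/bin/update-stagit.sh
```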

mutt - mutt is a terminal-based e-mail client with an ncurses TUI for browsing mailboxes (and it offers a lot in terms of writing macros, which is great). It's important because, after the rigorous set-up, it also allows you to send mail directly from the command line without needing to open the client. You still want to automate those death threats, right?

In conclusion

If you have some dumb manual task you're spending half an hour clicking through, where a robot could do it, go script it now. Save some hours of your life.

Getting good with shell scripting, as with anything else, generally arises from practise and necessity. Once you've made something yourself, this post will hopefully seem more coherent. Hope you enjoyed whatever this was.

2020.07.1X



Send me something interesting: nixx@firemail.cc