  • You could use aliases in your .bashrc for git (and a bare repo). That would let you manage your $HOME and /etc directly with git without using symlinks; the only downside is having them separated into two aliases and two repos.

    # user config repo
    alias dotfiles='git --git-dir=$HOME/.dotfiles --work-tree=$HOME'
    
    # system config repo
    alias etcfiles='sudo git --git-dir=$HOME/.etcfiles --work-tree=/etc'
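
    If the repos don’t exist yet, you would first create them; a minimal sketch, matching the names used in the aliases above:

    # create the bare repos that the aliases point to
    git init --bare $HOME/.dotfiles
    sudo git init --bare $HOME/.etcfiles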
    

    It is also recommended that you run:

    <alias> config --local status.showUntrackedFiles no
    

    in the terminal for both the dotfiles and etcfiles aliases (you can pick whatever alias and git-dir names you want).

    The aliases give you a custom-named folder instead of .git, located in a custom path, and you can manage the repos without symlinks because you use git directly on each file’s original location. This solves your issue with the other solutions that depend on symlinks.
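
    As an example of day-to-day use, it is just plain git behind the alias:

    # track and commit files in place, no symlinks involved
    dotfiles status
    dotfiles add ~/.bashrc
    dotfiles commit -m "update bashrc"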

    Note: you could technically use the root directory --work-tree=/ as the work tree so only one command is needed, but it is not recommended to give git the ability to rewrite any file on the entire file system.

    Some reference links:

    Text

    Video


  • The TL;DR version of sharing with no license is that, technically speaking, you are not explicitly permitting others to use your code in any way, just allowing them to look. A license is a formal way to give others permission to copy, modify, or use your code.

    You don’t need an extra file for the license; you can embed it in a section at the top of your file, as you did with the description. Just add a # License section at the very top. If you want the most permissive license you can use MIT; you only need to fill in the year of publication of the code, and you can use a pseudonym/username like ‘hereforawhile@lemmy.ml’ if you don’t want to use something that could identify you, like an email, a username from another site, or your real name, in case that is a concern.
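
    As a sketch, the top of the file could then look something like this (the year is a placeholder, and the rest of the standard MIT text follows the first line shown):

    # License
    #
    # MIT License
    # Copyright (c) <year> hereforawhile@lemmy.ml
    #
    # Permission is hereby granted, free of charge, to any person obtaining a
    # copy of this software... (rest of the standard MIT license text)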


  • On your first part, clarifying your intent: I think you are overcomplicating things by expecting traffic to reach the server via domain name (passing through the proxy) from Router A’s network but via IP:Port from Router B’s network. You can access everything, from anywhere, through domains and subdomains, and avoid using numbers.

    If you can’t set up a DNS directly on Router A, you can set it per device that should access the server through Router B’s port forwarding: set the laptop to use itself as primary DNS and an external one as secondary, and have any other device you want on that LAN do the same (laptop as primary). It is a bit more tedious to do it per device, but still possible.

    Wouldn’t this link to the 192.168.0.y address of router B pass through router A, and loop back to router B, routing through the slower cable? Or is the router smart enough to realize he’s just talking to itself and just cut out router A from the traffic?

    No, the request would stop at Router B and keep all traffic on the 10.0.0.* network; it would not change subnets or anything.

    In other words, any device on 10.0.0.* will do a DNS request, asking the router where the DNS server is; the DNS query itself is then sent directly to the server on port 53. Once the DNS response is received, the device queries the server again, via domain, but on port 80|443, and then receives the HTTP/HTTPS response.
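
    You can watch those two steps separately from any device; as a sketch (using the example domain from this thread, with dig and curl installed):

    # step 1: the DNS lookup, answered by the local DNS server on port 53
    dig app1.home.internal
    # step 2: the actual request, answered by the reverse proxy on port 443
    curl https://app1.home.internal/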

    Remember that all my advice so far is so you don’t use any IP or port anywhere and your experience is seamless on any device, using domains and subdomains. The only place where you would need to put IPs or ports is on the reverse proxy itself, to tell anything reaching it where each specific app/service is; the apps need to run on different ports, but they are all reached through the reverse proxy on the defaults 80 or 443, so you don’t have to put numbers anywhere else.


  • If you decide on running the secondary local DNS on the server on Router B’s network, there is no need to loop back, as that DNS will keep domain lookups and the requests on 10.0.0.x entirely internal to Router B’s network.

    On Router B you would then set the server IP as the primary DNS, and an external one like Cloudflare or Google as secondary.

    You can still add rules on the reverse proxy based on whether the origin IP is from 192.168.0.* or 10.0.0.*, if you see the need to differentiate traffic, but I don’t think that is necessary.


  • Do yourself a favor and use the default ports for HTTP (80), HTTPS (443), and DNS (53). You are not port forwarding to the internet, so there should be no issues.

    That way, you can do URLs like https://app1.home.internal/ and https://app2.home.internal/ without having to add ports on anything outside the reverse proxy.

    From what you have described your hardware is connected something like this:

    Internet -> Router A (192.168.0.1) -> Laptop (192.168.0.x), Router B (192.168.0.y / 10.0.0.1) -> [ Desktop Server (10.0.0.114) ]

    You could run only one DNS on the laptop (or another device) connected to Router A and point the domain to Router B: redirect, for example, the domain home.internal (I recommend <something>.internal, as it is the TLD intended for this use by convention) to the 192.168.0.y IP, and it will send all devices to the server through port forwarding.
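
    As a sketch, if that single DNS were dnsmasq (one of the resolvers mentioned in this thread), the redirect is two lines, with 192.168.0.y standing in for Router B’s real address:

    # /etc/dnsmasq.conf on the laptop
    # answer home.internal and all its subdomains with Router B's IP
    address=/home.internal/192.168.0.y
    # forward everything else to an external resolver
    server=1.1.1.1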

    If Router B has port forwarding of ports 80 and 443 to the server 10.0.0.114, all the requests are going to get through, no matter which LAN they come from. The devices connected to Router A will reach the server thanks to the port forwarding, and the devices on Router B can reach anything connected to Router A’s 192.168.0.* network; they make an extra hop but still get there.

    Both routers would have to point the primary DNS to the Laptop IP 192.168.0.x (should be a static IP), and secondary to either Cloudflare 1.1.1.1 or Google 8.8.8.8.

    That setup would be dependent on having the laptop (or another device) always turned ON and connected to Router A network to have that DNS working.

    You could run a second DNS on the server for only the 10.0.0.* LAN, but that one would not be reachable from Router A, the laptop, or any other device on that outer LAN, only from devices directly connected to Router B. The only change would be setting the primary DNS on Router B to the server IP 10.0.0.114, so that secondary local DNS is used as primary.

    That is a lot of information, so be sure to read slowly and separate the steps to handle them one by one, but this should be the final setup, considering the information you have given.

    You should be able to set up the certificates and the reverse proxy using subdomains without much trouble, only using IP:PORT on the reverse proxy itself.
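
    For example, assuming Caddy as the reverse proxy (an assumption, any proxy works the same way; the app ports here are placeholders), the subdomain-to-port mapping lives in one place:

    # Caddyfile on the server
    app1.home.internal {
        tls internal                 # .internal names need a local CA, not a public cert
        reverse_proxy localhost:8080 # placeholder port for app1
    }
    app2.home.internal {
        tls internal
        reverse_proxy localhost:9090 # placeholder port for app2
    }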


  • Most routers, or devices, let you set up at least a primary and a secondary DNS resolver (some let you add more), so you could have your local one as primary and an external one like Google or Cloudflare as secondary. That way, if your local DNS resolver is down, queries go directly to the external one and still resolve.

    Still. Thanks for the tips. I’ll update the post with the solution once I figure it out.

    You are welcome.


  • It should not be an issue to have everything internal: you can set up a local DNS resolver and configure the device that handles your DHCP (router or other) to hand that resolver out as the default/primary DNS for every device on your network.

    To give you some options if you want to investigate, there are: dnsmasq, Technitium, Pi-hole, AdGuard Home. They can resolve external DNS queries and also do domain rewrite/redirection to handle your internal-only domain and redirect it to the device with your reverse proxy.

    That way, you can have a local domain like domain.lan or domain.internal that only works and is managed on your internal network, and you can use subdomains as well.

    I’m sorry if I’m not making sense. It’s the first time I’m working with webservers. And I genuinely have no idea of what I’m doing. Hell. The whole project has basically been a baptism by fire, since it’s my first proper server.

    Don’t worry, we all started out much the same and gradually learned more and more. If you have any questions, a place like this is exactly for that: just ask.


  • Not all services/apps work well with subdirectories through a reverse proxy.

    Some services/apps have a config option to add a prefix to all paths on their side to help with this; others don’t have any such config and always expect the paths after the domain to be unchanged.

    But if you need to do some kind of path rewrite on the reverse proxy side alone, adding or changing a segment of the path, there can be issues whenever a path change doesn’t go through the proxy.

    In your case, Transmission internally doesn’t know about the subdirectory, so even if you can get to the index/login on your first page load, as soon as the app itself changes paths it redirects you to a path without the subdirectory.

    Another example of this is PWAs: when you click a link that should change the path, they don’t reload the page (the action that would force a load through the reverse proxy and thereby trigger the rewrite); instead, they use JavaScript to rewrite the path text locally and do DOM manipulation without triggering a page load.

    To be honest, the best way out of this headache is to use subdomains instead of subdirectories. That is the standard these days, precisely to avoid path-rewrite magic that doesn’t work in a bunch of situations.

    Yes, it can be annoying to handle SSL certificates if you don’t want to, or can’t, issue wildcard certificates; but if you can get a cert covering both maindomain.tld and *.maindomain.tld, then you never need to touch it again and can use the same certificate for any service/app you might want to host behind the reverse proxy.
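
    As a sketch, with certbot (an assumption, any ACME client works) a wildcard cert has to be issued through a DNS challenge:

    certbot certonly --manual --preferred-challenges dns \
      -d maindomain.tld -d '*.maindomain.tld'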



  • The simplest (really the simplest) would be to do a git init --bare in a directory on one machine. That way you can clone, push, or pull from it, using the directory path as the URL from the same machine and ssh from the other (you could put this bare repo inside a container, but that would really be complicating it). You would have to init a new bare repo per project, each in its own directory.
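
    A minimal sketch of that, with hypothetical paths and names:

    # on the machine that will hold the repos
    git init --bare ~/repos/myproject.git

    # clone/push/pull on the same machine, directory path as the URL
    git clone ~/repos/myproject.git

    # clone/push/pull from another machine over ssh
    git clone user@machine:repos/myproject.git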

    If by a self-hosted server you mean something with a web UI to handle multiple repositories, with pull requests, issues, etc., like your own local GitHub/GitLab, the answer is Forgejo (this link has the instructions to deploy with Docker). If you want to see what that looks like, there is an online public instance called Codeberg, where the Forgejo code itself is hosted alongside other projects.


  • But I think I’m understanding a bit! I need to literally create a file named “/etc/radicale/config”.

    Yes, you will need to create that config file in one of those paths so you can then continue with the configuration steps in the documentation; you can do the Addresses step first.

    A second file for the users is needed as well; I would guess the best location for it would be /etc/radicale/users.

    For the Authentication part, you will need to install the apache2-utils package with sudo apt-get install apache2-utils to get the htpasswd command for adding users.

    So the command to add the first user would be htpasswd -5 -c /etc/radicale/users user1, replacing user1 with your username.
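
    For any additional user after the first, drop the -c flag so the existing file is appended to instead of recreated (user2 here is a placeholder):

    htpasswd -5 /etc/radicale/users user2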

    And what you need to add to the config file for it to read your user file would be:

    [auth]
    type = htpasswd
    htpasswd_filename = /etc/radicale/users
    htpasswd_encryption = autodetect
    

    Replace the path with the one where you created your users file.


  • I’m trying to follow the tutorial on the radicale website but am getting stuck in the “addresses” part.

    From reading the link you provided, you have to create a config file in one of two locations if it doesn’t already exist:

    “Radicale tries to load configuration files from /etc/radicale/config and ~/.config/radicale/config

    After that, add what the Addresses section says to the file:

    [server]
    hosts = 0.0.0.0:5232, [::]:5232
    

    And then start/restart Radicale.

    You should then be able to access it from another device using the IP of the Pi and that port.
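
    For example, opening this in a browser on another device (the Pi’s address is whatever it has on your LAN):

    http://<pi-address>:5232/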


  • Yeah, I started the same way, hosting LAN parties with Minecraft and Counter-Strike 1.6 servers on my own Windows machine at the time.

    But what happens when you want to install some app/service that doesn’t have a native binary installer for your OS? You will not only have to learn how to configure/manage said app/service, you will also need to learn one or more additional layers.

    I could have said “simple bare-metal OS and a binary installer” and to some people it would sound alien, while others would be just as nitpicky about it as they are with me saying Docker (not seeing that the terminology I used was not aimed at a newbie but at them). If the apps you want to self-host are offered through things like YunoHost or CasaOS, that’s great, and there are apps/services that can be installed directly on your OS without much trouble, which is also great. But there are cases where you will need to learn something extra (and for me that extra was Docker).


  • XKCD 2501 applies in this thread.

    I agree; there are so many layers of complexity in self-hosting that most of us tend to forget them, when the most basic thing would be a simple bare-metal OS and Docker.

    you’ll probably want to upgrade the ram soon

    His hardware has a max RAM limit of 4 GB, so the only probable upgrade he could do is a SATA SSD. Even so, I’m running around 15 Docker containers on similar specs, so as a starting point it is totally fine.


  • I get your point, and I know it has its merits. I would actually recommend Proxmox for a later stage, when you are familiar with handling the basics of a server, and if you have hardware that can properly handle virtualization. For OP, who has a fairly old, low-spec machine and is also a newbie, I think fewer layers of complexity make a better starting point, so they don’t get overwhelmed and just quit; then in the future they can build on top of that.