• 0 Posts
  • 14 Comments
Joined 2 years ago
Cake day: September 10th, 2023


  • Well then very little of what I said actually applies!

    Unless you know the hours on a drive, you might get brand new ones, or you might get ones with 50k hours on them. They may also be from the same batch, which isn’t ideal for data durability. If you’re ok with all that, then go for it. I generally don’t buy used drives because I don’t want to take the additional risk.
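
    If you do end up buying used drives and want to know what you’re actually getting, the power-on hours show up in the SMART data. Here’s a rough Python sketch that shells out to smartctl (assuming smartmontools is installed and recent enough to emit JSON; the drive paths are just placeholders):

    ```python
    import json
    import subprocess

    # Placeholder device paths - swap in whatever drives you actually have.
    DRIVES = ["/dev/sda", "/dev/sdb"]

    for drive in DRIVES:
        # "-A" prints the SMART attributes, "-j" asks smartctl for JSON output.
        result = subprocess.run(
            ["smartctl", "-A", "-j", drive],
            capture_output=True, text=True,
        )
        data = json.loads(result.stdout)
        hours = data.get("power_on_time", {}).get("hours", "unknown")
        print(f"{drive}: {hours} power-on hours")
    ```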

    I’d be surprised if you can’t find a better deal on used spinning rust though… the shipping alone is probably half the value on a good chunk of sales from SmS.


  • I get that; that was also something I used to like about old servers. But let me float a few things I’ve come to realize over my home-lab career:

    • RAID is perfectly feasible on consumer hardware. If your motherboard doesn’t have enough SATA ports, you can always add an HBA or a JBOD enclosure to support more disks. There’s really no good reason (that I’ve heard of) for hardware RAID today. Just remember RAID is not a backup :)
    • There are consumer ATX PSUs with redundancy. However, the only reason for PSU redundancy is when you cannot tolerate downtime due to a PSU or UPS failure, and that redundancy might save you a few hours of uptime over 10+ years compared to a non-redundant consumer PSU that you can go out and buy if it fails. When was the last time you had a (reputable) PSU fail on you? What kind of uptime are you targeting? If you don’t have an answer for that, 99% is very easy to reach even on consumer gear, and is a strong indicator that you don’t need enterprise levels of redundancy. 99% still allows roughly 3.7 days of downtime per year (see the sketch after this list). Also keep in mind that redundant PSUs are just going to gobble more power and increase operating costs.
    • KVM features - this was the big one for me. I wanted to be able to perform out-of-band remote maintenance on my servers. Then I took a leap and got a Sipeed NanoKVM, and I haven’t looked back. There are plenty of them out there - PiKVM is another reputable one. When buying old enterprise servers, you often have to pay for the remote management license, and that is just another added cost. Not to mention that they lose support pretty quickly, and you end up running out-of-date software on one of your most critical interfaces to the machine. A NanoKVM, PiKVM, and others aren’t built into the machine, so they continue to be supported for much longer.
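
    On the uptime point, the downtime budget math is worth writing out. A quick sketch of what different availability targets actually allow per year:

    ```python
    # Allowed downtime per year for a given availability target.
    HOURS_PER_YEAR = 24 * 365

    for target in (0.99, 0.999, 0.9999):
        downtime_hours = HOURS_PER_YEAR * (1 - target)
        print(f"{target:.2%} uptime -> {downtime_hours:.1f} hours "
              f"({downtime_hours / 24:.1f} days) of downtime per year")
    ```

    99% comes out to about 3.7 days a year, which is a lot of slack for swapping a failed PSU.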

    One other thing that I’ll mention, and you probably already know - enterprise servers are LOUD - even just a single one can literally sound like a jet engine. That’s not hyperbole. If this is your first one, don’t underestimate it. I had my servers in the basement with decent insulation, I used IPMI to throttle the fans back to 10%, and I could still hear the whine on my first floor when everything was quiet. If you end up having to turn down the fans due to noise, you’re going to start having heat issues, and then you’re losing out on performance and shortening component lifespan. Noise-proofing a server is non-trivial - you still have to allow airflow, and where there’s airflow, there’s a path for noise too. My current setups all have 120mm and 140mm fans, and I can barely hear them when I’m working right next to them. My 3D printers are the loud ones in the basement now!


  • Yeah, they’re legit. Bought a few servers from them over the years. No major issues, packing was good, reasonable ship time.

    Had one case where they sent a different NIC than what was listed. They just shipped me the correct one and told me not to bother sending the old one back.

    Stopped buying from them though because I prefer off-the-shelf modern consumer hardware nowadays. The real cost is always power consumption, and I prefer to shell out more money up front in exchange for huge savings on power usage down the line. I can always run over to Micro Center and replace a part same-day, as opposed to ordering it online and hoping it comes soon.

    If you’re a home-labber, I’d strongly suggest doing the same. Some of those old enterprise servers just gobble power for not that much compute relative to current day consumer machines.
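
    To put rough numbers on the power argument, here’s a back-of-the-envelope sketch; the wattages and electricity price are assumptions, so plug in your own:

    ```python
    # Rough yearly electricity cost comparison (all inputs are assumptions).
    PRICE_PER_KWH = 0.15  # USD per kWh - adjust for your utility

    def yearly_cost(watts: float) -> float:
        # watts -> kWh over a year -> cost
        return watts / 1000 * 24 * 365 * PRICE_PER_KWH

    old_server = yearly_cost(300)    # e.g. an older dual-socket box under light load
    consumer_box = yearly_cost(60)   # e.g. a modern consumer build

    print(f"Old server:   ${old_server:.0f}/year")
    print(f"Consumer box: ${consumer_box:.0f}/year")
    print(f"Difference:   ${old_server - consumer_box:.0f}/year")
    ```

    At those assumed numbers the gap is a few hundred dollars a year, which pays down the up-front cost of newer hardware pretty quickly.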

    If I was still buying older servers though, I’d probably be looking at their prices.

    What are you considering buying?




  • I was on an old repurposed desktop with 16 GB of RAM and an i7-6700K at the time.

    I haven’t felt that I’ve been missing any features from GitLab. I do use Woodpecker CI for runners because Forgejo Actions weren’t working for Docker builds, but I think Forgejo Actions have come a long way since I made that decision; I’ll have to try them out again one of these days.


  • I tried hosting GitLab for a while, but configuration and upgrades were difficult, and you really have to stay on top of updates due to vulnerabilities. It also used a lot of resources and wasn’t super responsive.

    I moved to Forgejo (a hard fork of Gitea) and haven’t looked back; I can’t recommend it enough. It’s fast, light on resources, actively developed, and has all the features I need.

    Codeberg is a public instance of Forgejo if you want to try it out first.


  • Regardless of whether you are using a block or an allow list, you have to maintain the list…

    I’m not sure what your point is; if you want to devote your time, effort, and potential liabilities to it, that’s up to you. I just figured I would share a perspective on why I didn’t want to do that.

    I appreciate all the hard work done by instance hosts; using individual Lemmy instances is a privilege, not a right. I would fully understand and not be upset if my home instance were to shut down at a moment’s notice.



  • I self hosted a Lemmy instance for a little while, but I stopped over concerns of malicious actors posting CSAM which would then get federated over to my server. I don’t have the appetite to deal with that, and I’m glad I shut it down because just a few weeks later there was a big instance of it happening all over Lemmy, and I’m sure I would have had to deal with cleaning it up on my server too. Just something to keep in mind.

    Otherwise though, the setup process isn’t too complex.



  • I’ve personally been quite pleased with the combination of Frigate and some Amcrest PoE cameras. Just make sure the cameras you’re getting support RTSP, and you should be able to use them with Frigate.
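
    If you want to sanity-check a camera’s RTSP stream before wiring it into Frigate, something like this works; the URL, credentials, and stream path below are placeholders, so check your camera’s documentation for the real ones:

    ```python
    import cv2  # pip install opencv-python

    # Placeholder RTSP URL - substitute your camera's address, credentials, and stream path.
    RTSP_URL = "rtsp://user:password@192.168.50.10:554/stream1"

    cap = cv2.VideoCapture(RTSP_URL)
    ok, frame = cap.read()
    cap.release()

    if ok:
        print(f"Got a frame: {frame.shape[1]}x{frame.shape[0]}")
    else:
        print("Could not read from the stream - check the URL, credentials, and firewall/VLAN rules.")
    ```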

    Also make sure you block the cameras from reaching the public internet using your firewall, and only make them reachable from your Frigate host. Personally, I use a VLAN with no internet access and enforce tagging at the switch level (i.e. don’t trust the cameras to maintain their own VLAN settings).