
Linux Server Config Advice Wanted

edited July 2011 in Technology
So I recently decided to drop the safety net of the GUI and switch my server at school from Ubuntu Desktop to Ubuntu Server 11.04. I have two extra HDDs in the machine: the first for web server content (Apache's www folder, MySQL databases, etcetera), the second as storage space. I had them both set up in fstab, mounted at /media/Web and /media/storage, respectively. The storage drive was also supposed to be shared via Samba, but I could never get it to play nice with Windows.

Now that I'm starting anew, I'd like you guys' advice on its configuration. Should I keep the drives mounted in /media/ or should I mount them elsewhere? Our resident Linux admin suggested /opt/.

Comments

  • It doesn't matter where you mount things. You can mount them wherever you want. Locations in the file system in *NIX rarely have any special meaning beyond convention. You'll see lots of people doing lots of different things. Personally, I leave /media/ for automatically mounted things like USB sticks and CD-ROMs. I use /mnt/ for permanently mounted things, like, say, the Windows partition of a dual-boot machine. /opt/ is usually just a place for stupid applications that want to install as a binary blob and treat Linux like Windows.

    It sounds to me like your /media/Web is actually /var/. I would back up your /media/Web drive, then, during the installation of Ubuntu Server, mount that drive at /var/. Then you can put your www folder in /var/www/ and your MySQL databases in /var/lib/mysql, or wherever they usually go. I'm not sure what's on your storage drive, but if it's music and shit, you might just want to put it in /home/you/Music. Or you could mount the whole drive at /home/you, or even /home/.
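
    The fstab entries for that are simple. A rough sketch (the UUIDs here are made up; get your real ones from blkid):

    # /etc/fstab -- hypothetical UUIDs, substitute your own from blkid
    UUID=aaaa-bbbb  /var   ext4  defaults  0  2
    UUID=cccc-dddd  /home  ext4  defaults  0  2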
  • edited July 2011
    Personally, I leave /media/ for automatically mounted things like USB sticks and CD-ROMs. I use /mnt/ for permanently mounted things, like, say, the Windows partition of a dual-boot machine. /opt/ is usually just a place for stupid applications that want to install as a binary blob and treat Linux like Windows.
    This is pretty true for me also. For permanent mounts, I usually create my own directories, such as /storage1/. That way I can install an operating system over the file system without somehow overwriting my mount points (because you never really know). Permissions are usually much more important than locations.

    Something I've found with the Ubuntu Server version is that the firewall isn't really set up out of the box, and most people don't know how to configure iptables from the console. I always lock down my internet-facing systems really tightly.

    If you'd like, I can grab an example iptables file and show how to load it up on boot. If you have an internet-facing server, firewalls are mucho importante. The Chinese and Eastern Bloc have hit every server I've ever put up. They just brute-force IP addies and do scans. I like to check my logs and find the locations of IP addresses that hit my systems in suspicious ways.
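
    For reference, the usual trick on Ubuntu goes roughly like this (a sketch; adjust paths and interface stanzas to your setup):

    # dump your working ruleset to a file once:
    sudo sh -c 'iptables-save > /etc/iptables.rules'
    # then in /etc/network/interfaces, under your interface:
    pre-up iptables-restore < /etc/iptables.rules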

    Also, I disable all the repositories except the security repos and the basic one (main). I will only install something from the universe repo after doing research. Forcing myself to enable universe to grab a package basically ensures I don't get lazy about checking up on packages before installing them.
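
    In /etc/apt/sources.list that works out to something like this (a sketch for 11.04 "natty"; I leave universe commented out until I actually need it):

    deb http://archive.ubuntu.com/ubuntu natty main
    deb http://security.ubuntu.com/ubuntu natty-security main
    # deb http://archive.ubuntu.com/ubuntu natty universe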
  • If you'd like, I can grab an example iptables file and show how to load it up on boot. If you have an internet-facing server, firewalls are mucho importante. The Chinese and Eastern Bloc have hit every server I've ever put up. They just brute-force IP addies and do scans. I like to check my logs and find the locations of IP addresses that hit my systems in suspicious ways.
    I find it's often, but obviously not always, pointless to have firewalls.

    Let's say you have a standard LAMP server with SSH.

    SSH runs on port 22 by default. Plenty of bots will be trying to brute force it with passwords. You could move SSH to a different port, like 2022. Then all their requests to 22 will time out and your security log will be much less noisy. But let's say you leave it on 22. You should not have password auth enabled in the first place. Even if you did, your password should be awesome enough that a bot won't be guessing it. If your SSH is set up properly, a firewall or a port change won't make any difference. Now, you might want to add some host-based restriction on top. For example: people can only SSH into the server with the right key AND only from my house or my office. That's not a bad use of a firewall, but it's overkill for anything that's not a business. Just make sure your business has a VPN so you can still access the server when you are on vacation.
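
    (Getting SSH "set up properly" is mostly a couple of lines in /etc/ssh/sshd_config — a sketch; restart sshd after editing:)

    # /etc/ssh/sshd_config
    Port 22                      # or 2022 for a quieter log
    PasswordAuthentication no    # keys only
    PermitRootLogin no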

    Apache runs on port 80. You're going to leave it open to the world; that's the point. If you get DDoS'd, there's not much you can do. If you get DoS'd from one IP, you can block it in Apache, but iptables can do it too. It doesn't matter which you choose.

    MySQL should be running on a local socket, because this is a single-server configuration. People won't be able to access it over the network anyway. Firewalling the port makes no difference.
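
    (If you want to double-check that, the relevant bit of /etc/mysql/my.cnf looks like this — a sketch of the stock Ubuntu default:)

    [mysqld]
    bind-address = 127.0.0.1    # listen on the loopback only
    # or go all the way and turn off TCP entirely, socket only:
    # skip-networking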

    Anything else you are running, such as memcached, should be bound to localhost, so none of it will accept traffic over any network interface besides the loopback. Whether or not you firewall 11211 with iptables makes no difference. It doesn't hurt to block all the ports you aren't using, but it's not something you have to worry about.
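
    (Same idea for memcached — on Ubuntu the packaged default in /etc/memcached.conf already does this:)

    # /etc/memcached.conf
    -l 127.0.0.1    # bind to the loopback only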

    The main thing to use a firewall for is when you have multiple machines together. For example, if you have MySQL on a separate machine, you want port 3306 locked off from the entire world except for the web servers that are going to be querying that database.
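
    That kind of rule looks something like this (a sketch; 10.0.0.5 stands in for your web server's address):

    -A INPUT -p tcp --dport 3306 -s 10.0.0.5 -j ACCEPT
    -A INPUT -p tcp --dport 3306 -j DROP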

    Of course, all this assumes that you have each individual service configured properly. If you don't know how to make sure your MySQL is bound to localhost only, then blocking the port in iptables can't hurt as another layer against a configuration-file mistake. But if you're not smart enough to configure MySQL properly, you aren't going to be able to configure iptables properly either.
  • edited July 2011
    SSH runs on port 22 by default. Plenty of bots will be trying to brute force it with passwords. You could move SSH to a different port, like 2022. Then all their requests to 22 will time out and your security log will be much less noisy.
    Yeeeup. Did that.
    If your SSH is set up properly, a firewall or a port change won't make any difference. Now, you might want to add some host-based restriction on top. For example: people can only SSH into the server with the right key AND only from my house or my office. That's not a bad use of a firewall, but it's overkill for anything that's not a business.
    My personal belief is that, if it takes you less than 10 minutes, you might as well overkill your security. I definitely treat my servers like they are, as you say, "a business".
    Apache runs on port 80. You're going to leave it open to the world; that's the point. If you get DDoS'd, there's not much you can do. If you get DoS'd from one IP, you can block it in Apache, but iptables can do it too. It doesn't matter which you choose.
    iptables actually has some traffic-limiting rules to mitigate DDoS. I have, in fact, set these up to prevent DoS of my web server from any one source, which will reduce the overall effectiveness of a DDoS (but might not ultimately prevent a DDoS's success).
    -A INPUT -p tcp --dport 80 -m limit --limit 100/minute -j ACCEPT
    -A INPUT -p tcp --dport 443 -m limit --limit 100/minute -j ACCEPT
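    # note: these ACCEPT rules only limit anything because my chains default to DROP;
    # traffic over the limit falls through and hits that default policy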
    I also added the same kind of limitation on my SSH port, to reduce brute-force attempts from non-coordinated, single points of origin.
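    Roughly like so (the rate here is a made-up example; tune to taste):
    -A INPUT -p tcp --dport 22 -m limit --limit 5/minute -j ACCEPT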
    The main thing to use a firewall for is when you have multiple machines together.
    Actually, I don't trust code I didn't write. Firewalls keep poorly written code, or maliciously written code, from being too easily accessible. My firewalls tend to be two-way: things come in only as I allow them, and I write the same rules going out. If someone manages to install some script that phones out from my server to tell the world where to go, it's going to have to pick the right way to do it.
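
    In iptables terms, that's a default-drop OUTPUT chain with explicit holes punched in it. A sketch, not my literal rules:
    # default policy is DROP (:OUTPUT DROP in the rules file), then:
    -A OUTPUT -o lo -j ACCEPT
    -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    -A OUTPUT -p udp --dport 53 -j ACCEPT    # DNS lookups
    -A OUTPUT -p tcp --dport 80 -j ACCEPT    # outbound HTTP (apt and friends)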

    I find it odd that you'd argue so fervently against firewalls. Again, they don't cost much time to configure and set up, but they do add security. You can diminish that gain all you'd like, but it is still a non-zero positive.

    I also like the ability to have the firewall DROP packets. If you don't set the firewall to drop by default, it will send back a response saying the connection was refused. A DROP is a mild deterrent: it makes it hard for the remote end to tell whether there was a packet-transmission mishap or whether the far side simply isn't responding on that port. I find it amusing to grief in this way.
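    The difference in rule terms, roughly:
    # REJECT tells the scanner right away that the port is closed:
    -A INPUT -j REJECT --reject-with icmp-port-unreachable
    # DROP says nothing; the scanner has to sit through a timeout:
    -A INPUT -j DROP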
  • Coincidentally, I have to teach myself how to deal with web servers in order to manage one of my school club's websites. Can you advise me on where to go to find useful tips and/or tell me things I probably wouldn't otherwise know? I'm planning on setting up an Apache server in desktop Ubuntu to mess around with before I start setting things up in the wild.
  • Can you advise me on where to go to find useful tips and/or tell me things I probably wouldn't otherwise know?
    Well, you can always ask specific questions here. There are at least two active users with experience administering Apache servers.

    Otherwise I don't know. The Apache site itself is good for reference, but blindingly confusing if you don't know what you're looking for going into it.
  • edited July 2011
    My personal belief is that, if it takes you less than 10 minutes, you might as well overkill your security. I definitely treat my servers like they are, as you say, "a business".
    You're too paranoid. I can do a lot in ten minutes.
    iptables actually has some traffic-limiting rules to mitigate DDoS. I have, in fact, set these up to prevent DoS of my web server from any one source, which will reduce the overall effectiveness of a DDoS (but might not ultimately prevent a DDoS's success).
    I'm well aware of those iptables rate-limiting rules, but I have not implemented them because they have their problems. I see you have set your rate limit to 100/minute. Personally, that rate is too high. If someone comes in with a DDoS at 50/minute per attacker, it will definitely take you out. Such a rule will only stop a dumb attacker who is all alone. A smart attacker will figure it out very quickly and hit you at 99/minute. Can you handle 99/minute? You can lower the rate, but then you have other troubles: you start blocking legitimate users, like the Google crawler. I have never found iptables rate limiting to be an effective measure. A better way to survive a DDoS is to have a service that can handle the traffic, with aggressive caching.
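    Even the per-source variant of this (the hashlimit match) has the same tuning problem. For reference, it looks roughly like this:
    -A INPUT -p tcp --dport 80 -m hashlimit --hashlimit-upto 100/minute --hashlimit-mode srcip --hashlimit-name http -j ACCEPT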
    Actually, I don't trust code I didn't write.
    You didn't write iptables either. Now you are trusting one more piece of code you didn't write. I trust SSH more than iptables, and I trust MySQL and Apache as much as iptables. When MySQL is set to bind to 127.0.0.1, that's good enough for me.
    I find it odd that you'd argue so fervently against firewalls. Again, they don't cost much time to configure and set up, but they do add security. You can diminish that gain all you'd like, but it is still a non-zero positive.
    I'm not arguing against them. I'm just saying that they don't really help all that much for a simple single-server setup. I have iptables configured on this very server. I'm just saying it's not an absolute must, as you are suggesting.
    I also like the ability to have the firewall DROP packets. If you don't set the firewall to drop by default, it will send back a response saying the connection was refused. A DROP is a mild deterrent: it makes it hard for the remote end to tell whether there was a packet-transmission mishap or whether the far side simply isn't responding on that port. I find it amusing to grief in this way.
    I don't think it really makes a difference. A dead port is a dead port.

    Also, a firewall does use a very tiny amount of CPU that you save by not running one. There is an insignificantly small performance gain from not using a software firewall on an application server.
    Coincidentally, I have to teach myself how to deal with web servers in order to manage one of my school club's websites. Can you advise me on where to go to find useful tips and/or tell me things I probably wouldn't otherwise know? I'm planning on setting up an Apache server in desktop Ubuntu to mess around with before I start setting things up in the wild.
    Ok people, seriously. Not picking on you specifically, but all the people who bring their tech questions here: use Google to get your tutorials and howtos. Then actually do the thing. Then use Stack Overflow and its related sites for your specific questions. FRC Forum != tech help forum just because there are tech people here. You are all acting like your parents, who ask you to fix the computer when you visit home. Don't be that.

    One piece of advice for anyone: do not try to set up public servers that do anything real, important, or hold any important information unless you are an expert. There are so many different factors you have to get just right that you will get owned if you aren't an expert. You see all those so-called experts getting owned by LulzSec and shit? Even experts don't dot all the i's and cross all the t's. You will fuck it up, I guarantee it. If you need it done, get a professional to do it for you.
  • Personally, that rate is too high.
    Actually, the number is heuristic. I had set it to something low and had to keep cranking it higher and higher. My web pages would only half load: CSS wouldn't load, or a picture wouldn't load, or something else wouldn't. It turned out the rate limiter was nailing it. 100/minute isn't literal; I had to tweak up to that level to get pages to load without a hitch.
    Also, a firewall does use a very tiny amount of CPU that you save by not running one. There is an insignificantly small performance gain from not using a software firewall on an application server.
    Fair enough.
  • Actually, the number is heuristic. I had set it to something low and had to keep cranking it higher and higher. My web pages would only half load: CSS wouldn't load, or a picture wouldn't load, or something else wouldn't. It turned out the rate limiter was nailing it. 100/minute isn't literal; I had to tweak up to that level to get pages to load without a hitch.
    Exactly. If the rate is too high, it provides little to no DDoS protection; if it's too low, legitimate traffic gets blocked. It is extremely difficult to find a number that lets all legitimate traffic through while having any significant effect on an attack. The only real defenses are an application that can handle the load and/or a separate, smarter firewall that can recognize an attack pattern, dynamically mark IPs for blocking, and unblock them after the attack is long over.