Plesk installer/upgrade from CLI
12 October 2017
Via SSH:
/usr/local/psa/admin/bin/autoinstaller
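Run without arguments it starts the interactive text-based install/upgrade wizard; --help lists the options for unattended runs. A minimal sketch (run as root):
# /usr/local/psa/admin/bin/autoinstaller
# /usr/local/psa/admin/bin/autoinstaller --help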
Installing a Comodo SSL certificate from Enom
15 September 2017
The zip file sent by email contains four files: nomedominio_it.crt, COMODORSADomainValidationSecureServerCA.crt, AddTrustExternalCARoot.crt and COMODORSAAddTrustCA.crt.
In “Invia certificato come testo” (upload the certificate as text), paste the contents of nomedominio_it.crt into the “Certificato” (Certificate) textarea. Into the “Certificato CA” (CA certificate) textarea paste, in this order, the contents of: COMODORSADomainValidationSecureServerCA.crt, AddTrustExternalCARoot.crt, COMODORSAAddTrustCA.crt
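If you prefer to prepare a single CA bundle file instead of pasting the three files one after the other, a quick sketch from the shell (same order as above; the output file name is just an example):
cat COMODORSADomainValidationSecureServerCA.crt AddTrustExternalCARoot.crt COMODORSAAddTrustCA.crt > ca-bundle.crt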
Then restart the services:
service httpd restart
service nginx restart
You can verify that the certificate is installed correctly at https://www.digicert.com/help/
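The chain can also be checked from the command line with openssl (replace www.nomedominio.it with your own domain):
openssl s_client -connect www.nomedominio.it:443 -servername www.nomedominio.it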
Logwatch
4 August 2017
Logwatch is a customizable log analysis system. Logwatch parses through your system’s logs and creates a report analyzing areas that you specify. Logwatch is easy to use and will work right out of the package on most systems.
Applications create what are called “log files” to keep track of activities taking place at any given time. These files, which are far from being simple text outputs, can be very complex to go through, especially if the server being managed is a busy one.
When the time comes to refer to log files (e.g. in case of failure, loss of data etc.), making use of all the available help becomes vital. Being able to quickly understand (parse) what they can tell regarding the past events and analyzing what exactly has happened then becomes exceptionally important for coming up with a solution.
In this article we will talk about Logwatch: a very powerful log parser and analyzer which can make any dedicated system administrator’s life a little bit easier when tackling application-related tasks and issues.
Much like the black boxes of starships from Star Trek, to keep the systems (i.e. servers) running, administrators even today rely on logs. Jokes aside, these application-generated files play a decisive role in tracing back and understanding what happened in the past [at a given time], whether for full or partial data recovery (e.g. from transaction logs), performance or strategy analyses (e.g. from server logs) or amendments for the future (e.g. from access logs).
Simply put, log files will consist of actions and events taking place within a given time range.
A good log file should be as detailed as possible in order to help the administrator, who has the responsibility of maintaining the system, find the exact information needed for a certain purpose. For this very reason, log files are usually NOT concise: they contain loads of repetitions and loads of (mostly) redundant entries which need thorough analysis and filtering to make sense to a human.
This is where Logwatch, a computer application designed for this job, comes into play.
Log management is an area consisting mostly of search, log rotation / retention and reporting. Logwatch is an application that helps with simple log management by analyzing your logs daily and reporting a short digest of the activities taking place on your machine.
Reports created by Logwatch are categorised by the services (i.e. applications) running on your system; you can configure which services to include, or include all of them, by modifying its relatively simple configuration file. Furthermore, Logwatch allows the creation of custom analysis scripts for specific needs.
Please note: Logwatch is a harmless application which should not interfere with your current services or workload. However, as always, it is recommended that you first try it on a new system and make sure to take backups.
It is very simple to get Logwatch installed on a RHEL-based system (e.g. CentOS). As it is an application consisting of various Perl scripts, certain related dependencies are required; since we are going to use the yum package manager, these will be taken care of automatically. Unless you have mailx installed already, it will be pulled in during the process as well.
To install Logwatch on CentOS / RHEL, run the following:
$ yum install -y logwatch
Getting Logwatch for Debian based systems (e.g. Ubuntu) is very similar to the process explained above, apart from the differences in package managers (aptitude v. yum).
To install Logwatch on Ubuntu / Debian, run the following:
$ aptitude install -y logwatch
Although its settings can be overridden during each run manually, in general, you will want to have Logwatch running daily, using common configuration.
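On most distributions the package itself takes care of the daily run by dropping a script into /etc/cron.daily, so no extra scheduling is usually needed. A quick check (the script name may vary between distributions):
ls /etc/cron.daily/ | grep -i logwatch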
The default configuration file for Logwatch is located at:
/usr/share/logwatch/default.conf/logwatch.conf
Let’s open up this file using the nano text editor in order to modify its contents:
$ nano /usr/share/logwatch/default.conf/logwatch.conf
Upon running the command above, you will be met with a long list of variables the application uses each time it runs, whether automatically or manually.
In order to begin using it, we will need to make a few changes to these defaults.
Please remember that in the future you might want to come back and modify certain settings defined here. All services (applications) that are analyzed by Logwatch are listed in this file (see item 5 below). As you install or remove applications from your virtual server, you can continue to receive reports on all of them, or only some of them, by changing the settings here.
The important options which we need to set:
Please note: You will need to use your arrow keys to move up or down the lines when making the following changes to the document. Once you are done going through the changes (items 1 – 6), press CTRL+X and then confirm with Y to save and close. Changes will come into effect automatically the next time logwatch runs.
1. The e-mail address to which daily digest (reports) are sent:
MailTo = root
Replace root with your email address.
Example: MailTo = sysadmin@mydomain.com
2. The e-mail address from which these reports originate:
MailFrom = Logwatch
You might wish to replace Logwatch with your own address here as well.
Example: MailFrom = sysadmin@mydomain.com
3. Setting the range for the reports:
Range = yesterday
You have options of receiving reports for All (all available since the beginning), Today (just today) or Yesterday (just yesterday).
Example: Range = Today
4. Setting the reports’ detail:
Detail = Low
You can modify the reports’ detail here. Options are: Low, Medium and High.
Example: Detail = Medium
5. Setting services (applications) to be analysed:
By default, Logwatch covers a really wide range of services. If you would like to see a full list, you can list the contents of the scripts/services directory located under /usr/share/logwatch/. Example:
ls -l /usr/share/logwatch/scripts/services
Service = All
You can choose to receive reports for all services or some specific ones.
For all services, keep the line as: Service = All
If you wish to receive reports for specific ones, modify it similar to the following example, listing each service on a new line (e.g. Service = [name]).
Example:
Service = sendmail
Service = http
Service = identd
Service = sshd2
Service = sudo
..
6. Disabling daily reports:
# DailyReport = No
If you do not wish to have daily reports generated, you should uncomment this line.
Example: DailyReport = No instead of # DailyReport = No
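Putting items 1 to 6 together, the relevant lines of logwatch.conf might end up looking something like this (the values are only an example; DailyReport is left commented out so the daily digest stays enabled):
MailTo = sysadmin@mydomain.com
MailFrom = sysadmin@mydomain.com
Range = Today
Detail = Medium
Service = All
# DailyReport = No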
And that’s it! After making these changes, you will receive daily reports based on log files from your server automatically.
To learn more about Logwatch, and about creating custom services to receive reports on, you can consult its full documentation.
It should be mentioned that you have the option to run Logwatch manually whenever you need through the command line.
Here are the available options [from the documentation]:
logwatch [--detail level] [--logfile log-file-group] [--service service-name] [--print]
[--mailto address] [--archives] [--range range] [--debug level] [--save file-name]
[--logdir directory] [--hostname hostname] [--splithosts] [--multiemail]
[--output output-type] [--numeric] [--no-oldfiles-log] [--version] [--help|--usage]
Unless you specify an option, it will be read from the configuration file.
Example:
$ logwatch --detail Low --mailto email@address --service http --range today
Corrupted Apache configuration. Vhosts: sites unreachable.
25 July 2017
The following error is shown for a subscription:
Error: New configuration files for the Apache web server were not created due to the errors in configuration templates: Destination directory '/etc/nginx/plesk.conf.d/vhosts' not exist.
The following error may be shown on Plesk homepage:
New files of configuration for Apache web server were not built due to errors in configuration templates. The detailed error message was e-mailed to you, so please check the e-mail, fix the errors, and click here to retry generating configuration
There are a lot of entries with “error” status in the Configurations table of the psa database:
mysql> select id,objectId,status,description from Configurations where status="error";
+------+----------+--------+-------------------------------------------------------------------+
| id | objectId | status | description |
+------+----------+--------+-------------------------------------------------------------------+
| 9 | 1 | error | Destination directory '/etc/nginx/plesk.conf.d/vhosts' not exist |
| 13 | 2 | error | Destination directory '/etc/nginx/plesk.conf.d/vhosts' not exist |
| 17 | 3 | error | Destination directory '/etc/nginx/plesk.conf.d/vhosts' not exist |
| 25 | 5 | error | Destination directory '/etc/nginx/plesk.conf.d/vhosts' not exist |
Rebuilding of web server configuration files fails with an error:
# /usr/local/psa/admin/bin/httpdmng --reconfigure-all
2016-09-23T13:51:14-07:00 ERR (3): Apache config (14746638730.91104700) generation failed:
Execution failed.
Command: httpdmng
Arguments: Array
(
[0] => --reconfigure-server
[1] => -no-restart
)
Details: Empty error message from utility.
Domains’ configuration is broken
Log in to Plesk database and check which domains’ configuration is corrupted:
mysql> select id,objectId,status,description from Configurations where status="error";
Depending on the number of domains in the output, apply one of the following two solutions.
I. Solution to fix a few domains
Reconfigure domains one by one:
# /usr/local/psa/admin/sbin/httpdmng --reconfigure-domain example.com
II. Solution to fix all domains in bulk
Remove the broken records from the psa.Configurations table:
# MYSQL_PWD=`cat /etc/psa/.psa.shadow` mysql -u admin psa -e"delete from Configurations"
Then rebuild the web server configuration files with the httpdmng utility:
~# /usr/local/psa/admin/bin/httpdmng --reconfigure-all
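Once the rebuild finishes, you can verify that no broken entries are left, reusing the query from above:
# MYSQL_PWD=`cat /etc/psa/.psa.shadow` mysql -u admin psa -e"select id,objectId,status,description from Configurations where status='error'"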
Installing Laravel (any version)
2 June 2017
To complete today’s tutorial, you’re only going to need one prerequisite: a hosting account on a server running Plesk 12, with SSH access.
Right, so what do we need to do to set it up? Thanks to the modern UI and intelligent layout available in Plesk 12, it’s a real no-brainer, requiring only a few steps to have it live.
Let’s work through the process now.
When you first log in to your account, click Domains in the left-hand navigation bar, under Hosting Services. To keep this simple, we’re going to create a new sub-domain, specifically under conetix.com.
On the far right of that domain’s row in the table, click “Manage Hosting“. This will display the domain and sub-domain entries.
Above the first entry, you’ll see three buttons: “Add New Domain“, “Add New Subdomain” and “Add New Domain Alias“. Click “Add New Subdomain”, and on the form which appears, enter “laravel” for the subdomain name. The Document root field will be pre-populated, based on the Subdomain name you entered. But at the end, add in “/laravel/public”.
The reason for this is that the command we’ll run to install Laravel creates a sub-directory, called laravel, and the application’s bootstrap file is located in the public directory beneath that.
When it’s finished you’ll be back on the domains page you started at, with your new domain available at the bottom of the list. In the entry, you’ll see all the pertinent details, along with links to more, as well as to make changes.
Now we need to check that SSH access is set up properly, so that we can log in and run a few command line scripts. If you’re not too familiar with command line scripts, that’s ok. The ones I’ve listed here are quite simple and have sufficient documentation, should it be needed.
In the second section, titled “System user“, make sure that the last option, “Access to the server over SSH“, is set to /bin/bash or /bin/sh and click OK at the bottom.
The reason for this is that these scripts need a shell environment to run in. sh may work, but bash works best for this example. Now you’re ready to log in and begin installing the application.
Note: if you’re not sure of the user’s password, either regenerate it or check with your systems administrator.
Now we have one last pre-install step: creating a database. Back in the sub-domain settings, above the “Show Less” tab, on the right you’ll see a Databases option.
Next to it, click “Add New Database“, and you’ll be taken to the new database form.
On that page, set “Database name” to “admin_laravel“, leave “Create a new database user” checked and set “Database user name“ to “laravel_user“. Set a secure password under “New Password / Confirm Password“. Make a note of these details as you will need them later in the article. For “Access control“, choose the first option, “Allow local connections only“, then click OK.
You’ll then be redirected to the databases list, where you’ll see the new database last in the list (or the only one, if this is the first). Feel free to inspect it if you like, but there’s no need to in order to complete this tutorial.
To install composer, login as the Plesk System User we created earlier and cd to ‘/var/www/vhosts/conetix.com/laravel.conetix.com’ substituting ‘conetix.com’ based on your domain setup; then run the command below:
curl -s https://getcomposer.org/installer | php --
This downloads Composer, piping it through to PHP, in the process creating a self-contained file we can use, called composer.phar.
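As a quick sanity check that the download worked, you can ask Composer for its version:
php composer.phar --version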
N.B.: when the operating system ships with a default PHP version (for example 5.6) but additional versions are installed (e.g. 7.0 and 7.1), the PHP interpreter to use must be referenced directly; it can be found at
/opt/plesk/php/[version]/bin/php
For example:
/opt/plesk/php/[version]/bin/php ./composer.phar create-project laravel/laravel --prefer-dist
N.B.: if you need to use npm, once node.js is installed for the domain the path to use is:
/opt/plesk/node/XX/bin/npm
where XX is the node.js version number.
Now we need to install Laravel. From your current directory, run the command below:
php ./composer.phar create-project laravel/laravel --prefer-dist
Assuming that there are no errors, timeouts or permission issues, we now need to configure the setup. So cd into the new laravel directory and, using vim or your editor of choice, edit app/config/app.php. In the file, set the following options:
| Setting | Option |
|---|---|
| debug | true |
| url | ‘http://laravel.conetix.com’ |
Save the changes, then edit app/config/database.php and find the section marked ‘connections’ and make sure the settings are as below for the mysql sub-option (these are the same details used when setting up the database earlier):
| Setting | Option |
|---|---|
| host | ‘localhost’ |
| database | ‘admin_laravel’ |
| username | ‘laravel_user’ |
For the password, insert your generated password. After saving that, you’re ready to run the default Laravel application. First generate an application key:
php artisan key:generate
The result will be something like:
Application key [base64:k/XGJzTR0vCEP/nVmU866vXAjzYbQoA452AXn5cjIOU=] set successfully.
Open the config/app.php file and set the ‘key’ field to the value generated above:
‘key’ => env(‘APP_KEY’,’base64:k/XGJzTR0vCEP/nVmU866vXAjzYbQoA452AXn5cjIOU=’),
UPDATE
It can happen that after the installation, when starting the application, a message like this appears:
UnexpectedValueException The stream or file “/var/www/html/upsecurit/storage/logs/laravel.log” could not be opened: failed to open stream: Permission denied
In this case the problem is caused by write permissions on some folders and by SELinux.
Proceed as follows (source: https://stackoverflow.com/questions/30306315/laravel-5-laravel-log-could-not-be-opened-permission-denied)
Three things need to be done:
semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/<Laravel Site>/storage(/.*)?"
semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/<Laravel Site>/bootstrap/cache(/.*)?"
restorecon -Rv "/var/www/<Laravel Site>/storage"
restorecon -Rv "/var/www/<Laravel Site>/bootstrap/cache"
setfacl -R -m u:apache:rwX storage/
setfacl -R -m u:apache:rwX bootstrap/cache/
The last thing you need to do is to re-enable SELinux, if you had disabled it while troubleshooting.
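For example, assuming you had switched SELinux to permissive mode while debugging:
setenforce 1    # back to enforcing mode
getenforce      # should now print "Enforcing"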
Install fail2ban on CentOS 6
1 April 2017
Source: https://www.digitalocean.com/community/tutorials/how-to-protect-ssh-with-fail2ban-on-centos-6
UPDATE
A newer guide is available at: https://www.vultr.com/docs/how-to-setup-fail2ban-on-centos
Servers do not exist in isolation, and servers with only the most basic SSH configuration can be vulnerable to brute force attacks. fail2ban provides a way to automatically protect the server from malicious behaviour. The program works by scanning through log files and reacting to offending actions such as repeated failed login attempts.
Because fail2ban is not available in the default CentOS repositories, we should start by adding the EPEL repository:
rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Follow up by installing fail2ban:
yum install fail2ban
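On CentOS 6 you will probably also want fail2ban to start automatically at boot; the standard SysV tools handle this:
chkconfig fail2ban on
service fail2ban start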
The default fail2ban configuration file is located at /etc/fail2ban/jail.conf. The configuration work should not be done in that file, however; instead we should make a local copy of it.
cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
After the file is copied, you can make all of your changes within the new jail.local file. Many of the possible services that may need protection are already in the file. Each is located in its own section, configured and turned off.
Open up the new fail2ban configuration file:
vi /etc/fail2ban/jail.local
The first section of defaults covers the basic rules that fail2ban will follow. If you want to set up more nuanced protection for your virtual private server, you can customize the details in each section.
You can see the default section below.
[DEFAULT] # "ignoreip" can be an IP address, a CIDR mask or a DNS host. Fail2ban will not # ban a host which matches an address in this list. Several addresses can be # defined using space separator. ignoreip = 127.0.0.1 # "bantime" is the number of seconds that a host is banned. bantime = 3600 # A host is banned if it has generated "maxretry" during the last "findtime" # seconds. findtime = 600 # "maxretry" is the number of failures before a host get banned. maxretry = 3
Write your personal IP address into the ignoreip line. You can separate each address with a space. IgnoreIP allows you to whitelist certain IP addresses and make sure that they are not locked out of your VPS. Including your address will guarantee that you do not accidentally ban yourself from your own virtual private server.
The next step is to decide on a bantime, the number of seconds that a host would be blocked from the server if they are found to be in violation of any of the rules. This is especially useful in the case of bots, that once banned, will simply move on to the next target. The default is set for 10 minutes—you may raise this to an hour (or higher) if you like.
Maxretry is the number of incorrect login attempts that a host may make before it gets banned for the length of the ban time.
Findtime refers to the amount of time that a host has to log in. The default setting is 10 minutes; this means that if a host attempts, and fails, to log in more than the maxretry number of times in the designated 10 minutes, they will be banned.
The SSH details section is just a little further down in the config, and it is already set up and turned on. Although you should not be required to make any changes within this section, you can find the details about each line below.
[ssh-iptables]
enabled = true
filter = sshd
action = iptables[name=SSH, port=ssh, protocol=tcp]
sendmail-whois[name=SSH, dest=root, sender=fail2ban@example.com]
logpath = /var/log/secure
maxretry = 5
Enabled simply refers to the fact that SSH protection is on. You can turn it off with the word “false”.
The filter, set by default to sshd, refers to the config file containing the rules that fail2ban uses to find matches. The name is the file name without its extension; for example, sshd refers to /etc/fail2ban/filter.d/sshd.conf.
Action describes the steps that fail2ban will take to ban a matching IP address. Just like the filter entry, each action refers to a file within the action.d directory. The default ban action, “iptables” can be found at /etc/fail2ban/action.d/iptables.conf .
In the “iptables” details, you can customize fail2ban further. For example, if you are using a non-standard port, you can change the port number within the brackets to match, making the line look more like this:
eg. iptables[name=SSH, port=30000, protocol=tcp]
You can change the protocol from TCP to UDP in this line as well, depending on which one you want fail2ban to monitor.
If you have a mail server set up on your virtual private server, Fail2Ban can email you when it bans an IP address. In the default case, the sendmail-whois refers to the actions located at /etc/fail2ban/action.d/sendmail-whois.conf.
logpath refers to the log location that fail2ban will track.
The max retry line within the SSH section has the same definition as the default option. However, if you have enabled multiple services and want to have specific values for each one, you can set the new max retry amount for SSH here.
After making any changes to the fail2ban config, always be sure to restart Fail2Ban:
sudo service fail2ban restart
You can see the rules that fail2ban puts in effect within the IP table:
iptables -L
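You can also ask fail2ban itself which jails are active and how many addresses are currently banned (assuming the ssh-iptables jail configured above):
fail2ban-client status
fail2ban-client status ssh-iptables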
fail2ban also writes its own log. To send its messages to a dedicated log file, edit /etc/fail2ban/fail2ban.conf and set the log target:
vim /etc/fail2ban/fail2ban.conf
logtarget = /var/log/fail2ban.log
Installing and running findbot.pl
17 December 2016
Move to the server’s /home directory:
cd /home
Create a “findbot” folder and enter it:
mkdir findbot
cd findbot
Download the file with wget and change its permissions:
wget www.abuseat.org/findbot.pl
chmod 700 findbot.pl
Then launch the scan in the background on the directory to check:
nohup ./findbot.pl -c /directory_da_scansionare &
Follow the execution with:
tail -f nohup.out
Enable TLS/SSL for proftpd on Ubuntu/Fedora/CentOS
14 November 2016
Benefits of TLS/SSL
TLS/SSL provides numerous benefits to clients and servers over other methods of authentication, including:
– Strong authentication, message privacy, and integrity
– Interoperability
– Algorithm flexibility
– Ease of deployment
– Ease of use
1- Install Proftpd and openssl
apt-get install proftpd openssl
yum install proftpd openssl
2- Create SSL Certificates
mkdir /opt/ssl/
cd /opt/ssl
3- Generate ssl certificate with
openssl req -new -x509 -days 365 -nodes -out proftpd.cert.pem -keyout proftpd.key.pem
Generating a 2048 bit RSA private key
.............+++
..........+++
writing new private key to 'proftpd.key.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:NL
State or Province Name (full name) []:Adam
Locality Name (eg, city) [Default City]:Adam
Organization Name (eg, company) [Default Company Ltd]:Unixmen
Organizational Unit Name (eg, section) []:Unixmen
Common Name (eg, your name or your server’s hostname) []:Unixmen-test
Email Address []:@unixmen.com
4- Enable TLS In ProFTPd
Edit /etc/proftpd/proftpd.conf or /etc/proftpd.conf (Ubuntu/CentOS) and add the following:
TLSEngine on
TLSLog /var/log/proftpd/tls.log
TLSProtocol SSLv23
TLSOptions NoCertRequest
TLSRSACertificateFile /opt/ssl/proftpd.cert.pem
TLSRSACertificateKeyFile /opt/ssl/proftpd.key.pem
TLSVerifyClient off
TLSRequired on
5- Check that proftpd is ready with:
# proftpd -vv
ProFTPD Version: 1.3.3g (maint)
Scoreboard Version: 01040003
Built: Thu Nov 10 2011 16:20:47 UTC
Loaded modules:
mod_lang/0.9 mod_ctrls/0.9.4 mod_cap/1.0 mod_vroot/0.9.2 mod_tls/2.4.2 mod_auth_pam/1.1 mod_readme.c mod_ident/1.0 mod_dso/0.5 mod_facts/0.1 mod_delay/0.6 mod_site.c mod_log.c mod_ls.c mod_auth.c mod_auth_file/0.8.3 mod_auth_unix.c mod_xfer.c mod_core.c
6- Now start proftpd
/etc/init.d/proftpd start
Starting proftpd: [ OK ]
and it’s done!
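To confirm that TLS is actually offered, you can test the handshake from another machine with openssl (assuming the default FTP port 21; replace your.server.tld with your host name):
openssl s_client -connect your.server.tld:21 -starttls ftp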
Quick use of wget to download an entire website
10 November 2016
If you ever need to download an entire web site, perhaps for off-line viewing, wget can do the job. For example:
$ wget --recursive --no-clobber --page-requisites --html-extension --convert-links --restrict-file-names=windows --domains website.org --no-parent --directory-prefix=./path_destinazione http://nomesito.tld
The options are:
--recursive: download the entire site by following its links.
--no-clobber: do not overwrite files that already exist (useful if the download is interrupted and resumed).
--page-requisites: also download all the elements needed to display the pages (images, CSS, and so on).
--html-extension: save files with the .html extension.
--convert-links: rewrite links so that the downloaded pages work locally, off-line.
--restrict-file-names=windows: restrict file names to characters that are also valid on Windows.
--domains website.org: do not follow links outside website.org.
--no-parent: do not crawl above the starting directory.
--directory-prefix: save everything under the given local directory (here ./path_destinazione).
rsync: synchronizing files and folders with rsync
15 June 2016
One of rsync’s main features is the ability to perform encrypted transfers over the SSH protocol, which makes it possible to synchronize files and folders between two systems connected to the network.
The great strength of rsync over SSH lies precisely in this: the ability to synchronize remote machines securely and efficiently.
Such a procedure is very useful, for example, for running backups to remote servers, or for transferring large amounts of data when migrating a website from one server to another.
The syntax to import data from a remote server is as follows:
rsync -auvz user@host:/path/to/source /path/to/destination
Conversely, to export data from the local machine to a remote server:
rsync -auvz /path/to/source user@host:/path/to/destination
As you can see, the options are the same as for a local sync; the only difference in the syntax is the user@host pair preceding the source or destination folder.
Obviously, once the command is launched, the remote system will ask for the password before carrying out the requested tasks.
Note that with this syntax rsync uses an SSH connection on the default port (TCP 22); if the remote server listens on a different port, you need to pass the arguments to SSH directly via the --rsh flag, like this:
rsync -auvz --rsh="ssh -p PORTA" user@host:/path/to/source /path/to/destination
Where PORTA, of course, can be any port on which an SSH server is listening.
To see the progress of the sync, just add the --progress flag, like this:
rsync -auvz --progress user@host:/path/to/source /path/to/destination
You can also set a filter on file sizes so as to exclude from the synchronization, for example, files that are too large or too small. To do so, use the --max-size and --min-size flags, like this:
rsync -auvz --max-size='500K' user@host:/path/to/source /path/to/destination
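To filter out files that are too small as well, the two flags can be combined; a sketch (the thresholds are only examples):
rsync -auvz --min-size='10K' --max-size='500K' user@host:/path/to/source /path/to/destination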
Anyone who deals with computer systems, even only occasionally, should know rsync: its use simplifies all backup and data transfer operations, making those performed to or from remote machines extremely simple and secure.