Category Archives: Linux

Ubuntu 12.04: Create a Desktop Shortcut [Desktop Entry]

Method 1:

This simple tutorial will show you how to add an application shortcut to the Desktop in Ubuntu 12.04. In earlier versions of Ubuntu it wasn't that easy, but in 12.04 you can just drag the application's icon to your desktop and then chmod the shortcut's permissions. Here we go:

1. Go to the Dash and find the application you want a Desktop shortcut for, then DRAG its icon to your desktop.

2. Open a terminal by pressing Ctrl+Alt+T, then run the command below:

sudo chmod +x ~/Desktop/*.desktop

Without this step you may get an error like “Untrusted application launcher”.

3. You are done. Enjoy!

To do it via commands instead:

ln -s /usr/share/applications/your-app-name.desktop ~/Desktop

sudo chmod +x ~/Desktop/*.desktop
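For example, for Firefox (the launcher file name is an assumption; check /usr/share/applications for the exact name on your system):

ln -s /usr/share/applications/firefox.desktop ~/Desktop
sudo chmod +x ~/Desktop/firefox.desktop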


Method 2:

Create the Desktop Entry manually.

Simple:

Taking Eclipse as an example, the simplest Desktop Entry:

[Desktop Entry]
Name=Eclipse
Exec=/opt/eclipse/eclipse
Type=Application

General content:

[Desktop Entry]
Version=1.0
Encoding=UTF-8
Name=Eclipse
Comment=Eclipse IDE
Exec=eclipse
Icon=/opt/eclipse/icon.xpm
Terminal=false
Type=Application
Categories=GNOME;Application;Development;
StartupNotify=true
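Save the file with a .desktop extension, e.g. as ~/Desktop/eclipse.desktop (or put it in ~/.local/share/applications to make it show up in the Dash), then mark it executable as in Method 1:

chmod +x ~/Desktop/eclipse.desktop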

 

Git Push Error: Updates were rejected because the remote contains work that you do not have locally

Pushing to the remote repository:

git push -u origin master

The following error occurred:

! [rejected]        master -> master (fetch first)
error: failed to push some refs to '[email protected]:qiyuebuku/WxRobot.git'
hint: Updates were rejected because the remote contains work that you do
hint: not have locally. This is usually caused by another repository pushing
hint: to the same ref. You may want to first integrate the remote changes
hint: (e.g., 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

The solution is to integrate the remote repository's changes into the local repository before pushing to the remote server again:

git pull origin master
git push -u origin master   # push the local repository to the remote repository

Done!
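If you prefer a linear history, rebasing on the remote changes instead of merging also works:

git pull --rebase origin master
git push -u origin master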

Git push Updates were rejected because the tip of your current branch is behind

Running git push fails with the error:

DannideMacBook-Pro:connect-cas2-client danni$ git push origin master
To https://gitee.com/danni3/connect-cas2-client.git
 ! [rejected]        master -> master (non-fast-forward)
error: failed to push some refs to 'https://gitee.com/danni3/connect-cas2-client.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

Solution: git pull origin master

DannideMacBook-Pro:connect-cas2-client danni$ git pull origin master
From https://gitee.com/danni3/connect-cas2-client
 * branch            master     -> FETCH_HEAD
fatal: refusing to merge unrelated histories

The pull is refused because the local and remote histories have no common ancestor. Pass --allow-unrelated-histories to merge them anyway:

git pull origin master --allow-unrelated-histories
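Once the merge completes, push again:

git push origin master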

 

Nginx Error: an upstream response is buffered to a temporary file (Nginx 502)

1. Error: [warn] an upstream response is buffered to a temporary file

Solution: Add fastcgi_buffers 8 4K; fastcgi_buffer_size 4K;

2. Error: a client request body is buffered to a temporary file

Solution: Add client_max_body_size 2050m; client_body_buffer_size 1024k;
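For reference, these directives can all go in the http block of nginx.conf (a sketch using the values above):

http {
    fastcgi_buffers 8 4k;
    fastcgi_buffer_size 4k;

    client_max_body_size 2050m;
    client_body_buffer_size 1024k;
}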

The buffering mechanism of nginx:

For responses from the FastCGI server, nginx buffers them in memory and sends them to the client browser in turn. The size of the buffers is controlled by two directives: fastcgi_buffers and fastcgi_buffer_size.

For example, the configuration is as follows:

fastcgi_buffers 8 4K;
fastcgi_buffer_size 4K;

fastcgi_buffers tells nginx to create up to eight 4K buffers, while fastcgi_buffer_size is the size of the first buffer used when processing the response, which is not counted among the former. So the total maximum in-memory buffer size that can be created is 8 * 4K + 4K = 36K. These buffers are created dynamically according to the actual response size, not all at once; for an 8K page, nginx creates two 4K buffers.

When the response is no larger than 36K, all data is handled in memory. What if the response is larger than 36K? That is what fastcgi_temp is for: the extra data is temporarily written to files under that directory, and at the same time you will see a warning like this in error.log:

2010/03/13 03:42:22 [warn] 3994#0: *1 an upstream response is buffered to a temporary file /usr/local/nginx/fastcgi_temp/1/00/0000000001 while reading upstream, client: 192.168.1.111, server: www.xxx.cn, request: "POST /test.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.cn", referrer: "http://xxx.cn/test.php"

Obviously, if the buffers are set too small, nginx reads and writes the disk frequently, which hurts performance badly; but it is not the case that bigger is always better, since oversized buffers simply waste memory.
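If large responses are common, raising the in-memory ceiling is a reasonable first step (a sketch; tune the numbers to your actual response sizes):

fastcgi_buffers 8 32k;
fastcgi_buffer_size 32k;

This raises the maximum in-memory buffer to 8 * 32K + 32K = 288K.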

Fix Nginx 502 Error: upstream sent too big header while reading response header from upstream

In my case, the cookie headers had grown too large. Check the error log:

sudo gedit /var/log/nginx/error.log

It shows:

Error 502 upstream sent too big header while reading response header from upstream

If you search for this error, the explanations online are all similar: the requests carry too many header bytes (usually cookies), and the usual advice is to set:

fastcgi_buffer_size 128k;
fastcgi_buffers 8 128k;

Tune it step by step. Rather than fastcgi_buffers 8 128k, fastcgi_buffers 32 32k is better: the memory for each buffer is allocated and released as a whole, so keep the per-buffer unit size (k) as small as you can.

In addition, if you use nginx for load balancing, changing the above parameters is useless; the equivalent settings have to go in the forwarding (proxy) configuration, such as the following:

 

location @to_other {
    proxy_buffer_size 128k;
    proxy_buffers 32 32k;
    proxy_busy_buffers_size 128k;

    add_header X-Static transfer;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://backend;
}

The three proxy_buffer* lines at the top (bolded in the original post) are the only ones that matter for this error.

The fastcgi_* directives govern the responses nginx buffers when serving client requests through FastCGI; the proxy_* directives are used when nginx forwards requests as a client. If the response header is too big and exceeds the buffer size, it triggers the "upstream sent too big header" error above.

 

location ~ \.php$ {
    fastcgi_buffer_size 128k;
    fastcgi_buffers 32 32k;

    include /etc/nginx/fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /host/web/$fastcgi_script_name;
}

Nginx Timeout Error: upstream timed out (110: Connection timed out) while reading response header from upstream

Error content

We can see it in error.log:

Error: upstream timed out (110: Connection timed out) while reading response header from upstream

Cause of error

From the error log we know that the nginx proxy timed out waiting for the upstream server's response. What causes this problem?

It takes a long time for the back-end to process the request

It may also be a network problem between the proxy server and the upstream server

We tracked down the problem by locating the failing URL and finally determined that the backend takes a long time to process the request. The solution is either for developers to optimize the interface, or to set a longer timeout in nginx.

Error resolution

nginx timeout setting

Official website link: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout

Syntax: proxy_read_timeout time;
Default: proxy_read_timeout 60s;
Context: http, server, location
Defines a timeout for reading a response from the proxied server. The timeout is set only between two successive read operations, not for the transmission of the whole response. If the proxied server does not transmit anything within this time, the connection is closed.

The proxy_read_timeout directive sets the timeout between two successive read operations from the upstream server: if nothing is read from the upstream within that time after the last successful read, the connection is closed.

The default value is 60s. We can set it to 240s or 300s to cope with upstream servers that process requests slowly.

In the nginx configuration file, add:

proxy_read_timeout 240s; 
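For example, inside the proxied location (a sketch; the backend address is hypothetical):

location / {
    proxy_read_timeout 240s;
    proxy_pass http://127.0.0.1:8080;   # hypothetical backend
}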

Linux userdel: user xiaoming is currently used by process 4713

Beginners learning Linux are bound to run into puzzling problems. For example, when learning to delete a user, I hit the error above:

userdel: user xiaoming is currently used by process 4713

Not only that: creating the user had succeeded, but when I ran su - xiaoming I saw this error:

No directory, logging in with HOME=/

The solutions I found in the Linux community didn't fix it (maybe my search skills need to improve).

Finally, I found a similar problem on CSDN, and that solved it.

The method is as follows:

My guess is this: from root, su switched to the xiaoming user, and then from xiaoming it switched back to root. The xiaoming user was therefore still occupied by that shell process, so the process never died and userdel refused to remove the user.

So press Ctrl+D on the command line to exit the current login, then press Ctrl+D once more to exit xiaoming's login. Back at the root user, run:

userdel -r xiaoming

and xiaoming is deleted successfully.
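Alternatively, you can kill the processes still owned by the user before deleting it (the PID comes from the error message; pkill catches them all):

ps -u xiaoming            # list processes owned by the user (e.g. PID 4713 above)
sudo pkill -u xiaoming    # kill them
sudo userdel -r xiaoming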

[Solved] “Inconsistency detected by ld.so: dl-deps.c: 622:….. Assertion `nlist > 1′ failed!”

When running an application program on an ARM embedded development board, the following error appeared: “Inconsistency detected by ld.so: dl-deps.c: 622: _dl_map_object_deps: Assertion `nlist > 1' failed!”. The reason for this error is that a third-party library was not used correctly. I used the libpthread library in the program, but linked it as a dynamic library. When I added the -static option to the compile parameters and changed to linking libpthread statically, the problem was solved and the program ran normally.

My Makefile is posted below for reference:

# build every .c file in the current directory into one binary
src = $(wildcard ./*.c)
obj = $(patsubst ./%.c, ./%.o, $(src))

target = can_test
# cross compiler for the ARM board
CROSS_COMPILE = arm-xilinx-linux-gnueabi-gcc
# -static links libpthread statically, which avoids the ld.so assertion
FLAGS = -lpthread -static

$(target): $(obj)
        $(CROSS_COMPILE) $^ -o $@ $(FLAGS)

%.o: %.c
        $(CROSS_COMPILE) -c $< -o $@ $(FLAGS)

.PHONY: clean
clean:
        rm $(obj) $(target) -f
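Usage is the standard make workflow:

make          # cross-compile and link can_test
make clean    # remove the objects and the binary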

CentOS Service Cannot Start: Failed to start Login Service / Failed to start Install ABRT coredump hook

Error information:

Failed to start Install ABRT coredump hook
Failed to start Login Service

Solution:

Boot from a USB live system, go into the system files, and find the server's hard disk. Create a new folder new (if you don't have permission, use sudo mkdir new), mount the root filesystem onto it with mount /dev/mapper/centos-root new, enter new, modify the config file with vi etc/selinux/config, change SELINUX=enforcing to SELINUX=disabled, save the file, and restart.
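Roughly, from the live system (the volume name /dev/mapper/centos-root is the CentOS default and may differ on your machine):

sudo mkdir new
sudo mount /dev/mapper/centos-root new
sudo vi new/etc/selinux/config    # set SELINUX=disabled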

[root@ny01 ~]# vi /etc/selinux/config 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
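After rebooting, you can confirm that SELinux is off:

getenforce    # should print Disabled
sestatus      # detailed SELinux status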

Nginx Error when starting the service after install: Failed to start A high performance web server and a reverse proxy server

On Ubuntu 16.04:

apt update
apt install -y nginx
service nginx start

This fails with the following error:

root@zabbix:/home/appliance# systemctl status nginx.service

nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2018-07-25 18:33:26 UTC; 1min 27s ago
  Process: 30040 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=1/FAILURE)
  Process: 30037 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)

Jul 25 18:33:25 zabbix nginx[30040]: nginx: [emerg] listen() to [::]:80, backlog 511 failed (98: Address already in use)
Jul 25 18:33:25 zabbix nginx[30040]: nginx: [emerg] listen() to 0.0.0.0:80, backlog 511 failed (98: Address already in use)
Jul 25 18:33:25 zabbix nginx[30040]: nginx: [emerg] listen() to [::]:80, backlog 511 failed (98: Address already in use)
Jul 25 18:33:26 zabbix nginx[30040]: nginx: [emerg] listen() to 0.0.0.0:80, backlog 511 failed (98: Address already in use)
Jul 25 18:33:26 zabbix nginx[30040]: nginx: [emerg] listen() to [::]:80, backlog 511 failed (98: Address already in use)
Jul 25 18:33:26 zabbix nginx[30040]: nginx: [emerg] still could not bind()
Jul 25 18:33:26 zabbix systemd[1]: nginx.service: Control process exited, code=exited status=1
Jul 25 18:33:26 zabbix systemd[1]: Failed to start A high performance web server and a reverse proxy server.
Jul 25 18:33:26 zabbix systemd[1]: nginx.service: Unit entered failed state.
Jul 25 18:33:26 zabbix systemd[1]: nginx.service: Failed with result 'exit-code'.

Error reason: another process is already bound to HTTP port 80. You can run the command sudo lsof -i :80 to get the list of processes using this port, then stop/disable that web server.
Solution: kill the process using port 80 with the command sudo fuser -k 80/tcp
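Putting it together:

sudo lsof -i :80          # identify the process holding port 80
sudo fuser -k 80/tcp      # kill it
sudo service nginx start  # start nginx again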