Category Archives: Linux

[Solved] SELinux intercepts vsftpd for CentOS (without shutting down SELinux)

vsftpd is an FTP server program, and SELinux is the mandatory access control security subsystem of CentOS (not a firewall). Because vsftpd is restricted by SELinux by default, the following FTP errors may be encountered:

226 transfer done (but failed to open directory)

550 failed to change directory

550 create directory operation failed

553 Could not create file.

Or, after the client sends the LIST command, the server simply stops responding and disconnects after a timeout (500 oops: vsftpd: chroot)

When such a problem occurs, vsftpd usually lacks sufficient permissions, most likely because SELinux is blocking it. The popular solution found online is to turn off SELinux entirely, but that causes other security problems, so there are better approaches.

To determine whether this is the problem, temporarily disable SELinux and see if the symptoms disappear:

setenforce 0 #Temporarily put SELinux into Permissive mode

Try again after running the command. If FTP can now list the directory and upload and download files, SELinux is confirmed as the cause.

Solution: run getsebool -a | grep ftp to view the relevant SELinux booleans:

getsebool -a | grep ftp

#The booleans are listed as follows: off means the permission is disabled, on means it is enabled; anything not explicitly set defaults to off
ftpd_anon_write --> off
ftpd_connect_all_unreserved --> off
ftpd_connect_db --> on
ftpd_full_access --> on
ftpd_use_cifs --> off
ftpd_use_fusefs --> off
ftpd_use_nfs --> off
ftpd_use_passive_mode --> off
httpd_can_connect_ftp --> off
httpd_enable_ftp_server --> off
tftp_anon_write --> off
tftp_home_dir --> on

Among them, ftp_home_dir and ftpd_full_access (named allow_ftpd_full_access on older releases) must be on for vsftpd to access the FTP root directory and transfer files.

Run the following command:

setsebool -P ftp_home_dir 1
setsebool -P allow_ftpd_full_access 1

Note that these two commands usually take ten seconds or more to run, because -P persists the setting into the policy store.

After running them, restore SELinux to Enforcing mode:

setenforce 1 #Entering Enforcing Mode

Barring surprises, the FTP directory is now accessible and vsftpd can upload and download files normally.

If the problem is still not solved, the FTP directory may lack filesystem permissions. Try chmod -R 777 <path> to make the path readable and writable, then test again; this usually resolves the problem (though 777 is a blunt instrument, so tighten the permissions once things work).
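To script the verification step, the boolean states can be checked programmatically. A minimal sketch, fed with canned sample output so it runs without SELinux; on a live system, the sample would come from the real getsebool -a | grep ftp:

```shell
#!/bin/sh
# Canned sample of `getsebool -a | grep ftp` output (illustration only)
sample="ftp_home_dir --> on
ftpd_full_access --> on
ftpd_anon_write --> off"

# check_bool NAME LISTING: print the state (on/off) of the named boolean
check_bool() {
    printf '%s\n' "$2" | awk -v name="$1" '$1 == name { print $3 }'
}

# Suggest the fix for any required boolean that is still off
for b in ftp_home_dir ftpd_full_access; do
    if [ "$(check_bool "$b" "$sample")" != "on" ]; then
        echo "run: setsebool -P $b 1"
    fi
done
```

On a real server, replace the `sample` variable with the live command output.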

Linux shell script execution error: bad substitution [How to Solve]

the script content is as follows:

#!/bin/sh
string="This is a string!"
echo ${string:1:4}

After adding executable permission and running the script, it reports: Bad substitution

Cause analysis:

This depends on whether the script is interpreted by /bin/sh or /bin/bash.
The script specifies sh, and on Ubuntu /bin/sh is a symlink to dash, not bash; dash does not support the ${string:1:4} substring expansion, hence the error message.

Solution:

#!/bin/bash
string="This is a string!"
echo ${string:1:4}
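For scripts that must remain /bin/sh-portable, the same substring can be taken with external tools instead of the bash-only expansion. A sketch using cut (offset 1, length 4 corresponds to character positions 2 through 5):

```shell
#!/bin/sh
string="This is a string!"

# POSIX-portable equivalent of bash's ${string:1:4}
sub=$(printf '%s' "$string" | cut -c2-5)
printf '%s\n' "$sub"    # prints "his " (h, i, s and a space)
```

This runs identically under dash and bash.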

Read More:

The most commonly used shells in Linux are the Bourne shell (sh), the C shell (csh), and the Korn shell (ksh).

The Bourne shell is the original shell used by Unix and is available on every Unix. It is quite good for shell programming, but does not handle interaction with the user as well as several other shells.
The default shell for Linux operating systems is the Bourne Again shell (Bash for short), an extension of the Bourne shell that is fully backward compatible with it and adds and enhances many features. Bash is placed in /bin/bash and provides functions such as command completion, command-line editing, and a command history. It also incorporates many advantages of the C shell and the Korn shell, and offers a flexible and powerful programming interface together with a very user-friendly interface.

/bin/sh on GNU/Linux was traditionally a symbolic link to bash (Bourne-Again Shell), but because bash is large and complex, the Almquist shell (ash) was ported from NetBSD to Linux and renamed dash (Debian Almquist Shell), and it was suggested that /bin/sh point to dash instead. Ubuntu has done this since 6.10 and claims a significant increase in boot speed as a result.

Linux: How to Check the Status of Page

Recently, we encountered a problem of page release exception. The stack is as follows:

[ 1000.691858] BUG: Bad page state in process server.o  pfn:309d22
[ 1000.691859] page:ffffea000c274880 count:0 mapcount:0 mapping:ffff880279688308 index:0x0
[ 1000.691860] page flags: 0x2fffff00020000(mappedtodisk)
[ 1000.691862] page dumped because: non-NULL mapping
[ 1000.691863] Modules linked in: stap_11fa48f04897d7244c07086623507d9_14185(OE) xfs libcrc32c tcp_diag inet_diag xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ipt_REJECT nf_reject_ipv4 tun ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter bridge stp llc dm_mirror dm_region_hash dm_log dm_mod intel_powerclamp snd_hda_intel coretemp ppdev kvm_intel snd_hda_codec snd_hda_core iTCO_wdt gpio_ich iTCO_vendor_support snd_hwdep ioatdma snd_seq parport_pc kvm shpchp parport nfsd snd_seq_device snd_pcm pcspkr sg irqbypass ntb i2c_i801 snd_timer intel_ips snd lpc_ich soundcore auth_rpcgss nfs_acl lockd grace sunrpc ip_tables ext4 mbcache jbd2 sd_mod crc_t10dif crct10dif_generic crct10dif_common
[ 1000.691895]  amdkfd amd_iommu_v2 radeon i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm ixgbe ahci libahci libata tg3 mdio crc32c_intel dca serio_raw ptp i2c_core pps_core fjes floppy [last unloaded: stap_be77ad5fa9d5c22c253e09b1d6390ba4__1921]
[ 1000.691908] CPU: 3 PID: 29178 Comm: server.o Tainted: G    B      OE  ------------   3.10.0+ #10
[ 1000.691910] Hardware name: To be filled by O.E.M. To be filled by O.E.M./To be filled by O.E.M., BIOS 4.6.3 01/14/2011
[ 1000.691911]  ffffea000c274880 000000001df7af73 ffff88050ee37d08 ffffffff81688527
[ 1000.691913]  ffff88050ee37d30 ffffffff81683751 ffffea000c274880 0000000000000000
[ 1000.691915]  000fffff00000000 ffff88050ee37d78 ffffffff81188d6d fff00000fe000000
[ 1000.691918] Call Trace:
[ 1000.691920]  [<ffffffff81688527>] dump_stack+0x19/0x1b
[ 1000.691922]  [<ffffffff81683751>] bad_page.part.75+0xdf/0xfc
[ 1000.691925]  [<ffffffff81188d6d>] free_pages_prepare+0x16d/0x190
[ 1000.691927]  [<ffffffff811897e4>] free_hot_cold_page+0x74/0x160
[ 1000.691930]  [<ffffffff8118e6a3>] __put_single_page+0x23/0x30
[ 1000.691932]  [<ffffffff8118e6f5>] put_page+0x45/0x60
[ 1000.691934]  [<ffffffff8122cd25>] page_cache_pipe_buf_release+0x15/0x20
[ 1000.691937]  [<ffffffff8122d7a4>] splice_direct_to_actor+0x134/0x200
[ 1000.691940]  [<ffffffff8122d9f0>] ?do_splice_from+0xf0/0xf0
[ 1000.691942]  [<ffffffff8122d8d2>] do_splice_direct+0x62/0x90
[ 1000.691944]  [<ffffffff811fe7c8>] do_sendfile+0x1d8/0x3c0
[ 1000.691947]  [<ffffffff811ffb2e>] SyS_sendfile64+0x5e/0xb0
[ 1000.691949]  [<ffffffff81698b49>] system_call_fastpath+0x16/0x1b
[ 1000.691951] BUG: Bad page state in process server.o  pfn:309d23

It can be seen that the page release failed because of "non-NULL mapping": at release time, page->mapping was not NULL. Let's look at the check function:

static inline int free_pages_check(struct page *page)
{
    char *bad_reason = NULL;
    unsigned long bad_flags = 0;

    if (unlikely(page_mapcount(page)))
        bad_reason = "nonzero mapcount";
    if (unlikely(page->mapping != NULL))    /* a non-NULL mapping is considered an exception */
        bad_reason = "non-NULL mapping";
    if (unlikely(page_ref_count(page) != 0))
        bad_reason = "nonzero _count";
    if (unlikely(page->flags & PAGE_FLAGS_CHECK_AT_FREE)) {
        bad_reason = "PAGE_FLAGS_CHECK_AT_FREE flag(s) set";
        bad_flags = PAGE_FLAGS_CHECK_AT_FREE;
    }
    if (unlikely(mem_cgroup_bad_page_check(page)))
        bad_reason = "cgroup check failed";
    if (unlikely(bad_reason)) {
        bad_page(page, bad_reason, bad_flags);
        return 1;
    }
    page_cpupid_reset_last(page);
    if (page->flags & PAGE_FLAGS_CHECK_AT_PREP)
        page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
    return 0;
}

By design, if the page is anonymous, its mapping is set to NULL when it is released, as follows:

static bool free_pages_prepare(struct page *page, unsigned int order)
{
    int i;
    int bad = 0;

    trace_mm_page_free(page, order);
    kmemcheck_free_shadow(page, order);

    if (PageAnon(page))
        page->mapping = NULL;
    for (i = 0; i < (1 << order); i++)
        bad += free_pages_check(page + i);
    if (bad)
        return false;

    if (!PageHighMem(page)) {
        debug_check_no_locks_freed(page_address(page),PAGE_SIZE<<order);
        debug_check_no_obj_freed(page_address(page),
                       PAGE_SIZE << order);
    }
    arch_free_page(page, order);
    kernel_map_pages(page, 1 << order, 0);

    return true;
}

Since the bad-page path is entered, the page was not anonymous when it was released. In my code, page->mapping points to the file's address_space, so mapping is not NULL.

The root cause: in my own memory pool that manages these pages, one code path did not maintain the reference count correctly, so when the page was released abnormally, the mapping pointer had not yet been cleaned up.

Linux Error: apt-get 404 not found [How to Solve]

While doing some Linux work, I accidentally removed the vi environment, and reinstalling it hit various 404 errors, probably because Ubuntu had not been updated for a long time and the software sources had moved.

So.

sudo apt-get update

sudo apt-get install vim

At this point, I found that the installation was not successful

It was probably a problem with sources.list: /etc/apt/sources.list is the file that stores the server addresses of third-party software sources. After the problem appeared, I followed some online tutorials and apparently lost sources.list by accident; after that I could not even see the details page when finding vim in the software center.

So re-check.

Check the contents of your sources.list file with sudo gedit /etc/apt/sources.list (gedit, because there is no vi environment). In my case the file did not exist, so the contents were empty.

Replace (or add) the following and save

 

deb http://archive.ubuntu.com/ubuntu/ trusty main restricted universe multiverse

deb http://archive.ubuntu.com/ubuntu/ trusty-security main restricted universe multiverse

deb http://archive.ubuntu.com/ubuntu/ trusty-updates main restricted universe multiverse

deb http://archive.ubuntu.com/ubuntu/ trusty-proposed main restricted universe multiverse

deb http://archive.ubuntu.com/ubuntu/ trusty-backports main restricted universe multiverse

deb-src http://archive.ubuntu.com/ubuntu/ trusty main restricted universe multiverse

deb-src http://archive.ubuntu.com/ubuntu/ trusty-security main restricted universe multiverse

deb-src http://archive.ubuntu.com/ubuntu/ trusty-updates main restricted universe multiverse

deb-src http://archive.ubuntu.com/ubuntu/ trusty-proposed main restricted universe multiverse

deb-src http://archive.ubuntu.com/ubuntu/ trusty-backports main restricted universe multiverse
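As an aside, the lines above can also be written without any editor at all, using a here-document. A sketch (the target path is a variable here so the snippet can be tried anywhere; on the real system it would be /etc/apt/sources.list, written with root privileges):

```shell
#!/bin/sh
# Demo target path; on the real system use /etc/apt/sources.list (via sudo tee)
LIST=./sources.list.demo

cat > "$LIST" <<'EOF'
deb http://archive.ubuntu.com/ubuntu/ trusty main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu/ trusty-updates main restricted universe multiverse
EOF

grep -c '^deb ' "$LIST"    # prints 2
```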

 

Then update and install again:

sudo apt-get update

sudo apt-get install vim

Done!

YARN Restart Issue: RM Restart/RM HA/Timeline Server/NM Restart

ResourceManger Restart

ResourceManager is responsible for resource management and application scheduling, and it is the core component of YARN, so it is a potential single point of failure. ResourceManager Restart is a feature that lets the YARN cluster keep working across an RM restart, so that an RM failure is not visible to users.

ResourceManager Restart feature is divided into two phases:

ResourceManager Restart Phase 1 (Non-work-preserving RM restart, since Hadoop 2.4.0): Enhance RM to persist application/attempt state and other credential information in a pluggable state-store. RM reloads this information from the state-store upon restart and re-kicks the previously running applications. Users are not required to re-submit the applications.

ResourceManager Restart Phase 2 (Work-preserving RM restart, since Hadoop 2.6.0): Focus on re-constructing the running state of the ResourceManager by combining the container statuses from NodeManagers and container requests from ApplicationMasters upon restart. The key difference from phase 1 is that previously running applications are not killed after the RM restarts, so applications do not lose their work because of an RM outage.

ResourceManager High Availability

Before Hadoop 2.4.0, ResourceManager had a single-point-of-failure problem. YARN HA (High Availability) uses an active/standby architecture: at any time there is only one active RM and one or more standby RMs. In effect the ResourceManager is replicated, so that active and standby RMs coexist in the system.

Manual transitions and failover

Run the yarn rmadmin command (e.g. yarn rmadmin -transitionToActive rm1 or yarn rmadmin -transitionToStandby rm1).

Automatic failover

When the active RM fails or stops responding, a new active RM is elected through the ZooKeeper-based ActiveStandbyElector (which is embedded in the RM, so there is no need to run a separate ZKFC daemon).

Client, ApplicationMaster and NodeManager on RM failover

If there are multiple RMs, the yarn-site.xml file on all nodes needs to list all of them. Clients, AMs and NMs connect to the RMs in a round-robin fashion until they hit the active RM; if the active RM fails, they resume the round-robin search to find the new active RM.
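A minimal yarn-site.xml sketch for a two-RM HA setup (the hostnames, cluster id and ZooKeeper quorum are placeholders):

```xml
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>cluster1</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>master1.example.com</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>master2.example.com</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
```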

The YARN Timeline Server

YARN handles the storage and retrieval of applications' current and historical information through the Timeline Server. The Timeline Server has two responsibilities:

Persisting Application Specific Information

The collection and retrieval of information that is specific to an app or framework. For example, MapReduce framework information can include the number of map tasks, reduce tasks, counters, etc. Users can publish app-specific information through the TimelineClient, either from the ApplicationMaster or from an app container.

Persisting Generic Information about Completed Applications

Generic information is app-level information, such as queue name, user info, etc. Generic data is published to the timeline store by YARN's RM and is used to display completed apps in the web UI.

NodeManager Restart

The NodeManager restart mechanism preserves the active containers on the node where the NodeManager runs. While handling container-management requests, the NM stores the necessary state in a local state store; when the NM restarts, it first loads the state for the various subsystems and then lets the subsystems recover using the loaded state.

Enabling NM Restart:

(1) Set yarn.nodemanager.recovery.enabled to true in conf/yarn-site.xml (the default is false).

(2) Configure a path to the local file-system directory where the NodeManager can save its run state.

(3) Configure a valid RPC address for the NodeManager.

(4) Auxiliary services.
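Steps (1) through (3) correspond to yarn-site.xml entries like the following sketch (the recovery directory and the port are placeholders):

```xml
<property>
  <name>yarn.nodemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.recovery.dir</name>
  <value>/var/lib/hadoop-yarn/nm-recovery</value>
</property>
<!-- A fixed (non-ephemeral) RPC port, so the NM comes back at the same address -->
<property>
  <name>yarn.nodemanager.address</name>
  <value>0.0.0.0:45454</value>
</property>
```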

 

[Solved] Git Error: “Another git process seems to be running in this repository…”

Git shows: Another git process seems to be running in this repository, e.g. an editor opened by 'git commit'. Please make sure all processes are terminated then try again. If it still fails, a git process may have crashed in this repository earlier: remove the file manually to continue.

 

Cause Analysis:

Windows uses a resource-locking mechanism for process synchronization and mutual exclusion. The guess is that some process locked a resource and then crashed before it could release the lock, leaving other processes unable to access that resource. Here, Git crashed during use, and the lock it held was never released.

solution:

Go into the .git directory under the project folder (show hidden folders first) and delete the index.lock file, or simply run rm .git/index.lock.
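The removal can be scripted defensively: delete the lock only when no git process is still running. A sketch (pgrep availability is assumed; the demo operates on a scratch directory, not a real repository):

```shell
#!/bin/sh
# remove_stale_lock REPO_DIR: delete .git/index.lock only if no git process is running
remove_stale_lock() {
    repo="$1"
    if pgrep -x git >/dev/null 2>&1; then
        echo "a git process is still running; not removing the lock"
        return 1
    fi
    rm -f "$repo/.git/index.lock"
    echo "stale lock removed (if it existed)"
}

# Demo on a scratch directory with a fake stale lock
mkdir -p demo-repo/.git
: > demo-repo/.git/index.lock
remove_stale_lock demo-repo
```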

[Solved] Git stash pop Error: Another git process seems to be running in this repository……

 

summary

While developing new features on the dev branch, an urgent bug appeared on the master branch. The git stash command can temporarily shelve the current dev branch's changes so you can switch to master and fix the bug. Today, after switching back to the dev branch and restoring the stashed code, the code could not be committed: a git process conflict occurred. The specific error is as follows:

Another git process seems to be running in this repository, e.g. an editor opened by 'git commit'. Please make sure all processes
are terminated then try again. If it still fails, a git process
may have crashed in this repository earlier:
remove the file manually to continue.

After researching the error message, we found an effective solution.

solutions

Main idea: a git process is already open, e.g. in an editor. Make sure all processes have finished before trying again. If it still fails, git may have crashed during the last run; remove the lock file manually before continuing.

1. Nature of the problem: 

Windows has a lock mechanism for process management. Normally the sequence is: process runs => resource locked => process ends => resource unlocked. I probably closed git accidentally while switching branches, making it crash, so the locked index.lock was never unlocked, which caused the conflict.

2. Solution: 

Open the project folder and find the index.lock file inside the .git directory; that is the file the error message asks you to remove manually. After deleting it, go back to git and continue running commands, which resolves the git process conflict.

summary

When running git commands, pay attention to the state of the git process. Above all, operate git carefully: even with urgent tasks at hand, stay calm while solving them; more haste, less speed.

Linux Error: -bash: !“: event not found [How to Solve]

When a command containing an exclamation mark is executed in a Linux environment, the shell returns:

-bash: !xxxxxxxxx: event not found

The reason:
the command contains an exclamation mark '!', which bash interprets as history expansion rather than as part of the command. The fix is to escape the '!' with a backslash '\', or to wrap the text in single quotes; similar problems in other shell commands can be solved the same way.
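Note that history expansion is only active in interactive shells, which is why the same command works inside a script; both fixes can still be demonstrated there. A sketch:

```shell
#!/bin/sh
# Escape the '!' with a backslash (outside quotes)...
a=$(echo Hello\!)
# ...or protect it with single quotes
b=$(echo 'Hello!')
printf '%s\n' "$a"    # prints Hello!
printf '%s\n' "$b"    # prints Hello!
```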

In VMware 10, a CentOS 7 guest mounting a shared folder of the Windows host prompts: error: cannot mount filesystem: No such device

1. Set sharing permissions


2. Install VMware tools


  • Click the virtual machine menu
  • Click "Install VMware Tools"
  • Copy VMwareTools-9.6.2-1688356.tar.gz from the /run/media/zhaojq/VMware\ Tools directory to the home directory
  • Decompressing it produces vmware-tools-distrib:

[zhaojq@localhost vmware-tools-distrib]$ ls
bin doc etc FILES INSTALL installer lib vmware-install.pl

Run ./vmware-install.pl

[zhaojq@localhost vmware-tools-distrib]$ ./vmware-install.pl 

Keep pressing Enter, except at the following prompts:

The path "" is not a valid path to the gcc binary.
Would you like to change it? [yes] no    (type no)
The path "" is not a valid path to the 3.10.0-514.26.2.el7.x86_64 kernel
headers.
Would you like to change it? [yes] no    (type no)

After a successful installation it prints:

Enjoy,
--the VMware Team

3. Mount the shared folder of the Windows host

Change to the /mnt/hgfs directory:

[zhaojq@localhost ~]$ cd /mnt/hgfs/
[zhaojq@localhost hgfs]$ pwd
/mnt/hgfs

Use the vmware-hgfsclient command to view the currently shared directories:

[zhaojq@localhost hgfs]$ vmware-hgfsclient
E

Mounting with the vmhgfs type fails:

[zhaojq@localhost hgfs]$ sudo mount -t vmhgfs .host:/E /mnt/hgfs
Error: cannot mount filesystem: No such device

 

Use vmhgfs-fuse instead, which requires installing a toolkit:

[zhaojq@localhost hgfs]$ sudo yum install open-vm-tools-devel -y
[zhaojq@localhost hgfs]$ vmhgfs-fuse .host:/E /mnt/hgfs

 

4. View the mount

Note: Root privileges are required to view.

Disk E of the Windows host


Mount situation under Centos virtual machine

[root@localhost hgfs]# ls
jashkenas-coffeescript-1.12.6-0-gf0e9837.tar.gz  LeaRun agile background development framework_6.3.4  $RECYCLE.BIN  System Volume Information
LeaRun_6.3.4.zip  node-v6.11.1-linux-x64.tar.xz  redis-3.2.9.tar.gz

 

Mounted successfully