
The long long journey...

Service Listening Address

Problem Description

Set up a MySQL server on ECS; the firewall and security group are both configured correctly, but the server cannot be accessed remotely.

Troubleshooting

Check Connectivity

Scanned the server ports from a local computer; results as follows:

```
⚡yangz ❯❯ nmap -sS MD
Starting Nmap 7.93 ( https://nmap.org ) at 2023-01-26 19:23 China Standard Time
Nmap scan report for MD
Host is up (0.045s latency).
Not shown: 996 filtered tcp ports (no-response)
PORT     STATE  SERVICE
22/tcp   open   ssh
80/tcp   open   http
443/tcp  closed https
3306/tcp closed mysql
```

The scan reports port 3306 as closed: the port is reachable over the network, but nothing is accepting connections on it. So the network path is fine and the problem lies on the server itself.

Check Port

Checked all listening ports on the ECS:

```
root@minedl:~# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address   Foreign Address  State   PID/Program name
tcp   0      0      127.0.0.1:3306  0.0.0.0:*        LISTEN  24735/mysqld
```

MySQL is bound to the localhost loopback address, so it cannot serve external clients. That is where the problem lies.

Binding Address Configuration

MySQL's default configuration listens on 127.0.0.1, which is unreachable from outside. To support remote access, change the binding address to 0.0.0.0 in the configuration file:

```
vim /etc/mysql/mysql.conf.d/mysqld.cnf
```

Change bind-address to 0.0.0.0. Note that you cannot just comment the line out; that leads to the second outcome below:

```
tcp  0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 24735/mysqld  # only listens on localhost
tcp6 0 0 :::3306        :::*      LISTEN 24794/mysqld  # only listens on IPv6, not IPv4
tcp  0 0 0.
```
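A minimal sketch of the relevant section of mysqld.cnf after the change (only the bind-address line matters here; other settings in the file stay as shipped):

```ini
[mysqld]
# Listen on all IPv4 interfaces instead of only loopback.
bind-address = 0.0.0.0
```

After saving, you typically also need to restart the MySQL service and make sure the MySQL account in use is allowed to connect from remote hosts; neither step is shown in the excerpt above.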

HSV Debugging Tool

Color segmentation in OpenCV is usually done in the HSV color space, and picking thresholds by eye with a color picker rarely yields accurate bounds; when an image contains multiple targets in multiple colors, the extraction work becomes very tedious. For convenience, I developed a small tool: it imports an image, lets you set the upper and lower limits of the three HSV channels by dragging six trackbars, and displays the result in real time on mask and result layers, which alleviates the problems above. Just by dragging the trackbars, you can quickly locate the HSV range of multiple targets, down to a specific value. The code is as follows:

```python
import cv2
import numpy as np

path = r'D:\PlayGround\CVP\return.png'  # Location of the image; the only attribute to modify when using

# Callback for the trackbars: read the current position of each slider
def empty(a):
    h_min = cv2.getTrackbarPos("Hue Min", "TrackBars")
    h_max = cv2.getTrackbarPos("Hue Max", "TrackBars")
    s_min = cv2.getTrackbarPos("Sat Min", "TrackBars")
    s_max = cv2.getTrackbarPos("Sat Max", "TrackBars")
    v_min = cv2.getTrackbarPos("Val Min", "TrackBars")
    v_max = cv2.getTrackbarPos("Val Max", "TrackBars")
    print(h_min, h_max, s_min, s_max, v_min, v_max)
    return h_min, h_max, s_min, s_max, v_min, v_max

# Create a window and place 6 sliders
cv2.
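The excerpt cuts off before the window setup, but the core operation the six sliders drive is cv2.inRange, which keeps only the pixels whose H, S, and V values all fall between the chosen lower and upper bounds. A minimal NumPy sketch of that masking step (the function name is mine, not from the tool):

```python
import numpy as np

def hsv_in_range(hsv_img, lower, upper):
    """Equivalent of cv2.inRange: 255 where every channel of the HSV
    pixel lies within [lower, upper], 0 elsewhere."""
    lower = np.asarray(lower, dtype=np.int32)
    upper = np.asarray(upper, dtype=np.int32)
    inside = np.all((hsv_img >= lower) & (hsv_img <= upper), axis=-1)
    return inside.astype(np.uint8) * 255

# A 1x2 "image": the first pixel falls inside the range, the second outside.
img = np.array([[[30, 200, 200], [120, 50, 50]]], dtype=np.uint8)
mask = hsv_in_range(img, (20, 100, 100), (40, 255, 255))
# mask is 255 for the first pixel and 0 for the second
print(mask)
```

In the real tool, the bounds come from the trackbar callback instead of being hard-coded, and the mask is shown in its own window alongside the masked result.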

LaTeX Formula Syntax

For practicing LaTeX syntax while testing the rendering of the KaTeX engine.

Greek Letters and Split Formulas

$$\begin{split} \alpha \qquad alpha \\ \beta \qquad beta \\ \gamma \qquad gamma \\ \delta \qquad delta \\ \epsilon \qquad epsilon \\ \varepsilon \qquad varepsilon \\ \zeta \qquad zeta \\ \eta \qquad eta \\ \theta \qquad theta \end{split}$$

Matrices

Add the prefix b, v, or p in front of matrix for square brackets, vertical bars, or parentheses; plain matrix has no brackets.

$$\begin{matrix}1\quad 0 \\ 0\quad 1\end{matrix}$$

Combine with dots:

$$\begin{bmatrix} 1 & 0 & 0 & \cdots \\ 0 & 1 & 0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$

Vectors

Use \vec for single letters and \overrightarrow for multiple letters; there is also a plain arrow $\rightarrow$.

$$\vec{a} \cdot \overrightarrow{AB}$$

Horizontal Braces

$$\overbrace{x_1+x_2+\cdots+x_i}^{\text{n items}}$$

$$\underbrace{a+b+\cdots+z}_{\text{26 letters of the alphabet}}$$

Underline, Overline, and Hats

First, the two standard notations, which may not preview correctly inside the editor:

$$\underline{a+b+c}$$

$$\overline{x+y+z}$$

The legacy TeX \over command is infix: it sits between the numerator and the denominator inside one group, rendering as a fraction:

$${over \over behind}$$

$$x\quad\bar x \quad \hat x \quad \tilde x$$

Square Roots

$$\sqrt{x}+\sqrt[3]{y_{i}}$$

Fractions

A distinctive syntax: the command indicator comes first, then numerator and denominator, as \frac{x}{y}:

$$\frac{x}{y}$$

Subscripts and Superscripts

$$x^{2/3}\tag{1.1}$$

$$x_{i+1}$$

Multiplication

$$y=x\cdot z$$

Inequalities

Standard notation:

$$1\neq2$$

Abbreviated congruence notation with \bmod (not sure whether every engine renders it):

$$\begin{cases} 1\equiv 1 \\ 1 \quad x\bmod 2 \end{cases}$$

Other Symbols

- \prod: $\prod$
- \sim: $\sim$
- \mathbb: $\mathbb{E}$
- \prime: $x\prime$

Mathematical Formulas

FixIt supports mathematical formulas based on [$\KaTeX$][katex].
In [params.math] under the theme configuration, set the property enable = true, and set the property math: true in the front matter of the article to enable automatic rendering of mathematical formulas.
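As a concrete sketch of the two switches just described (key names as given above; file layout assumed to follow the usual Hugo conventions):

```toml
# In the site/theme configuration:
[params.math]
  enable = true
```

and in the front matter of each article that needs rendering, `math: true`.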

HDFS WebUI Access Issues

Problem Description

A Hadoop cluster composed of 3 datanodes, 1 namenode, and 1 secondary namenode. Entering the following command to check the status, everything appears normal:

```
(base) root@node4:~# hdfs dfsadmin -report
Configured Capacity: 168292061184 (156.73 GB)
Present Capacity: 128058142720 (119.26 GB)
DFS Remaining: 128058011648 (119.26 GB)
DFS Used: 131072 (128 KB)
DFS Used%: 0.00%
Replicated Blocks:
	Under replicated blocks: 0
	Blocks with corrupt replicas: 0
	Missing blocks: 0
	Missing blocks (with replication factor 1): 0
	Low redundancy blocks with highest priority to recover: 0
	Pending deletion blocks: 0
Erasure Coded Block Groups:
	Low redundancy block groups: 0
	Block groups with corrupt internal blocks: 0
	Missing block groups: 0
	Low redundancy blocks with highest priority to recover: 0
	Pending deletion blocks: 0
-------------------------------------------------
Live datanodes (3):
```

Yes, it seems there are no problems. However, the HDFS web page on port 50070 cannot be accessed, nor can the HDFS file port 9000.

Investigation

After ruling out firewall and security-group issues, check the usage of all service ports:

```
(base) root@node4:~# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address State  PID/Program name
tcp   0      0      127.0.0.53:53      0.0.0.0:*       LISTEN 516/systemd-resolve
tcp   0      0      0.0.0.0:22         0.0.0.0:*       LISTEN 1460/sshd: /usr/sbi
tcp   0      0      192.168.0.165:8088 0.0.0.0:*       LISTEN 111907/java
tcp   0      0      127.0.0.1:6010     0.0.0.0:*       LISTEN 107217/sshd: root@p
tcp   0      0      127.
```
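The excerpt cuts off mid-listing, but by analogy with the service-listening-address post above, the usual culprit in this situation is the NameNode ports ending up bound to loopback. One common fix, sketched here as an assumption since the excerpt does not reach the resolution, is to bind the web UI address explicitly in hdfs-site.xml (the property name is the standard Hadoop 2.x one; the value is illustrative):

```xml
<!-- hdfs-site.xml: bind the NameNode web UI to all interfaces
     instead of whatever the hostname resolves to. -->
<property>
  <name>dfs.namenode.http-address</name>
  <value>0.0.0.0:50070</value>
</property>
```

For port 9000, the host in fs.defaultFS (core-site.xml) likewise needs to resolve to the machine's LAN address rather than 127.0.0.1 in /etc/hosts.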

Practice of Rewriting Watermark in Flink

Apache Flink is a powerful stream-processing framework for handling real-time data streams. A key component in real-time processing is the Watermark. Watermarks are special timestamps used in event-time stream processing to address out-of-order events and late data. Sometimes, however, we need to customize watermark generation to fit specific business logic. This article explores how to rewrite Watermarks in Flink and provides some practical tips and examples.

What is a Watermark

In Flink, a Watermark is a marker of progress in event time: it asserts that no more events with timestamps earlier than the watermark are expected to arrive. Flink uses Watermarks to decide when event-time-based window operations can be triggered. If an event's timestamp precedes the current Watermark, the event is considered "late" and may be discarded or routed to a special side-output stream.

Why Rewrite

In practice, the watermarks of the original stream may not meet our needs. For instance, we may need to adjust the watermark generation strategy according to business logic, or handle special circumstances such as data delays or system failures. In those cases we need to customize the watermark generation logic.

How to Rewrite Watermarks in Flink

In Flink, we customize watermark generation by implementing the WatermarkStrategy interface. Generally, we need to do the following four things:

1. Define the watermark strategy: create a class that implements WatermarkStrategy and implement the createTimestampAssigner and createWatermarkGenerator methods.
2. Implement TimestampAssigner: in createTimestampAssigner, return a TimestampAssigner instance responsible for assigning a timestamp to each event.
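Flink's API for this is Java (the WatermarkStrategy interface named above), but the logic that its built-in bounded-out-of-orderness WatermarkGenerator implements is compact enough to sketch language-neutrally. The class below is a conceptual Python sketch, not Flink code: it tracks the maximum event timestamp seen and emits a watermark that trails it by a fixed bound, mirroring Flink's BoundedOutOfOrdernessWatermarks (the -1 makes the watermark the latest timestamp for which all events should already have arrived):

```python
class BoundedOutOfOrdernessWatermarks:
    """Conceptual sketch of a bounded-out-of-orderness watermark generator."""

    def __init__(self, max_out_of_orderness_ms):
        self.max_out_of_orderness = max_out_of_orderness_ms
        self.max_timestamp = float("-inf")

    def on_event(self, event_timestamp_ms):
        # Track the largest event timestamp seen so far.
        self.max_timestamp = max(self.max_timestamp, event_timestamp_ms)

    def current_watermark(self):
        # The watermark trails the max timestamp by the allowed lateness.
        return self.max_timestamp - self.max_out_of_orderness - 1

    def is_late(self, event_timestamp_ms):
        # An event at or before the current watermark counts as late.
        return event_timestamp_ms <= self.current_watermark()

# Events arrive out of order; lateness bound is 5000 ms.
gen = BoundedOutOfOrdernessWatermarks(5000)
for ts in (1000, 7000, 4000):
    gen.on_event(ts)
print(gen.current_watermark())  # 7000 - 5000 - 1 = 1999
```

Rewriting the watermark then amounts to replacing this policy: a custom generator can, say, pause advancement during a detected outage or derive the bound from observed delays, as long as it emits the same kind of monotone watermark.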

Big Data Architecture Course Review Notes

Introduction

The requirements of big data systems include data requirements, functional requirements, performance requirements (high performance, high availability, high scalability, high fault tolerance, security, etc.), and computational-scenario requirements.

Goal requirements of distributed systems/clusters and big data processing: high performance, high availability, fault tolerance, and scalability.

- High performance has three metrics: response time (latency), throughput, and resource utilization.
- High availability metrics: MTTF and MTTR, with availability = MTTF / (MTTF + MTTR).

The relationship between big data and cloud computing:

- Cloud computing can provide abundant computing resources for big data processing.
- Big data is a typical application of cloud computing services.
- Big data can also be processed without using cloud computing.

Typical big data computation scenarios: batch processing, stream computing, and interactive querying.

- Static data is bounded, persistently stored, and large in volume, which suits batch processing.
- Stream data is unbounded and continuously generated; it must be processed promptly over data windows, and its total extent cannot be known in advance.

Overview of Cloud Computing

Definition of Cloud Computing: Cloud computing is a business computing model. It distributes computing tasks across a resource pool composed of a large number of computers, allowing application systems to obtain computing power, storage space, and information services as needed. It provides dynamically scalable, inexpensive computing services on demand over the network, and represents a universally applicable mindset and model for resource management. Cloud computing likens computing resources to omnipresent clouds; it is the result of the development and convergence of virtualization, distributed computing, utility computing, load balancing, parallel computing, network storage, hot-standby redundancy, and related technologies.
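The availability formula in the notes is easy to sanity-check with a small calculation (the MTTF/MTTR numbers below are illustrative, not from the notes):

```python
def availability(mttf_hours, mttr_hours):
    """Availability = MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A node that runs 999 hours between failures and takes 1 hour
# to repair is available 99.9% of the time ("three nines").
print(availability(999, 1))  # → 0.999
```

The formula makes the trade-off explicit: availability improves either by failing less often (raising MTTF) or by recovering faster (lowering MTTR).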
Characteristics of Cloud Computing

- Unified management of virtualized, pooled resources
- Massive scale, high availability, high scalability
- Elastic, on-demand, self-service provisioning
- Ubiquitous access, accurate billing, low cost

Three Service Models

- Infrastructure as a Service (IaaS): provides computing resource services such as servers, storage, and networking.