initial commit

This commit is contained in:
PrinOrange
2023-12-25 17:21:39 +08:00
commit 0bd1089d74
94 changed files with 18648 additions and 0 deletions

@@ -0,0 +1,118 @@
---
title: "Some basic bash-script code blocks"
time: "2022-11-29"
tags: ["linux"]
summary: "Some basic bash-script code examples. They might be helpful when writing temporary work scripts."
---
### Assignment and Substitution
```bash
a=375
hello=$a
```
### Variables
1. Built-in Variables
For example `$HOME`, `$PWD`, ...; for more info, see [environ(7)](https://man7.org/linux/man-pages/man7/environ.7.html)
2. Positional Parameters
`$0` is the script name and `$1` `$2` `$3` ... are its arguments; `$@` expands to all arguments at once
3. Special Parameters
`$?` # exit status of the last command, function, or the script itself
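A short runnable snippet illustrating these parameters; `set --` is used here to simulate command-line arguments:

```shell
# Simulate command-line arguments with `set --`, then inspect the
# positional and special parameters.
set -- alpha beta gamma      # as if the script were run with 3 arguments
echo "script name : $0"
echo "arg count   : $#"
echo "first arg   : $1"
echo "all args    : $@"
ls / > /dev/null             # run any command...
status=$?                    # ...and capture its exit status
echo "exit status : $status"
```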
### Branches
```bash
if [ condition1 ];then
command_series1
elif [ condition2 ];then
command_series2
else
default_command_series3
fi
```
### Loops
### For over a range
```bash
for arg in `seq 10`; do
echo $arg
done
```
### C-style for
```bash
LIMIT=10
for ((a=1; a<=LIMIT; a++)); do
echo "$a "
done
```
### while
```bash
LIMIT=10
a=1
while ((a<=LIMIT)); do
echo "$a "
((a += 1))
done
```
### IO
```bash
command < input-file > output-file # overwrite
command >> output-file # appending
```
### Function
```bash
# define a function
function fun_name(){
command...
}
## or
fun_name(){ # arg1 arg2 arg3
command...
}
# apply a function
fun_name $arg1 $arg2 $arg3
# dereference
fun_name(){ # arg1
eval "$1=hello"
}
fun_name arg1
## the above code block is equivalent to
arg1=hello
```
### Debugging
1. Make good use of sh(1)
for example:
`sh -n script`: check the syntax without executing the script
`sh -v script`: echo each line of the script as it is read
`sh -x script`: echo each command, after expansion, as it is executed
2. Use `echo` to print intermediate values
3. Use `trap` to run diagnostic or cleanup code on signals and errors
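A small sketch of `trap` in practice: cleaning up a temporary file on exit, and reporting the failing line on errors (the `ERR` trap is bash-specific):

```shell
# Trap-based debugging/cleanup: the EXIT trap always runs, even if the
# script dies early; the ERR trap reports the line of a failing command.
tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"' EXIT                       # cleanup on any exit
trap 'echo "command failed near line $LINENO" >&2' ERR
echo "intermediate result" > "$tmpfile"
cat "$tmpfile"
```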
### Parallel
Use GNU parallel to run independent jobs concurrently.
### Script with Style
1. Comment your code
2. Avoid using magic numbers
3. Use exit codes in a systematic and meaningful way
4. Use standardized parameter flags for script invocation

@@ -0,0 +1,38 @@
---
title: "Operating System Notes: Two Methods of Multiprocessor Scheduling"
time: "2022-12-05"
tags: ["OS"]
summary: "An introduction to two methods of multiprocessor scheduling: symmetric multiprocessing and asymmetric multiprocessing."
---
There are two main approaches to multiprocessor scheduling.
The first is **asymmetric multiprocessing (AMP)**: one processor in the cluster, the master, runs the code for all system activities (I/O, scheduling decisions, etc.), while the other processors execute only user code. This approach is relatively simple, and because only the master executes system code, it reduces the need for data sharing.
The second is **symmetric multiprocessing (SMP)**: every processor schedules itself. Depending on how the ready queue is organized, there are two variants:
First, each processor keeps a private ready queue of processes.
Second, all processes are placed in a single common ready queue.
Both variants must uphold one invariant: scheduling has to be done carefully, with a precise algorithm, so that each processor checks the ready queue and dequeues a process before executing it. When multiple processors access a common queue, it must be guaranteed that no process vanishes from the queue and that no two processors execute the same process; otherwise processes may be lost or run twice, causing unpredictable errors.
Modern operating systems such as Linux, Windows, and macOS all use the SMP approach.
## Processor affinity
Operating systems using SMP add processor affinity to keep process handling efficient. While a process runs on a processor, its working data accumulates in that processor's cache, so repeated accesses hit the cache, avoid redundant work, and improve efficiency. When the process migrates to a different processor, however, the cache contents on the original processor must be invalidated and the cache on the new processor rebuilt; the cost of this rises dramatically and efficiency drops. The operating system should therefore try to keep each process on a single processor and avoid repeated migrations between processors. This measure is called processor affinity: simply put, a process has an affinity for its processor.
Affinity comes in two strengths. One is **soft affinity**: the operating system tries to keep the process on one processor but cannot guarantee it will never migrate. The other is **hard affinity**, which binds a process to a specific processor (or set of processors) and forbids migration. Linux, for example, supports a hard affinity policy through the `sched_setaffinity` system call.
## Load balancing
In an SMP system, if load is spread unevenly across the processor cluster, some processes can be moved from busy processors to idle ones so that work is balanced and processor utilization improves. This is the load-balancing strategy.
As described above, the ready queue can be organized either as one common queue or as a private queue per processor. **With a common ready queue, load balancing is usually unnecessary**, because an idle processor can pull the next process from the common queue at any time. Most modern operating systems, however, maintain private ready queues per processor, so load balancing is needed: the load on each processor is checked periodically, and processes are migrated from overloaded processors to underloaded ones. When a busy processor pushes a process out to an idle one, this is called **push migration**; when an idle processor pulls a process from an overloaded one, this is called **pull migration**.
In fact, because it migrates processes, load balancing cancels out the benefits of processor affinity, so the two mechanisms work against each other to some degree. This calls for a carefully designed scheduling algorithm.
## Simultaneous multithreading
Simultaneous multithreading (SMT), known commercially as Intel's **Hyper-Threading Technology**, is implemented in processor hardware and is not an operating-system technique. The idea is to present one physical processor as several logical processors, each of which runs its own thread of work. If the operating system can schedule processes onto these logical processors, it can design its scheduling algorithm to make full use of processor resources and achieve greater performance.

@@ -0,0 +1,68 @@
---
title: "Resource Lock in Concurrency"
time: "2023-04-15"
tags: ["OS"]
summary: "Analysis and usage scenarios of spin locks, optimistic locks, pessimistic locks, read-write locks, and mutex locks."
---
## Spin lock
### Concept
**Spin Lock** is a lock mechanism based on busy waiting. To acquire the lock, a thread checks the lock's status in a loop; if the lock is occupied, it keeps looping until the lock is released. A spin lock is a non-blocking lock: unlike a mutex, it does not put the thread to sleep but busy-waits until the lock is acquired.
Spin locks are mainly used to protect critical sections, because they provide efficient thread synchronization on multi-core CPUs. Especially when contention for the critical section is light, spin locks reduce the cost of thread context switching and thereby improve program performance.
The implementation of a spin lock is very simple. An integer variable is usually used to represent the status of the lock. When the lock is occupied, the value of the variable is 1, and when the lock is available, the value of the variable is 0. During the process of acquiring the lock, the thread will continuously check the lock status in a loop until it finds that the lock is available, then sets the lock status to 1 and returns success. When the lock is released, the thread resets the lock's status to 0, allowing other threads to acquire the lock.
It should be noted that although spin locks reduce the cost of thread context switching, under heavy contention for the critical section their efficiency degrades, because threads busy-wait and waste CPU resources.
A spin lock busy-waits while the resource is occupied, looping until the resource becomes available. On multi-core CPUs, spin locks can be implemented with hardware atomic instructions such as CAS (Compare-And-Swap), avoiding the overhead of lock contention and thread context switching. On a single-core CPU a spin lock tends to be inefficient, because the spinning thread monopolizes the CPU and prevents other threads from running. Therefore, in practical applications, the appropriate lock mechanism must be chosen for the specific situation.
### Usage
Spin lock is a lock mechanism based on busy waiting. It continuously checks whether the shared resource is occupied while waiting for the shared resource to be released. If the shared resource is already occupied, wait until the shared resource is released; if the shared resource is not occupied, lock and access the shared resource. Spin lock is suitable for the following scenarios:
1. The shared resource is occupied only briefly: when the occupation and waiting times are short, a spin lock avoids the overhead of putting threads to sleep and switching contexts, thereby improving program performance.
2. Few threads access the shared resource: spin locks suit light contention, because with many contending threads their busy waiting consumes a lot of CPU and program performance drops.
3. Hardware support: busy waiting relies on hardware atomic instructions, so spin locks suit multi-processor or multi-core systems. Note that spin locks are not suitable for long waits: prolonged busy waiting consumes large amounts of CPU and degrades system performance. For long waits, another lock mechanism should be used, such as a mutex or a read-write lock.
## Read-write lock
### Concept
**Read-Write Lock**, also known as a shared-exclusive lock, is a special lock mechanism that allows multiple threads to read a shared resource at the same time but allows only one thread at a time to write to it. Read-write locks can effectively improve a program's concurrency; especially when reads are much more frequent than writes, they reduce lock contention.
The implementation of read-write lock is very simple, usually using a counter and a mutex lock to represent the lock status. When a thread wants to read a shared resource, it will first try to acquire a read lock. If no thread currently holds a write lock, the read operation can continue. If a thread holds a write lock, the read operation must wait for the write lock to be released. When a thread wants to write to a shared resource, it will first try to obtain a write lock. If no thread currently holds a read lock or write lock, the write operation can continue. If a thread holds a read or write lock, the write operation must wait for all read and write locks to be released.
It should be noted that although read-write locks can improve the concurrency performance of the program, the advantages of read-write locks may be weakened when write operations are frequent. Because each write operation must wait for both the read lock and the write lock to be released, the read operation will also be blocked, affecting the performance of the program. Therefore, in practical applications, it is necessary to select an appropriate lock mechanism according to the specific situation.
### Usage
Read-write lock is a special lock mechanism that allows multiple threads to read a shared resource at the same time but requires exclusive access when writing to it. Read-write locks are suitable for the following scenarios:
1. There are far more read operations than write operations: When there are far more read operations than write operations, read-write locks can be used to improve the concurrency performance of the program. Read-write locks allow multiple threads to read shared resources at the same time, thereby reducing mutual exclusion competition between threads and improving the concurrency performance of the program.
2. The reading operation of shared resources is time-consuming: When the reading operation of shared resources is time-consuming, read-write locks can be used to improve the performance of the program. Read-write locks allow multiple threads to read shared resources at the same time, thereby reducing mutual exclusion competition between threads and the cost of thread context switching, and improving program performance.
3. There are few write operations on the shared resource: when writes are rare, read-write locks improve concurrency. Writes must be exclusive, but multiple threads may read simultaneously, reducing mutual-exclusion contention between threads. Note that read-write locks suit read-heavy workloads; when the read/write ratio is close to even, a read-write lock may perform no better than a mutex. Also choose the lock granularity carefully: too fine or too coarse a granularity hurts program performance.
## Mutex lock
### Concept
**Mutex (mutual exclusion lock)** is the most basic lock mechanism; it guarantees that only one thread can access a shared resource at a time. With a mutex, once one thread acquires the lock, other threads must wait for it to be released before they can acquire it. This avoids the data races and inconsistency caused by multiple threads modifying a shared resource simultaneously.
Mutex locks are usually implemented using two operations: lock and unlock. When a thread wants to access a shared resource, it needs to first try to acquire the lock. If no other thread currently holds the lock, the thread can obtain the lock and access the shared resource. If another thread holds the lock, that thread must wait for the lock to be released. After the access is complete, the thread needs to release the lock so that other threads can obtain the lock and access the shared resource.
It should be noted that although a mutex lock guarantees that only one thread accesses the shared resource at a time, frequent locking and unlocking degrades program performance, because lock and unlock operations can incur the overhead of system calls and kernel transitions. Therefore, in practice, mutex locks must be used with care to avoid excessive lock contention and lock waiting.
### Usage
Mutex lock is a common lock mechanism that ensures that only one thread can access shared resources at the same time, thereby avoiding mutual exclusion competition between threads. Mutex locks are suitable for the following scenarios:
1. The shared resource is occupied only briefly: when the occupation and waiting times are short, using a mutex lock avoids the overhead of threads entering the sleep state and of thread context switching, thereby improving program performance.
2. Few threads access the shared resource: mutex locks suit situations with few contending threads, because when many threads contend for the lock, competition becomes fierce and program performance decreases.
3. There is less code in the critical section: Mutex lock is suitable for situations where there is less code in the critical section, because when there is more code in the critical section, the competition for the mutex lock will become fierce, resulting in a decrease in program performance.
4. Strong demand for synchronization: a mutex lock suits cases with strict synchronization requirements, because it guarantees that only one thread accesses the shared resource at a time, avoiding contention and data conflicts between threads. Note that mutex locks in a multi-threaded environment can lead to deadlocks, so lock granularity and locking order must be considered to avoid them. Also pay attention to lock performance: overusing mutex locks degrades program performance.
## Optimistic locking and pessimistic locking
Optimistic locking and pessimistic locking are two different locking mechanisms used to solve data competition problems when accessing shared resources concurrently.
**Pessimistic locking takes the pessimistic view: it assumes that in a concurrent environment a shared resource is likely to be modified by other threads, so a lock must be taken on every access to ensure that only one thread touches the resource at a time.** The classic pessimistic lock is the mutex, which guarantees exclusive access. However, locking and unlocking are expensive **and can easily degrade program performance.**
Optimistic locking takes the optimistic view. **It assumes that in a concurrent environment a shared resource is rarely modified by other threads, so no lock is taken on access; instead, the resource is read first**, and before modifying it, its version number (or similar metadata) is checked. If nothing has changed, the modification is applied and the version updated; otherwise the modification is abandoned and retried. Optimistic locking is represented by lock-free programming and CAS (Compare-And-Swap) operations, which reduce lock contention and context-switching overhead and improve a program's concurrency.
It should be noted that although optimistic locking can improve the concurrency performance of the program, when shared resources are modified frequently concurrently, the number of retries of the optimistic lock may increase, resulting in a decrease in program performance. Therefore, in practical applications, it is necessary to select an appropriate lock mechanism according to the specific situation.

@@ -0,0 +1,630 @@
---
title: "C/C++ Cross-Platform Compile-Macros"
tags: ["C", "C++"]
time: "2023-11-05"
summary: "When compiling cross-platform programs, we inevitably encounter predefined macros such as _WIN32 and __linux__ from the compiler or build environment. They tell the compiler about the current platform."
---
When compiling cross-platform programs, we inevitably encounter predefined macros such as `_WIN32` and `__linux__`, defined by the compiler or the build environment. They tell the compiler some information about the current platform. Note that `_WIN32` and `WIN32` are different macros. Here is a list of them, as a memo.
For example, suppose some code can only be compiled on Unix-like platforms. To fail fast instead of producing confusing errors on other platforms, I can add a macro check for a Unix environment: if the check fails, an error is thrown directly at compile time.
```c
#ifndef __unix__
#error This program should be compiled and run on a UNIX-like platform.
#endif
```
The complete code is as follows:
```c
#ifndef __unix__
#error This program should be compiled and run on a UNIX-like platform.
#endif
#include <stdio.h>
int main()
{
printf("this is unix-like platform");
return 0;
}
```
This code compiles normally in macOS and Linux environments, and an error is reported under Windows.
---
**Below is a list of macros for detecting the platform environment.**
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/AIX_operating_system">AIX</a></u>**
| Type | Macro | Description |
| -------------- | ----------- | ----------------------- |
| Identification | \_AIX | |
| Version | \_AIX'VR' | V = Version, R = Revision |
| Identification | \_\_TOS\_AIX\_\_ | Defined by xlC |
**Example**
If `_AIX` is defined, then the following macros can be used to determine the version. Notice that each macro indicates the mentioned version or higher. For example, if `_AIX43` is defined, then `_AIX41` is also defined.
| AIX Version | Macro |
| ----------- | ------------ |
| 3.2.x | \_AIX3, \_AIX32 |
| 4.1 | \_AIX41 |
| 4.3 | \_AIX43 |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Android_%28operating_system%29">Android</a></u>**
| Type | Macro | Format | Description |
| -------------- | --------------- | ------ | ------------------------------------------------------------ |
| Identification | \_\_ANDROID\_\_ | | |
| Version | \_\_ANDROID\_API\_\_ | V | V = API version. Must be included from \<android/api-level.h\> |
Notice that Android is based on Linux, and that the Linux macros also are defined for Android.
**Example**
| Android Version | \_\_ANDROID\_API\_\_ |
| --------------- | --------------- |
| 1.0 | 1 |
| 1.1 | 2 |
| 1.5 | 3 |
| 1.6 | 4 |
| 2.0 | 5 |
| 2.0.1 | 6 |
| 2.1 | 7 |
| 2.2 | 8 |
| 2.3 | 9 |
| 2.3.3 | 10 |
| 3.0 | 11 |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/UTS_%28Mainframe_UNIX%29">Amdahl UTS</a></u>**
| Type | Macro |
| -------------- | ----- |
| Identification | UTS |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/AmigaOS">AmigaOS</a></u>**
| Type | Macro | Description |
| -------------- | ----------- | ---------------- |
| Identification | AMIGA | |
| Identification | \_\_amigaos\_\_ | Defined by GNU C |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Domain/OS">Apollo AEGIS</a></u>**
| Type | Macro |
| -------------- | ----- |
| Identification | aegis |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Domain/OS">Apollo Domain/OS</a></u>**
| Type | Macro |
| -------------- | ------ |
| Identification | apollo |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Bada_%28operating_system%29">Bada</a></u>**
Based on Nucleus OS.
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/BeOS">BeOS</a></u>**
| Type | Macro |
| -------------- | -------- |
| Identification | \_\_BEOS\_\_ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Bluegene">Blue Gene</a></u>**
| Type | Macro | Description |
| -------------- | ---------------- | -------------------------------------------------- |
| Identification | \_\_bg\_\_ | All Blue Gene systems. Defined by XL C/C++ and GNU C |
| Version | \_\_bgq\_\_ | Blue Gene/Q. Defined by XL C/C++ and GNU C |
| Identification | \_\_THW\_BLUEGENE\_\_ | All Blue Gene systems. Defined by XL C/C++ |
| Version | \_\_TOS\_BGQ\_\_ | Blue Gene/Q. Defined by XL C/C++ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Bsd">BSD Environment</a></u>**
| Type | Macro | Format | Description |
| -------------- | ------------------------------------------------------------- | ------ | ---------------------------------------------------------- |
| Identification | \_\_FreeBSD\_\_, \_\_NetBSD\_\_, \_\_OpenBSD\_\_, \_\_bsdi\_\_, \_\_DragonFly\_\_ | | |
| Version | BSD | YYYYMM | YYYY = year, MM = month. Must be included from \<sys/param.h\> |
| Version | BSD4_2, BSD4_3, BSD4_4 | | Must be included from \<sys/param.h\> |
| Identification | \_SYSTYPE_BSD | | Defined by DEC C |
**Example**
| Version | BSD | Macro |
| ------------ | ------ | ------ |
| 4.3 Net2 | 199103 | |
| 4.4 | 199306 | BSD4_4 |
| 4.4BSD-Lite2 | 199506 | |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/BSD/OS">BSD/OS</a></u>**
| Type | Macro |
| -------------- | -------- |
| Identification | \_\_bsdi\_\_ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Convex_Computer">ConvexOS</a></u>**
| Type | Macro |
| -------------- | ---------- |
| Identification | \_\_convex\_\_ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Cygwin">Cygwin Environment</a></u>**
| Type | Macro |
| -------------- | ---------- |
| Identification | \_\_CYGWIN\_\_ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Data_General">DG/UX</a></u>**
| Type | Macro |
| -------------- | -------- |
| Identification | DGUX |
| Identification | \_\_DGUX\_\_ |
| Identification | \_\_dgux\_\_ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/DragonFly_BSD">DragonFly</a></u>**
| Type | Macro |
| -------------- | ------------- |
| Identification | \_\_DragonFly\_\_ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Dynix">DYNIX/ptx</a></u>**
| Type | Macro |
| -------------- | --------- |
| Identification | _SEQUENT_ |
| Identification | sequent |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/ECos">eCos</a></u>**
| Type | Macro |
| -------------- | -------- |
| Identification | \_\_ECOS |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/EMX_%28programming_environment%29">EMX Environment</a></u>**
| Type | Macro |
| -------------- | ------- |
| Identification | \_\_EMX\_\_ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Freebsd">FreeBSD</a></u>**
| Type | Macro | Format | Description |
| -------------- | ------------------- | ------ | --------------------------------- |
| Identification | \_\_FreeBSD\_\_ | | |
| Identification | \_\_FreeBSD\_kernel\_\_ | | From FreeBSD 8.3, 9.1, and 10.0.1 |
| Version | BSD | | |
| Version | \_\_FreeBSD\_\_ | V | V = Version |
| Version | \_\_FreeBSD_version | ? | Must be included from osreldate.h |
**Example**
| FreeBSD | \_\_FreeBSD\_\_ | \_\_FreeBSD\_version |
| ----------- | ----------- | ------------------- |
| 1.x | 1 | |
| 2.0-RELEASE | 2 | 119411 |
| 2.2-RELEASE | 2 | 220000 |
| 3.0-RELEASE | 3 | 300005 |
| 4.0-RELEASE | 4 | 400017 |
| 4.5-RELEASE | 4 | 450000 |
For more information see the <u><a rel="nofollow noreferrer" class="wrap external" href="http://www.freebsd.org/doc/en_US.ISO8859-1/books/porters-handbook/freebsd-versions.html">FreeBSD porters handbook</a></u>.
**GNU aka <u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/GNU/Hurd">GNU/Hurd</a></u>**
The official name of this operating system is GNU. Hurd is the kernel in the GNU operating system. It is often listed as GNU/Hurd since there is also GNU/Linux and GNU/kFreeBSD, which are most of the GNU operating system with the Linux and FreeBSD kernels respectively.
| Type | Macro |
| -------------- | -------------- |
| Identification | \_\_GNU\_\_ |
| Identification | \_\_gnu\_hurd\_\_ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/GNU/kFreeBSD">GNU/kFreeBSD</a></u>**
GNU/kFreeBSD is one of the Debian distros that is based on the FreeBSD kernel rather than the Linux or Hurd kernels.
| Type | Macro |
| -------------- | ------------------------------- |
| Identification | \_\_FreeBSD\_kernel\_\_ && \_\_GLIBC\_\_ |
Notice that FreeBSD also defines `__FreeBSD_kernel__` so the `__GLIBC__` macro must be checked to distinguish it.
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/GNU/Linux">GNU/Linux</a></u>**
| Type | Macro |
| -------------- | ------------- |
| Identification | \_\_gnu\_linux\_\_ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/HI-UX">HI-UX MPP</a></u>**
| Type | Macro |
| -------------- | ----------- |
| Identification | \_\_hiuxmpp |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/HP-UX">HP-UX</a></u>**
| Type | Macro | Description |
| -------------- | -------- | ----------------- |
| Identification | \_hpux | Defined by HP UPC |
| Identification | hpux | |
| Identification | \_\_hpux | |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/IBM_i">IBM OS/400</a></u>**
| Type | Macro |
| -------------- | --------- |
| Identification | \_\_OS400\_\_ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Integrity_%28operating_system%29">INTEGRITY</a></u>**
| Type | Macro |
| -------------- | ------------- |
| Identification | \_\_INTEGRITY |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Interix">Interix Environment</a></u>**
| Type | Macro | Description |
| -------------- | ----------- | ---------------------------------- |
| Identification | \_\_INTERIX | Defined by GNU C and Visual Studio |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Irix">IRIX</a></u>**
| Type | Macro |
| -------------- | ------- |
| Identification | sgi |
| Identification | \_\_sgi |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Linux_kernel">Linux kernel</a></u>**
Systems based on the Linux kernel define these macros. There are two major Linux-based operating systems: <u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/GNU/Linux">GNU/Linux</a></u> and<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Android">Android</a></u>, and numerous others like <u><a rel="nofollow noreferrer" class="wrap external" href="http://www.angstrom-distribution.org/">Ångström</a></u> or <u><a rel="nofollow noreferrer" class="wrap external" href="http://www.openembedded.org/">OpenEmbedded</a></u>
| Type | Macro | Description |
| -------------- | --------- | ------------------------------ |
| Identification | \_\_linux\_\_ | |
| Identification | linux | Obsolete (not POSIX compliant) |
| Identification | \_\_linux | Obsolete (not POSIX compliant) |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/LynxOS">LynxOS</a></u>**
| Type | Macro |
| -------------- | -------- |
| Identification | \_\_Lynx\_\_ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Mac_OS">MacOS</a></u>**
| Type | Macro | Description |
| -------------- | --------------------- | -------------------------------------- |
| Identification | macintosh | Mac OS 9 |
| Identification | Macintosh | Mac OS 9 |
| Identification | \_\_APPLE\_\_ && \_\_MACH\_\_ | Mac OS X. Defined by GNU C and Intel C++ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/OS-9">Microware OS-9</a></u>**
| Type | Macro | Description |
| -------------- | ---------- | ------------------------- |
| Identification | \_\_OS9000 | Defined by Ultimate C/C++ |
| Identification | \_OSK | Defined by Ultimate C/C++ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Minix">MINIX</a></u>**
| Type | Macro |
| -------------- | --------- |
| Identification | \_\_minix |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Morphos">MorphOS</a></u>**
| Type | Macro |
| -------------- | ----------- |
| Identification | \_\_MORPHOS\_\_ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/MPE">MPE/iX</a></u>**
| Type | Macro |
| -------------- | --------- |
| Identification | mpeix |
| Identification | \_\_mpexl |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/MS-DOS">MSDOS</a></u>**
| Type | Macro |
| -------------- | --------- |
| Identification | MSDOS         |
| Identification | \_\_MSDOS\_\_ |
| Identification | \_MSDOS       |
| Identification | \_\_DOS\_\_   |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Netbsd">NetBSD</a></u>**
| Type | Macro | Format | Description |
| -------------- | ------------------ | ---------- | -------------------------------------------------------------------------------------------------------------------------- |
| Identification | \_\_NetBSD\_\_         |            |                                                                                                                              |
| Version        | BSD                    |            |                                                                                                                              |
| Version        | NetBSD'V'\_'R'         |            | V = Version, R = Revision. Must be included from \<sys/param.h\>                                                             |
| Version        | \_\_NetBSD_Version\_\_ | VVRRAAPP00 | VV = Version, RR = Revision, AA = Release, PP = Patch. From NetBSD 1.2D (?) until NetBSD 2.0H. Must be included from \<sys/param.h\> |
| Version        | \_\_NetBSD_Version\_\_ | VVRR00PP00 | VV = Version, RR = Revision, PP = Patch. From NetBSD 2.99.9. Must be included from \<sys/param.h\>                           |
**Example**
| NetBSD | \_\_NetBSD_Version\_\_ | Macro |
| ------ | ------------------ | ------------- |
| 0.8 | | NetBSD0_8 |
| 0.9 | | NetBSD0_9 |
| 1.0 | | NetBSD1_0 = 1 |
| 1.0A | | NetBSD1_0 = 2 |
| 1.2D | 102040000 | |
| 1.2.1 | 102000100 | |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/NonStop">NonStop</a></u>**
| Type | Macro |
| -------------- | ---------- |
| Identification | \_\_TANDEM |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Nucleus_RTOS">Nucleus RTOS</a></u>**
| Type | Macro |
| -------------- | ----------- |
| Identification | \_\_nucleus\_\_ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Openbsd">OpenBSD</a></u>**
| Type | Macro | Format | Description |
| -------------- | --------------- | ------ | -------------------------------------------------------- |
| Identification | \_\_OpenBSD\_\_ |        |                                                                  |
| Version        | BSD             |        |                                                                  |
| Version        | OpenBSD'V'\_'R' |        | V = Version, R = Revision. Must be included from \<sys/param.h\> |
**Example**
| OpenBSD | Macro |
| ------- | ---------- |
| 3.1 | OpenBSD3_1 |
| 3.9 | OpenBSD3_9 |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/OS/2">OS/2</a></u>**
| Type | Macro |
| -------------- | ----------- |
| Identification | OS2 |
| Identification | \_OS2 |
| Identification | \_\_OS2\_\_     |
| Identification | \_\_TOS_OS2\_\_ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Palmos">Palm OS</a></u>**
| Type | Macro | Description |
| -------------- | ---------- | ----------------------------- |
| Identification | \_\_palmos\_\_ | Defined by GNU C in PRC-Tools |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs">Plan 9</a></u>**
| Type | Macro |
| -------------- | ------ |
| Identification | EPLAN9 |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/DC/OSx">Pyramid DC/OSx</a></u>**
| Type | Macro |
| -------------- | ----- |
| Identification | pyr |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/QNX">QNX</a></u>**
| Type | Macro | Format | Description |
| -------------- | --------------------- | ---------- | ------------------------------------------------------------------------------------------------------------------------ |
| Identification | \_\_QNX\_\_           |            | QNX 4.x                                                                                                                  |
| Identification | \_\_QNXNTO\_\_        |            | QNX 6.x                                                                                                                  |
| Version        | \_NTO_VERSION         | VRR        | V = Version, RR = Revision. Only available when \_\_QNXNTO\_\_ is defined. Must be included from \<sys/neutrino.h\>      |
| Version        | BBNDK_VERSION_CURRENT | VVRRRRPPPP | VV = Version, RRRR = Revision, PPPP = Patch. Only available on BlackBerry 10, from BlackBerry 10.1.0. Must be included from \<bbndk.h\> |
**Example**
| QNX | \_NTO_VERSION |
| --- | ------------- |
| 6.2 | 620 |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Reliant_UNIX">Reliant UNIX</a></u>**
| Type | Macro |
| -------------- | ----- |
| Identification | sinux |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/SCO_OpenServer">SCO OpenServer</a></u>**
| Type | Macro | Description |
| -------------- | -------- | ---------------- |
| Identification | M_I386 | Defined by GNU C |
| Identification | M_XENIX | Defined by GNU C |
| Identification | \_SCO_DS | |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Solaris_Operating_Environment">Solaris</a></u>**
| Type | Macro | Description |
| -------------- | --------------------- | ----------------------------------------------------------------------------------------------------------- |
| Identification | sun | |
| Identification | \_\_sun | |
| Version        | \_\_'System'\_'Version' | System = `uname -s`, Version = `uname -r`. Any illegal character is replaced by an underscore. Defined by Sun Studio |
Use the SVR4 macros to distinguish between Solaris and SunOS.
```c
#if defined(sun) || defined(__sun)
# if defined(__SVR4) || defined(__svr4__)
    /* Solaris */
# else
    /* SunOS */
# endif
#endif
```
**Example**
| Solaris | Macro |
| ------- | ------------- |
| 2.7 | \_\_SunOS_5_7 |
| 8 | \_\_SunOS_5_8 |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Stratus_VOS">Stratus VOS</a></u>**
| Type | Macro | Format | Description |
| -------------- | ------- | ------ | ----------- |
| Identification | \_\_VOS\_\_ |        |             |
| Version        | \_\_VOS\_\_ | V      | V = Version |
Notice that the `__VOS__` macro is defined by the compiler, but as several compilers can co-exist in the same OS release, the version number is not reliable.
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/UNIX_System_V">SVR4 Environment</a></u>**
| Type | Macro | Description |
| -------------- | -------------- | --------------- |
| Identification | \_\_sysv\_\_ |                 |
| Identification | \_\_SVR4     |                 |
| Identification | \_\_svr4\_\_ |                 |
| Identification | \_SYSTYPE_SVR4 | Defined on IRIX |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Syllable_Desktop">Syllable</a></u>**
| Type | Macro |
| -------------- | ------------ |
| Identification | \_\_SYLLABLE\_\_ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Symbian_OS">Symbian OS</a></u>**
| Type | Macro |
| -------------- | ------------- |
| Identification | \_\_SYMBIAN32\_\_ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Digital_UNIX">Tru64 (OSF/1)</a></u>**
| Type | Macro |
| -------------- | ------- |
| Identification | \_\_osf\_\_ |
| Identification | \_\_osf |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Ultrix">Ultrix</a></u>**
| Type | Macro |
| -------------- | ---------- |
| Identification | ultrix |
| Identification | \_\_ultrix |
| Identification | \_\_ultrix\_\_ |
| Identification | unix & vax |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/UNICOS">UNICOS</a></u>**
| Type | Macro | Format | Description |
| -------------- | -------- | ------ | ----------- |
| Identification | \_UNICOS | | |
| Version | \_UNICOS | V | V = Version |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Unicos">UNICOS/mp</a></u>**
| Type | Macro | Description |
| -------------- | ---------------- | ----------- |
| Identification | \_CRAY       |             |
| Identification | \_\_crayx1   |             |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Unix">UNIX Environment</a></u>**
| Type | Macro |
| -------------- | -------- |
| Identification | \_\_unix\_\_ |
| Identification | \_\_unix |
Notice that not all compilers define these macros (e.g. xlC or the DEC C/C++ compiler), so it may be better to use the POSIX or X/Open standard macros instead.
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/UnixWare">UnixWare</a></u>**
| Type | Macro |
| -------------- | ----------- |
| Identification | sco |
| Identification | \_UNIXWARE7 |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/UWIN">U/Win Environment</a></u>**
| Type | Macro |
| -------------- | ------ |
| Identification | \_UWIN |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Vms">VMS</a></u>**
| Type | Macro | Format | Description |
| -------------- | ----------- | --------- | ------------------------------------------------------------------------------------------------ |
| Identification | VMS | | |
| Identification | \_\_VMS | | |
| Version        | \_\_VMS_VER | VVRREPPTT | VV = Version, RR = Revision, E = Edit number, PP = Patch (01 = A, ... 26 = Z), TT = Type (22 = official) |
**Example**
| VMS | \_\_VMS_VER |
| ------ | ----------- |
| 6.1 | 60100022 |
| 6.2 | 60200022 |
| 6.2-1I | 60210922 |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/VxWorks">VxWorks</a></u>**
| Type | Macro | Description | |
| -------------- | ------------------- | ------------------------------------------------ | --- |
| Identification | \_\_VXWORKS\_\_     | Defined by GNU C and Diab (from ?)                      |     |
| Identification | \_\_vxworks         | Defined by GNU C and Diab (from ?)                      |     |
| Version        | \_WRS_VXWORKS_MAJOR | Version. Must be included from \<version.h\>            |     |
| Version        | \_WRS_VXWORKS_MINOR | Revision. Must be included from \<version.h\>           |     |
| Version        | \_WRS_VXWORKS_MAINT | Patch/maintenance. Must be included from \<version.h\>  |     |
| Mode           | \_\_RTP\_\_         | For real-time mode                                      |     |
| Mode           | \_WRS_KERNEL        | For kernel mode                                         |     |
**Example**
| VxWorks | \_WRS_VXWORKS_MAJOR | \_WRS_VXWORKS_MINOR | \_WRS_VXWORKS_MAINT |
| ------- | ------------------- | ------------------- | ------------------- |
| 6.2 | 6 | 2 | 0 |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Category:Microsoft_Windows">Windows</a></u>**
| Type | Macro | Description |
| -------------- | ----------- | ------------------------------------------------- |
| Identification | \_WIN16 | Defined for 16-bit environments 1 |
| Identification | \_WIN32 | Defined for both 32-bit and 64-bit environments 1 |
| Identification | \_WIN64 | Defined for 64-bit environments 1 |
| Identification | \_\_WIN32\_\_   | Defined by Borland C++  |
| Identification | \_\_TOS_WIN\_\_ | Defined by xlC          |
| Identification | \_\_WINDOWS\_\_ | Defined by Watcom C/C++ |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Windows_CE">Windows CE</a></u>**
| Type | Macro | Format | Description |
| -------------- | ------------------ | ------ | ------------------------------------- |
| Identification | \_WIN32_WCE | | Defined by Embedded Visual Studio C++ |
| Version        | \_WIN32_WCE        | VRR    | V = Version, RR = Revision            |
| Identification | WIN32_PLATFORM_'P' |        | P = Platform                          |
| Version        | WIN32_PLATFORM_'P' | V      | P = Platform, V = Version             |
**Example**
| Version | \_WIN32_WCE |
| ------- | ----------- |
| 2.01 | 201 |
| 2.11 | 211 |
| 3.0 | 300 |
| 4.0 | 400 |
| 4.1 | 410 |
| 4.2 | 420 |
| 5.0 | 501 |
| Platform | Macro | Value |
| ------------------- | ---------------------- | ----- |
| H/PC 2000 | WIN32_PLATFORM_HPC2000 | |
| H/PC Pro 2.11 | WIN32_PLATFORM_HPCPRO | 211 |
| H/PC Pro 3.0 | WIN32_PLATFORM_HPCPRO | 300 |
| Pocket PC | WIN32_PLATFORM_PSPC | 1 |
| Pocket PC 2002 | WIN32_PLATFORM_PSPC | 310 |
| Windows Mobile 2003 | WIN32_PLATFORM_PSPC | 400 |
| Smartphone 2002 | WIN32_PLATFORM_WFSP | 100 |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Bristol_Technology_Inc.">Wind/U Environment</a></u>**
| Type | Macro | Format | Description |
| -------------- | -------------- | -------- | ----------------------------------- |
| Identification | \_WINDU_SOURCE | | |
| Version        | \_WINDU_SOURCE | 0xVVRRPP | VV = Version, RR = Revision, PP = Patch |
**Example**
| Wind/U | \_WINDU_SOURCE |
| ------ | -------------- |
| 3.1.2 | 0x030102 |
**<u><a rel="nofollow noreferrer" class="wrap external" href="http://en.wikipedia.org/wiki/Z/OS">z/OS</a></u>**
| Type | Macro | Description |
| -------------- | ----------- | ----------- |
| Identification | \_\_MVS\_\_     | Host        |
| Identification | \_\_HOS_MVS\_\_ | Host        |
| Identification | \_\_TOS_MVS\_\_ | Target      |

---
title: "Use difference equation method and matrix method to find Fibonacci sequence general formula"
time: "2023-11-20"
tags: ["mathematics"]
pin: true
summary: "This article gives two methods to derive Fibonacci sequence: matrix method and difference equation method"
---
In Fibonacci's work _The Book of Calculation_, the Fibonacci sequence is defined as follows:
$$
F_n = \begin{cases}
0 & \text{if } n = 0 \\
1 & \text{if } n = 1 \\
F_{n-1} + F_{n-2} & \text{if } n \geq 2
\end{cases}
$$
It can be proven that its closed-form formula is:
$$
F_n = \frac{1}{\sqrt{5}}\left(\left(\frac{1+\sqrt{5}}{2}\right)^n - \left(\frac{1-\sqrt{5}}{2}\right)^n\right)
$$
With my current knowledge, I can only comprehend two proof methods as follows:
## Matrix Method
Let's first discuss the case when $n \geq 2$. Our goal now is to transform the recurrence formula of the Fibonacci sequence into matrix form. How can we do that? We can approach it from the perspective of a system of linear equations.
First, here is what we know:
$$
F_{n-1} + F_{n-2} = F_{n}
$$
We can add the equation $F_{n-1} + 0 \cdot F_{n-2} = F_{n-1}$ to form a system of linear equations:
$$
\begin{cases}
F_{n-1} + F_{n-2} = F_{n} \\
F_{n-1} + 0 \cdot F_{n-2} = F_{n-1}
\end{cases}
$$
This can be transformed into matrix form as follows:
$$
\begin{bmatrix}
1 & 1 \\
1 & 0
\end{bmatrix}
\begin{bmatrix}
F_{n-1} \\
F_{n-2}
\end{bmatrix}
=
\begin{bmatrix}
F_{n} \\
F_{n-1}
\end{bmatrix}
$$
Now, we can iterate this process:
$$
\begin{align}
\begin{bmatrix}
F_{n} \\
F_{n-1}
\end{bmatrix}
&=
\begin{bmatrix}
1 & 1 \\
1 & 0
\end{bmatrix}
\begin{bmatrix}
F_{n-1} \\
F_{n-2}
\end{bmatrix} \\
&=
\begin{bmatrix}
1 & 1 \\
1 & 0
\end{bmatrix}^2
\begin{bmatrix}
F_{n-2} \\
F_{n-3}
\end{bmatrix} \\
&=\cdots \\
&=
\begin{bmatrix}
1 & 1 \\
1 & 0
\end{bmatrix}^{n-1}
\begin{bmatrix}
F_{1} \\
F_{0}
\end{bmatrix}
\end{align}
$$
We denote the matrix $\boldsymbol{A} = \begin{bmatrix}
1 & 1 \\
1 & 0
\end{bmatrix}$. So, the problem becomes finding $\boldsymbol{A}^{n-1}$, and then we can calculate $\boldsymbol{A}^{n}$ and replace all instances of $n$ with $n-1$.
Notice that matrix $\boldsymbol{A}$ is a square matrix, and we can utilize the eigenvalues and eigenvectors of matrices.
Eigenvectors can be understood as vectors that, when right-multiplied by the matrix, result in a vector parallel to themselves. Eigenvalues are the scaling factors by which the eigenvectors are scaled when right-multiplied by the matrix. In other words:
$$
\boldsymbol{A}\boldsymbol{x} = \lambda\boldsymbol{x}\tag{1}
$$
Here, $\lambda$ is the eigenvalue, and the non-zero vector $\boldsymbol{x} \in \mathbb{R}^n$ (where $n$ is the order of the square matrix) is the eigenvector corresponding to the eigenvalue $\lambda$ of matrix $\boldsymbol{A}$.
So, the specific approach is to first find the eigenvalues $\lambda_1$ and $\lambda_2$ (usually, an $n$-order matrix has $n$ eigenvalues) and obtain the diagonal matrix $\rm{diag}\{\lambda_1, \lambda_2\}$. Then, we find an invertible matrix $\boldsymbol{P}$ such that:
$$
\boldsymbol{P}^{-1}\boldsymbol{A}\boldsymbol{P} = \rm{diag}\{\lambda_1, \lambda_2\}
$$
Using matrix multiplication properties:
$$
(\boldsymbol{P}^{-1}\boldsymbol{A}\boldsymbol{P})^n = \boldsymbol{P}^{-1}\boldsymbol{A}(\boldsymbol{P}\boldsymbol{P}^{-1})\boldsymbol{A}(\boldsymbol{P}\cdots\boldsymbol{P}^{-1})\boldsymbol{A}\boldsymbol{P} = \boldsymbol{P}^{-1}\boldsymbol{A}^n\boldsymbol{P}\tag{2}
$$
This allows us to calculate $\boldsymbol{A}^{n}$.
Let's first find the eigenvalues. We can rewrite equation $(1)$ as:
$$
(\boldsymbol{A}-\lambda\boldsymbol{E})\boldsymbol{x} = \boldsymbol{0}\tag{3}
$$
Here, $\boldsymbol{E}$ is the identity matrix, and we can compute that $\boldsymbol{A}-\lambda\boldsymbol{E} = \begin{bmatrix}
1-\lambda & 1 \\
1 & -\lambda
\end{bmatrix}$. To ensure that there is a non-zero solution, we solve for:
$$
\left| \boldsymbol{A}-\lambda\boldsymbol{E} \right| = \begin{vmatrix}
1-\lambda & 1 \\
1 & -\lambda
\end{vmatrix} = \lambda^2 - \lambda - 1 = 0
$$
Solving this equation, we obtain $\lambda_1 = \frac{1+\sqrt{5}}{2}$ and $\lambda_2 = \frac{1-\sqrt{5}}{2}$.
Therefore, the diagonal matrix is:
$$
\rm{diag}\{\lambda_1, \lambda_2\} = \begin{bmatrix}
\frac{1+\sqrt{5}}{2} & 0 \\
0 & \frac{1-\sqrt{5}}{2}
\end{bmatrix}
$$
Assuming the eigenvector is $\boldsymbol{x} = \begin{bmatrix}
x & y
\end{bmatrix}^T$, we can substitute $\lambda_1$ and $\lambda_2$ into equation $(3)$ to obtain two systems of equations:
$$
\begin{cases}
(1-\lambda_1)x + y = 0 \\
x - \lambda_1y = 0
\end{cases},
\begin{cases}
(1-\lambda_2)x + y = 0 \\
x - \lambda_2y = 0
\end{cases}
$$
Setting $y = 1$ in both systems of equations gives us the two eigenvectors of the matrix:
$$
\boldsymbol{x}_1 = \begin{bmatrix}
\frac{1+\sqrt{5}}{2} & 1
\end{bmatrix}^T,
\boldsymbol{x}_2 = \begin{bmatrix}
\frac{1-\sqrt{5}}{2} & 1
\end{bmatrix}^T
$$
So, the invertible matrix $\boldsymbol{P}$ is formed by these two eigenvectors:
$$
\boldsymbol{P} = \begin{bmatrix}
\frac{1+\sqrt{5}}{2} & \frac{1-\sqrt{5}}{2} \\
1 & 1
\end{bmatrix}
$$
Why is it like this? Let $\boldsymbol{P} = \begin{bmatrix}
x_1 & x_2 \\
y_1 & y_2
\end{bmatrix}$ (where $x_1, y_1, x_2, y_2$ are the components of eigenvectors $\boldsymbol{x}_1, \boldsymbol{x}_2$ respectively).
Now, if we calculate $\boldsymbol{P}\rm{diag}\{\lambda_1, \lambda_2\}$, it exactly equals $\begin{bmatrix}
\lambda_1x_1 & \lambda_2x_2 \\
\lambda_1y_1 & \lambda_2y_2
\end{bmatrix}$. This means that $\boldsymbol{A}\boldsymbol{P} = \boldsymbol{P}\rm{diag}\{\lambda_1, \lambda_2\}$ is inevitable. Left-multiplying by $\boldsymbol{P}^{-1}$, we get $\boldsymbol{P}^{-1}\boldsymbol{A}\boldsymbol{P} = \rm{diag}\{\lambda_1, \lambda_2\}$. So, $\boldsymbol{P} = \begin{bmatrix}
x_1 & x_2 \\
y_1 & y_2
\end{bmatrix}$ is reasonable.
Its inverse matrix is easy to calculate:
$$
\boldsymbol{P}^{-1} = \frac{\boldsymbol{P}^*}{\left|\boldsymbol{P}\right|} = \frac{1}{\sqrt{5}}\begin{bmatrix}
1 & \frac{1-\sqrt{5}}{2} \\
1 & \frac{1+\sqrt{5}}{2}
\end{bmatrix}
$$
Substituting into equation $(2)$:
$$
\boldsymbol{A}^n = \boldsymbol{P}(\boldsymbol{P}^{-1}\boldsymbol{A}\boldsymbol{P})^n\boldsymbol{P}^{-1} = \boldsymbol{P}\rm{diag}^n\{\lambda_1, \lambda_2\}\boldsymbol{P}^{-1}
$$
Where the diagonal matrix is:
$$
\rm{diag}^n\{\lambda_1, \lambda_2\} = \begin{bmatrix}
\left(\frac{1+\sqrt{5}}{2}\right)^n & 0 \\
0 & \left(\frac{1-\sqrt{5}}{2}\right)^n
\end{bmatrix}
$$
Therefore,
$$
\boldsymbol{A}^n = \frac{1}{\sqrt{5}}\begin{bmatrix}
\left(\frac{1+\sqrt{5}}{2}\right)^{n+1} + \left(\frac{1-\sqrt{5}}{2}\right)^{n+1} & \left(\frac{1+\sqrt{5}}{2}\right)^{n+1}\left(\frac{1-\sqrt{5}}{2}\right) + \left(\frac{1-\sqrt{5}}{2}\right)^{n+1}\left(\frac{1+\sqrt{5}}{2}\right) \\
\left(\frac{1+\sqrt{5}}{2}\right)^n + \left(\frac{1-\sqrt{5}}{2}\right)^n & \left(\frac{1+\sqrt{5}}{2}\right)^{n}\left(\frac{1-\sqrt{5}}{2}\right) + \left(\frac{1-\sqrt{5}}{2}\right)^{n}\left(\frac{1+\sqrt{5}}{2}\right)
\end{bmatrix}
$$
Then,
$$
\begin{bmatrix}
F_{n} \\
F_{n-1}
\end{bmatrix}
= \boldsymbol{A}^{n-1}
\begin{bmatrix}
F_1 \\
F_0
\end{bmatrix}
= \frac{1}{\sqrt{5}}\begin{bmatrix}
\left(\frac{1+\sqrt{5}}{2}\right)^{n} + \left(\frac{1-\sqrt{5}}{2}\right)^{n} \\
\left(\frac{1+\sqrt{5}}{2}\right)^{n-1} + \left(\frac{1-\sqrt{5}}{2}\right)^{n-1}
\end{bmatrix}
$$
Considering only the first row of the matrices on both sides of the equation, we obtain the closed-form formula for the Fibonacci sequence:
$$
F_n = \frac{1}{\sqrt{5}}\left(\left(\frac{1+\sqrt{5}}{2}\right)^n - \left(\frac{1-\sqrt{5}}{2}\right)^n\right)
$$
Substituting $n=0$ and $n=1$ to verify, we find that they both satisfy the equation.
---
## Difference Equation Method
Defining the difference of a sequence $\{a_n\}$ as $\Delta a_n = a_{n+1} - a_n$ (the forward difference), we can define the second-order difference as:
$$
\Delta^2 a_n = \Delta a_{n+1} - \Delta a_n = a_{n+2} - 2a_{n+1} + a_n
$$
Further, we can define the $m$-th order difference:
$$
\Delta^m a_n = \Delta^{m-1} a_{n+1} - \Delta^{m-1} a_n = \sum_{i=0}^{m} (-1)^i C_m^i a_{n+m-i}
$$
In general, a difference equation is a relation between a sequence and its differences:
$$
F(n, a_n, \Delta a_n, \Delta^2 a_n, \Delta^3 a_n, \ldots) = 0
$$
The Fibonacci sequence, defined by $F_{n+2} = F_{n+1} + F_{n}$, can be written as a difference equation. Since $\Delta F_n = F_{n+1} - F_n$ and $\Delta^2 F_n = F_{n+2} - 2F_{n+1} + F_n$, substituting the recurrence gives:
$$
\Delta^2 F_n = (F_{n+1} + F_n) - 2F_{n+1} + F_n = 2F_n - F_{n+1} = F_n - \Delta F_n
$$
So $F_n$ satisfies the second-order linear homogeneous difference equation:
$$
\Delta^2 F_n + \Delta F_n - F_n = 0
$$
which, expanded back into shift form, is exactly the recurrence:
$$
F_{n+2} - F_{n+1} - F_n = 0
$$
To solve it, assume a solution of the form $F_n = r^n$ with $r \neq 0$. Substituting yields the characteristic equation:
$$
r^2 - r - 1 = 0
$$
Its roots are $r_1 = \frac{1+\sqrt{5}}{2}$ and $r_2 = \frac{1-\sqrt{5}}{2}$ (the same values as the eigenvalues found in the matrix method). Since the roots are distinct, the general solution is:
$$
F_n = c_1 r_1^n + c_2 r_2^n
$$
Now we need the initial conditions to determine the constants. We know that $F_0 = 0$ and $F_1 = 1$. Substituting these into the general solution:
$$
\begin{align*}
F_0 &= c_1 + c_2 = 0 \\
F_1 &= c_1 r_1 + c_2 r_2 = 1
\end{align*}
$$
The first equation gives $c_2 = -c_1$; substituting into the second gives $c_1(r_1 - r_2) = c_1\sqrt{5} = 1$, so $c_1 = \frac{1}{\sqrt{5}}$ and $c_2 = -\frac{1}{\sqrt{5}}$. Therefore:
$$
F_n = \frac{1}{\sqrt{5}}\left(\left(\frac{1+\sqrt{5}}{2}\right)^n - \left(\frac{1-\sqrt{5}}{2}\right)^n\right)
$$
This is indeed the closed-form formula for the Fibonacci sequence.
So, we have successfully derived the same result using the difference equation method.
In summary, both the matrix method and the difference equation method lead to the same closed-form expression for the Fibonacci sequence, demonstrating the beauty of mathematics in providing multiple ways to arrive at a solution.

---
title: "Josephus Circle Problem"
time: "2023-11-21"
tags: ["algorithm"]
summary: "A description, analysis and solution for the Josephus circle problem."
---
## Problem Description
The Josephus problem is a classic mathematical problem that describes the following scenario:
There are $n$ people standing in a circle. Starting from a certain person, they count off in sequence, and every $m$-th person steps out of the circle. The counting and elimination process continues, looping back to the next person when the end of the circle is reached. This continues until only one person remains. The problem is to determine the initial position of the last person remaining in the circle.
For example, when $n=7$ and $m=3$, the order of elimination is: $3, 6, 2, 7, 5, 1$. The last person remaining is at position $4$.
## Solution
The Josephus problem can be solved by recursion or by iterating the same recurrence. Number the positions from $0$ and let $f(n, m)$ denote the survivor's position for $n$ people. After the first elimination (the person at index $m-1$), the remaining $n-1$ people form a smaller Josephus circle whose survivor position is shifted by $m$ and wrapped around the circle:

$$
f(n, m) = (f(n-1, m) + m) \bmod n
$$

The base case is $f(1, m) = 0$: with only one person, position $0$ survives. Unrolling this recurrence bottom-up gives an iterative $O(n)$ solution based on the same formula.
## Programming Implementation
When it comes to solving the Josephus problem, dynamic programming is an effective approach. Here is an example C code implementing dynamic programming to solve the Josephus problem:
```c
#include <stdio.h>
int josephus(int n, int m) {
int dp[n + 1];
dp[1] = 0; // Initial position when there is only one person
for (int i = 2; i <= n; i++) {
dp[i] = (dp[i - 1] + m) % i;
}
return dp[n];
}
int main() {
int n = 7; // Total number of people
int m = 3; // Count off to m for elimination
int lastPerson = josephus(n, m);
printf("The last person remaining is: %d\n", lastPerson + 1); // Adding 1 due to 0-based indexing
return 0;
}
```
In the above code, the dynamic programming array `dp` stores the survivor's position for each circle size: `dp[i]` is the answer for a circle of `i` people. Iterating the recurrence from `i = 2` up to `n` yields the initial position of the last person remaining. In the `main` function, we set the total number of people `n` and the count `m` for elimination, call `josephus`, and print the result.
Note that positions in the problem statement start from 1, while the computation is 0-based; therefore, we add 1 when printing the position of the last person remaining.
This iterative approach solves the Josephus problem in $O(n)$ time, and it avoids the deep call stack of a recursive solution; since only the previous value is ever needed, the space could even be reduced to $O(1)$.

---
title: "Rabin-Karp Algorithm"
time: "2023-11-22"
tags: ["algorithm"]
summary: "It is designed to address the multiple pattern string matching problem."
---
The Rabin-Karp algorithm, also known as the Karp-Rabin algorithm, was introduced by _Richard M. Karp_ and _Michael O. Rabin_ in 1987. It is designed to address the multiple pattern string matching problem.
Its implementation is somewhat unconventional. It begins by computing the hash values of two strings and then determines whether there is a match by comparing these hash values.
## Algorithm Analysis and Implementation
Choosing an appropriate hash function is crucial. Assuming the text string is $t[0, n)$, and the pattern string is $p[0, m)$, where $0<m<n$, let $Hash(t[i,j])$ represent the hash value of the substring $t[i, j]$.
When $Hash(t[0, m-1]) \neq Hash(p[0,m-1])$, it is natural to compare $Hash(t[1, m])$ next. If we recalculated the hash value of the substring $t[1, m]$ from scratch, it would take $O(m)$ time, which is not cost-effective. Observing that the substrings $t[0, m-1]$ and $t[1, m]$ share $m-1$ overlapping characters, we can use a rolling hash function instead, which reduces the cost of each update to $O(1)$.
The rolling hash function used in the Rabin-Karp algorithm primarily leverages the concept of [Rabin fingerprint](https://en.wikipedia.org/wiki/Rabin_fingerprint). For example, the formula to calculate the hash value of the substring $t[0, m-1]$ is as follows:
$$
Hash(t[0, m-1])=t[0]*b^{m-1}+t[1]*b^{m-2}+...+t[m-1]*b^0
$$
where $t[i]$ denotes the ASCII code of the character at position $i$.
Here, $b$ is a constant. In Rabin-Karp, it is generally set to 256 because the maximum value of a character does not exceed 255. The formula above has an issue - hash values could overflow. To address this, we take the modulus, and the value should be as large as possible and preferably a prime number. Here, we take 101.
The formula to calculate the hash value of the substring $t[1, m]$ is then:
$$
Hash(t[1,m])=(Hash(t[0,m-1])-t[0]*b^{m-1})*b+t[m]*b^0
$$
Compare this with the previous formula: the leading character's contribution is removed, the remainder is shifted by one power of $b$, and the new character is appended.
The complete code is as follows:
```cpp
#include <iostream>
#include <string.h>
using namespace std;
#define BASE 256
#define MODULUS 101
void RabinKarp(char t[], char p[])
{
int t_len = strlen(t);
int p_len = strlen(p);
// For rolling hash
int h = 1;
for (int i = 0; i < p_len - 1; i++)
h = (h * BASE) % MODULUS;
int t_hash = 0;
int p_hash = 0;
for (int i = 0; i < p_len; i++)
{
t_hash = (BASE * t_hash + t[i]) % MODULUS;
p_hash = (BASE * p_hash + p[i]) % MODULUS;
}
int i = 0;
while (i <= t_len - p_len)
{
// Considering the possibility of hash collisions, we use memcmp for additional verification
if (t_hash == p_hash && memcmp(p, t + i, p_len) == 0)
cout << p << " is found at index " << i << endl;
// Rolling hash
t_hash = (BASE * (t_hash - t[i] * h) + t[i + p_len]) % MODULUS;
// Avoiding negative values
if (t_hash < 0)
t_hash = t_hash + MODULUS;
i++;
}
}
int main()
{
char t[100] = "It is a test, but not just a test";
char p[10] = "test";
RabinKarp(t, p);
return 0;
}
```
The output is as follows:
```text
test is found at index 8
test is found at index 29
```
## Complexity Analysis
Let's examine the space complexity first, which is easily determined: $S(n)=O(1)$.
Now, consider the time complexity. Let the length of the text string be n and the pattern string be m. Preprocessing requires $O(m)$, and during matching, in the best case where there are no hash collisions, $T_{best}(n)=O(n-m)$. In the worst case, where there is a collision every time, $T_{worst}(n)=O((n-m)*m)$. In practical scenarios, n is often much larger than m, so the final complexity table is:
| $S(n)$         | $O(1)$  |
| -------------- | ------- |
| $T_{best}(n)$ | $O(n)$ |
| $T_{worst}(n)$ | $O(mn)$ |
## Application Analysis
The primary application of the Rabin-Karp algorithm is in plagiarism detection for articles, such as the detection system used by [Semantic Scholar](https://www.semanticscholar.org/).
However, from the complexity data above, the Rabin-Karp algorithm does not seem to have a significant advantage. Is it practical for detecting text plagiarism? Feedback from actual usage results indicates that the time complexity for plagiarism detection is only $O(n)$. I believe this is mainly due to the following two points:
1. In real-life articles, the text data does not often exhibit as many hash collisions as we might imagine.
2. The original content in a submitted article is likely to be much larger than the plagiarized content. In other words, successful matches do not occur as frequently as we might imagine.
## References
- [Rabin-Karp algorithm](https://en.wikipedia.org/wiki/Rabin%E2%80%93Karp_algorithm)
- [Searching for Patterns | Set 3 (Rabin-Karp Algorithm)](https://www.geeksforgeeks.org/searching-for-patterns-set-3-rabin-karp-algorithm/)
- [Computer Algorithms: Rabin-Karp String Searching](http://www.stoimen.com/blog/2012/04/02/computer-algorithms-rabin-karp-string-searching/)

---
title: "The Liskov Substitution Principle"
subtitle: ""
summary: "A detailed explanation of the Liskov Substitution Principle: what it is, how to use it, and why it benefits the architecture of our code."
coverURL: null
time: "2023-11-28"
tags: ["project-practice"]
noPrompt: false
pin: false
---
## Introduction
Of all the SOLID principles, the [Liskov substitution principle](https://en.wikipedia.org/wiki/Liskov_substitution_principle) is the one whose violation only becomes apparent much later in the project. With a violation of any of the other SOLID principles, we can see the problems almost immediately. A violation of the LSP will create problems in the future and is actually a late violation of the [Open Closed Principle](https://giannisakritidis.com/blog/The-Open-Closed-Principle/).
Barbara Liskov in 1988 wrote:
> What is wanted here is something like the following substitution property: If for each object O1 of type S there is an object O2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when O1 is substituted for O2, then S is a subtype of T.
The best way to understand the above is to see examples that violate the LSP and the consequences that this violation will have. But before that, let's see what a type is, what a subtype is, how a subtype is connected to its base type, and how these are represented in OOP, and more specifically in C#.
## Types and Subtypes
Let's say that I have two numbers: `3` and `2`.
If I say that these numbers are of type `int`, then we know that the operation `3/2` will result in `1` and the operation `3+2` will have a result of `5`.
If I say that these numbers are of type `float` then the operation `3/2` will have a result of `1.5` and if I say that they are of type `string` then the operation `3+2` will give a result of `32`.
From the above, we can understand that a type, for an outside observer, is actually defined by its behaviours. The data it holds is irrelevant to its definition.
The LSP describes when a type `S` is actually a subtype of `T`, but it is much easier to find out when `S` is *not* a subtype of `T` by checking whether the LSP is violated.
## IS A and HAS A Relationships
In OOP and in C# in particular, we create types and their subtypes with the mechanism of inheritance. Usually we describe inheritance like an `IS A` relationship and composition like a `HAS A` relationship.
This is not always correct. Although an `IS A` relationship can often be represented with inheritance, there are times when doing so will violate the LSP. An `IS A` relationship does not always mean that we should create a base and a child class.
Before we get to the code examples, let's see why this is true with a real world example.
When we create a class, for example an airplane class, is this class actually an airplane? The answer is of course no. The airplane class is a piece of code that represents an airplane.
The problem with real world objects and their representations in code is that the relationships between two objects don't always transfer as relationships between their representatives.
If I am angry with my neighbor because I think he is making too much noise, and he is angry with me because he says that I complain without any real reason, and we hire lawyers, then our lawyers represent us: their behaviour in court will reflect our opinions and beliefs, but our lawyers are not angry at each other. The angry relationship between me and my neighbor does not transfer as a relationship between our representatives.
The same is true for any code we write. For example, a rectangle is a two-dimensional shape in the real world and for that reason has width and height. The same is true for a square: as a two-dimensional object it has width and height. There is also a relationship between those two shapes: a square `IS A` rectangle whose height equals its width.
When we create their representations in code, the classes square and rectangle, we might be tempted to make the square class a child of the Rectangle class. After all, in the real world the `IS A` relationship is true, but this will create problems because it is an LSP violation.
In code, we can represent a square with only one piece of data: a variable called side. This can happen because the square as a type has a condition that is always true, regardless of its state: its height equals its width. This condition is called an invariant, and it allows us to calculate both the width and the height of the square if we know its side. Although a real-world square has width and height, its representation in code doesn't need to.
If we make the square class a child of the rectangle class, it will inherit the behaviours of the rectangle class, specifically the methods that set the width and the height, and this will create big problems in the long run. Let's see why.
## The Square and Rectangle example
Let's create a rectangle class that has width and height and methods that set and get those values. I don't use properties here, to make the example clearer, as properties in C# are actually syntactic sugar for getters and setters.
```csharp
public class Rectangle
{
protected float Height;
protected float Width;
public virtual void SetHeight(float height) => Height = height;
public virtual void SetWidth(float width) => Width = width;
public float GetHeight() => Height;
public float GetWidth() => Width;
}
```
Let's also create the Square class as a child of the rectangle class. To avoid the problem of the square having different height and width, I will set both of those whenever one of them is set.
```csharp
public class Square : Rectangle
{
public override void SetHeight(float height)
{
base.SetHeight(height);
Width = height;
}
public override void SetWidth(float width)
{
base.SetWidth(width);
Height = width;
}
}
```
This might seem to solve any problem we might have. Now our square will always have its width equal to its height. Let's suppose that these classes represent boards that the player character has in a game and uses to build a fence. So the following code behaves as expected:
```csharp
Rectangle board = new Rectangle();
board.SetWidth(2);
board.SetHeight(6);
Square board2 = new Square();
board2.SetWidth(2);
board2.SetHeight(6);
```
Our first board will have a width of 2 units and a height of 6 units. Our second board will have both width and height at 6 units, so it will remain a square.
After some time, let's say six months or so, we get a new requirement: We need a way to show the player how much area his fence will cover.
That's easy: we create a class that calculates the areas of different shapes. Among the methods of this class, there is a method that calculates the rectangle area representing the board area; we can then multiply that number by the number of boards the player will use to calculate the fence area.
Here is the method that will calculate the board area:
```csharp
public class AreaMethods
{
public float BoardArea(Rectangle rectangle) => rectangle.GetWidth() * rectangle.GetHeight();
}
```
This also works fine. The method will calculate the area correctly for both rectangular and square boards.
After some more time, a new requirement is created. It would be nice to let the player know how much area his fence would cover if he could lower its height. That's also easy. If, for example, the player would like his fence to be half the height it was, then we, or the programmer responsible for this task, can do:
```csharp
var areaCalculator = new AreaMethods();
board.SetHeight(board.GetHeight() / 2);
float area = areaCalculator.BoardArea(board);
```
But this is a bug. What would actually happen: if the board is a rectangle, the calculation would be correct, since each board would keep the same width at half the height; but if the board is of type square, decreasing its height also decreases its width. This would calculate the area of a fence that is not only half the height of the previous fence, but also half the width.
Will we remember how we coded the rectangle and square classes after a year or so? Even worse, what if the programmer responsible for that calculation is not us but someone else? What will he do? He would have to go find and check the square implementation so that he can understand why the calculated area is sometimes half of what the correct result should be.
The reason we found ourselves in this situation, is because the child class (the square) has a different behaviour from the base class (the rectangle) in the same methods (the setters).
The square class cannot be used in the same method (the `BoardArea` method) as the rectangle class while our program keeps the same behaviour (by halving the height, the area is halved for the rectangle but quartered for the square). That's an LSP violation.
The only way to fix this problem now would be to add an if statement that checks the type of the board instance. If it is a rectangle, everything is calculated the same way; but if it is a square, after the calculation we have to multiply by the amount by which we divided the height.
Whoever sees this piece of code will have to wonder why the multiplication is there, and when someone wants to extend the rectangle class by creating another child class, for example one for 2:1 ratio rectangles, he had better remember to go to that piece of code and add another if statement that calculates the area correctly.
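The messy fix described above might look something like this sketch (`heightDivisor` is a hypothetical parameter carrying the factor by which the height was divided):

```csharp
// The messy fix: the area method now has to know about concrete subtypes.
public float BoardArea(Rectangle rectangle, float heightDivisor)
{
    float area = rectangle.GetWidth() * rectangle.GetHeight();
    if (rectangle is Square)
    {
        // The square's setter also shrank the width, so compensate here.
        area *= heightDivisor;
    }
    return area;
}
```

Every new Rectangle subtype with its own setter behaviour would need another branch in this method.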
## The INPC Interface example
Although an interface doesn't contain any behaviours, because the behaviours are defined in the classes that implement the interface, we can still violate the LSP. This can happen when two different classes that implement the interface change, in their implementations, how our program behaves.
That doesn't mean that all classes that implement a method from an interface should have the same code, obviously. There is a difference between what a method does and how it does it. A violation of the LSP occurs when the classes do different things with the same method, not when they do the same thing in a different way.
Here's an example. Suppose we have NPCs in a game, and those NPCs can talk to the player character or attack him. Each NPC can do different things when its `Talk` method is called; for example, it can give a quest to the player or open the inventory to buy and sell items. The same is true for the `Attack` method: one NPC might get offended by something the player says and attack in melee, while another might start using ranged attacks when the player character tries to steal from him.
That's easy, we create an interface that all NPCs must implement:
```csharp
public interface INpc
{
void Talk(Character target);
void Attack(Character target);
}
```
After a while, we have to create a new NPC: a magical tree that can only give quests to the player and can never attack him. So we decide to implement the `INpc` interface and keep the `Attack` method empty. After all, that is the requirement: the magical tree doesn't attack.
Time passes and after a couple of years, we are near the completion of our game. The player has finally found the cursed book that he was looking for and he is supposed to return it to the temple. He is warned of course that he must never open the book or he is doomed.
Well, we know that there is a big chance that the player will try to open the book, so we decide that if he does this, we will freeze the player controls and all NPCs around him will start hacking till he is dead. He shouldn't have done that, so it would be time to reload from the last save.
Did you notice the bug?
If the player opens the cursed book in front of the magical tree, when there is no other NPC around, the player controls freeze and the system that is responsible for the attacks will keep calling the `INpc.Attack` method, but this method is empty for the tree. Our game will freeze. The player won't be able to move and won't die because the `Attack` method has a different behaviour from what is expected. It doesn't attack the player, it does nothing.
The solution here is still a mess. Even if we find the bug, and not our players (which would be difficult, as we would have to remember after a couple of years that there is an NPC whose attack is not actually an attack, but does nothing), we would still have to create dependencies from the system that is responsible for controlling the attacks to a particular implementation, the magical tree.
Suddenly, a system that was decoupled from the rest of our program and depended only on an abstraction, the `INpc` interface, will have if statements that check the type of each `INpc` instance and, if it is of type MagicalTree, do something else, for example ignore the attack. Our system now has a dependency on the MagicalTree class. Any change to this class can affect an unrelated system: the one responsible for attacking through the `INpc` interface.
By keeping `INpc.Attack` empty, we violated the LSP. A better way would have been to create two interfaces: for example, an `INpcTalk` interface with all the methods responsible for talking and an `INpcAttack` interface with all the methods responsible for attacking. That way, when creating the MagicalTree class, we would have made it implement only the `INpcTalk` interface, and the system responsible for attacking would use only the `INpcAttack` interface. That is actually part of the next principle, the interface segregation principle, covered in the next post.
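A sketch of that split (assuming the `Character` type from the earlier `INpc` interface):

```csharp
public interface INpcTalk
{
    void Talk(Character target);
}

public interface INpcAttack
{
    void Attack(Character target);
}

// The tree only talks; the attack system never receives it,
// because that system depends only on INpcAttack.
public class MagicalTree : INpcTalk
{
    public void Talk(Character target)
    {
        // give a quest to the player
    }
}
```

An NPC that both talks and attacks simply implements both interfaces.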
## Late Violation of the OCP
By looking at both of the above examples, we can see that a violation of the LSP is actually a violation of [the open closed principle](https://giannisakritidis.com/blog/The-Open-Closed-Principle/) waiting to happen.
The consequences of violating the LSP only manifest themselves far into the future, maybe even months or years after the violation occurred. At that point, any fix will be a messy one: we would have to dig into code that was written long ago and add checks for specific types.
By not following the LSP, our subtypes have a different behaviour that might not affect us the moment we write them, but eventually we will have to write code that tries to work around that behaviour. This can only happen by creating dependencies on pieces of code unrelated to the one we are writing now, or even worse, by creating dependencies on implementations in systems that up to that point depended only on abstractions.
## Conclusion
The LSP tells us that the behaviour of a program should not change if we use a subtype in place of the base type. A violation happens when a subtype behaves differently from its base type. The mechanism for creating subtypes in C# is inheritance, so a method in a child class, when called, should not do something different from what is expected from the base class.
In the previous examples, the square setters didn't only change the relevant dimension as was happening in the rectangle class and the MagicalTree could not attack, as was expected from any other class that implemented the `INpc` interface.
This doesn't mean that child classes, or classes that implement interfaces, should have the same code. There is a difference between what a method does and how it does it. If we want to conform to the LSP, the methods in our subtypes should do the same thing, but they may do it in a different way.
Empty inherited methods are usually a sign of an LSP violation. Inherited methods that throw unconditional exceptions are always an LSP violation, unless the base class also throws an unconditional exception (but why have a method in a class that only throws an exception?).
By being careful to write code that does not violate the LSP, we can save our future selves a lot of headaches, and that is the point of code architecture: investing our time now so that we save more time far into the future.
---
title: "Everything you ever wanted to know about computer vision"
summary: ""
coverURL: "https://miro.medium.com/v2/resize:fit:788/1*8gmgaAkFdI-9OHY5cA93xQ.png"
time: "2023-12-25"
tags: ["computer-vision"]
noPrompt: false
pin: false
allowShare: true
---
> This post is used to preview the display effect of long articles. The author is Ilija Mihajlovic, and it is reproduced from [here](https://towardsdatascience.com/everything-you-ever-wanted-to-know-about-computer-vision-heres-a-look-why-it-s-so-awesome-e8a58dfb641e).
One of the most powerful and compelling types of AI is computer vision, which you've almost surely experienced in any number of ways without even knowing. Here's a look at what it is, how it works, and why it's so awesome (and is only going to get better).
Computer vision is the field of computer science that focuses on replicating parts of the complexity of the human vision system and enabling computers to identify and process objects in images and videos in the same way that humans do. Until recently, computer vision only worked in limited capacity.
Thanks to advances in artificial intelligence and innovations in deep learning and neural networks, the field has been able to take great leaps in recent years and has been able to surpass humans in some tasks related to detecting and labeling objects.
One of the driving factors behind the growth of computer vision is the amount of data we generate today that is then used to train and make computer vision better.
![YOLO Multi-Object Detection And Classification. Photo by the author](https://miro.medium.com/v2/resize:fit:788/1*8gmgaAkFdI-9OHY5cA93xQ.png)
Along with a tremendous amount of visual data (_more than 3 billion images are shared online every day_), the computing power required to analyze the data is now accessible. As the field of computer vision has grown with new hardware and algorithms, so have the accuracy rates for object identification. In less than a decade, today's systems have gone from 50 percent accuracy to 99 percent, making them more accurate than humans at quickly reacting to visual inputs.
Early experiments in computer vision started in the 1950s, and it was first put to use commercially to distinguish between typed and handwritten text in the 1970s. Today, the applications for computer vision have grown exponentially.
> By 2022, the computer vision and hardware market is expected to reach $48.6 billion
## How Does Computer Vision Work?
One of the major open questions in both Neuroscience and Machine Learning is: How exactly do our brains work, and how can we approximate that with our own algorithms? The reality is that there are very few working and comprehensive theories of brain computation; so despite the fact that Neural Nets are supposed to “mimic the way the brain works,” nobody is quite sure if that's actually true.
The same paradox holds true for computer vision: since we're not decided on how the brain and eyes process images, it's difficult to say how well the algorithms used in production approximate our own internal mental processes.
On a certain level, computer vision is all about pattern recognition. So one way to train a computer to understand visual data is to feed it images, lots of images (thousands, millions if possible) that have been labeled, and then subject those to various software techniques, or algorithms, that allow the computer to hunt down patterns in all the elements that relate to those labels.
So, for example, if you feed a computer a million images of cats (we all love them 😄😹), it will subject them all to algorithms that let it analyze the colors in the photos, the shapes, the distances between the shapes, where objects border each other, and so on, so that it identifies a profile of what “cat” means. When it's finished, the computer will (in theory) be able to use its experience, if fed other unlabeled images, to find the ones that are of cats.
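As a toy illustration of this "hunt down patterns" idea, here is a minimal nearest-neighbour classifier over flattened pixel vectors (the 2×2 "images" and labels below are invented for the sketch; real systems use far larger images and far richer features):

```python
def flatten(image):
    """Flatten a 2-D grid of pixel values into one feature vector."""
    return [p for row in image for p in row]

def distance(a, b):
    """Squared Euclidean distance between two pixel vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(image, labeled_examples):
    """Label an image with the label of its nearest labeled example."""
    vec = flatten(image)
    return min(labeled_examples, key=lambda ex: distance(vec, flatten(ex[0])))[1]

# Hypothetical 2x2 grayscale "images": bright ones labeled "cat", dark "not cat".
examples = [([[200, 210], [190, 205]], "cat"),
            ([[10, 20], [15, 5]], "not cat")]
print(classify([[180, 195], [200, 210]], examples))  # closest to the bright example
```

Even this crude scheme captures the principle: labeled examples define regions of pixel space, and new images are assigned to whichever region they fall nearest to.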
Let's leave our fluffy cat ==friends== for a moment on the side and let's get more technical 🤔😹. Below is a simple illustration of the grayscale image buffer which stores our image of Abraham Lincoln. Each pixel's brightness is represented by a single 8-bit number, whose range is from 0 (black) to 255 (white):
![Pixel data diagram. At left, our image of Lincoln; at center, the pixels labeled with numbers from 0–255, representing their brightness; and at right, these numbers by themselves. Photo by [Nguyen Dang Hoang Nhu](https://unsplash.com/@nguyendhn) on Unsplash](https://miro.medium.com/v2/resize:fit:788/0*CI5wgSszZnpHu5Ip.png)
```plaintext
{ 157, 153, 174, 168, 150, 152, 129, 151, 172, 161, 155, 156, 155, 182, 163, 74, 75, 62, 33, 17, 110, 210, 180, 154, 180, 180, 50, 14, 34, 6, 10, 33, 48, 106, 159, 181, 206, 109, 5, 124, 131, 111, 120, 204, 166, 15, 56, 180,194, 68, 137, 251, 237, 239, 239, 228, 227, 87, 71, 201, 172, 105, 207, 233, 233, 214, 220, 239, 228, 98, 74, 206, 188, 88, 179, 209, 185, 215, 211, 158, 139, 75, 20, 169, 189, 97, 165, 84, 10, 168, 134, 11, 31, 62, 22, 148, 199, 168, 191, 193, 158, 227, 178, 143, 182, 106, 36, 190, 205, 174, 155, 252, 236, 231, 149, 178, 228, 43, 95, 234, 190, 216, 116, 149, 236, 187, 86, 150, 79, 38, 218, 241, 190, 224, 147, 108, 227, 210, 127, 102, 36, 101, 255, 224, 190, 214, 173, 66, 103, 143, 96, 50, 2, 109, 249, 215, 187, 196, 235, 75, 1, 81, 47, 0, 6, 217, 255, 211, 183, 202, 237, 145, 0, 0, 12, 108, 200, 138, 243, 236, 195, 206, 123, 207, 177, 121, 123, 200, 175, 13, 96, 218 };
```
This way of storing image data may run counter to your expectations, since the data certainly *appears* to be two-dimensional when it is displayed. Yet, this is the case, since computer memory consists simply of an ever-increasing linear list of address spaces.
![How pixels are stored in memory. Photo by the author](https://miro.medium.com/v2/resize:fit:788/1*8Alt23ilo9Hiu7XolArdeQ.png)
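That row-major layout can be expressed directly: pixel `(row, col)` of a width-`w` image lives at offset `row * w + col` in the flat buffer. A sketch with made-up dimensions and values:

```python
def pixel_at(buffer, width, row, col):
    """Read pixel (row, col) from a row-major flat grayscale buffer."""
    return buffer[row * width + col]

# A hypothetical 3x4 image stored as one linear list of 12 brightness values.
width, height = 4, 3
buf = [0, 50, 100, 150,
       25, 75, 125, 175,
       10, 60, 110, 160]
print(pixel_at(buf, width, 1, 2))  # second row, third column -> 125
```

The two-dimensional appearance on screen is purely a display convention; the memory itself is one long list.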
Let's go back to the first picture again and imagine adding color. Now things start to get more complicated. Computers usually read color as a series of 3 values — red, green, and blue (RGB) — on that same 0–255 scale. Now, each pixel actually has 3 values for the computer to store, in addition to its position. If we were to colorize President Lincoln, that would lead to 12 x 16 x 3 values, or 576 numbers.
![Photo by the author](https://miro.medium.com/v2/resize:fit:718/1*7L75EhL3cHAlsqt-umHABw.jpeg)
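The arithmetic is easy to check: with 3 channel values per pixel, the 12 × 16 image needs 12 × 16 × 3 = 576 numbers. A small sketch, assuming interleaved RGB in row-major order (one common layout among several):

```python
def rgb_offset(width, row, col):
    """Offset of the R value of pixel (row, col) in an interleaved RGB buffer."""
    return (row * width + col) * 3

width, height = 12, 16
values_needed = width * height * 3
print(values_needed)         # 576
print(rgb_offset(12, 0, 1))  # the second pixel's R value starts at offset 3
```

The G and B values of a pixel then sit at the next two offsets after its R value.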
That's a lot of memory to require for one image, and a lot of pixels for an algorithm to iterate over. But to train a model with meaningful accuracy, especially when you're talking about Deep Learning, you'd usually need tens of thousands of images, and the more the merrier.
## The Evolution Of Computer Vision
Before the advent of deep learning, the tasks that computer vision could perform were very limited and required a lot of manual coding and effort by developers and human operators. For instance, if you wanted to perform facial recognition, you would have to perform the following steps:
- **Create a database**: You had to capture individual images of all the subjects you wanted to track in a specific format.
- **Annotate images**: Then, for every individual image, you would have to enter several key data points, such as the distance between the eyes, the width of the nose bridge, the distance between upper lip and nose, and dozens of other measurements that define the unique characteristics of each person.
- **Capture new images**: Next, you would have to capture new images, whether from photographs or video content. And then you had to go through the measurement process again, marking the key points on the image. You also had to factor in the angle the image was taken.
After all this manual work, the application would finally be able to compare the measurements in the new image with the ones stored in its database and tell you whether it corresponded with any of the profiles it was tracking. In fact, there was very little automation involved and most of the work was being done manually. And the error margin was still large.
Machine learning provided a different approach to solving computer vision problems. With machine learning, developers no longer needed to manually code every single rule into their vision applications. Instead they programmed “features,” smaller applications that could detect specific patterns in images. They then used a statistical learning algorithm such as linear regression, logistic regression, decision trees or support vector machines (SVM) to detect patterns and classify images and detect objects in them.
Machine learning helped solve many problems that were historically challenging for classical software development tools and approaches. For instance, years ago, machine learning engineers were able to create software that could predict breast cancer survival windows better than human experts. However, building the features of the software required the efforts of dozens of engineers and breast cancer experts and took a lot of time to develop.
Deep learning provided a fundamentally different approach to doing machine learning. Deep learning relies on neural networks, a general-purpose function that can solve any problem representable through examples. When you provide a neural network with many labeled examples of a specific kind of data, it'll be able to extract common patterns between those examples and transform them into a mathematical equation that will help classify future pieces of information.
For instance, creating a facial recognition application with deep learning only requires you to develop or choose a preconstructed algorithm and train it with examples of the faces of the people it must detect. Given enough examples (lots of examples), the neural network will be able to detect faces without further instructions on features or measurements.
Deep learning is a very effective method to do computer vision. In most cases, creating a good deep learning algorithm comes down to gathering a large amount of labeled training data and tuning the parameters such as the type and number of layers of neural networks and training epochs. Compared to previous types of machine learning, deep learning is both easier and faster to develop and deploy.
Most of current computer vision applications such as cancer detection, self-driving cars and facial recognition make use of deep learning. Deep learning and deep neural networks have moved from the conceptual realm into practical applications thanks to availability and advances in hardware and cloud computing resources.
## How Long Does It Take To Decipher An Image?
In short, not much. That's the key to why computer vision is so thrilling: whereas in the past even supercomputers might take days or weeks or even months to chug through all the calculations required, today's ultra-fast chips and related hardware, along with a speedy, reliable internet and cloud networks, make the process lightning fast. One crucial factor has been the willingness of many of the big companies doing AI research, notably Facebook, Google, IBM, and Microsoft, to share their work by open-sourcing some of their machine learning work.
This allows others to build on their work rather than starting from scratch. As a result, the AI industry is cooking along, and experiments that not long ago took weeks to run might take 15 minutes today. And for many real-world applications of computer vision, this process all happens continuously in microseconds, so that a computer today is able to be what scientists call “situationally aware.”
## Applications Of Computer Vision
Computer vision is one of the areas in Machine Learning where core concepts are already being integrated into major products that we use every day.
## CV In Self-Driving Cars
But it's not just tech companies that are leveraging Machine Learning for image applications.
Computer vision enables self-driving cars to make sense of their surroundings. Cameras capture video from different angles around the car and feed it to computer vision software, which then processes the images in real-time to find the extremities of roads, read traffic signs, detect other cars, objects and pedestrians. The self-driving car can then steer its way on streets and highways, avoid hitting obstacles, and (hopefully) safely drive its passengers to their destination.
## CV In Facial Recognition
Computer vision also plays an important role in facial recognition applications, the technology that enables computers to match images of people's faces to their identities. Computer vision algorithms detect facial features in images and compare them with databases of face profiles. Consumer devices use facial recognition to authenticate the identities of their owners. Social media apps use facial recognition to detect and tag users. Law enforcement agencies also rely on facial recognition technology to identify criminals in video feeds.
## CV In Augmented Reality & Mixed Reality
Computer vision also plays an important role in augmented and mixed reality, the technology that enables computing devices such as smartphones, tablets and smart glasses to overlay and embed virtual objects on real-world imagery. Using computer vision, AR gear detects objects in the real world in order to determine the locations on a device's display to place a virtual object. For instance, computer vision algorithms can help AR applications detect planes such as tabletops, walls and floors, a very important part of establishing depth and dimensions and placing virtual objects in the physical world.
## CV In Healthcare
Computer vision has also been an important part of advances in health-tech. Computer vision algorithms can help automate tasks such as detecting cancerous moles in skin images or finding symptoms in x-ray and MRI scans.
## Challenges of Computer Vision
Helping computers to see turns out to be very hard.
Inventing a machine that sees like we do is a deceptively difficult task, not just because it's hard to make computers do it, but because we're not entirely sure how human vision works in the first place.
Studying biological vision requires an understanding of the perception organs like the eyes, as well as the interpretation of the perception within the brain. Much progress has been made, both in charting the process and in terms of discovering the tricks and shortcuts used by the system, although like any study that involves the brain, there is a long way to go.
![Credit For The Image Goes To: https://twitter.com/MikeTamir](https://miro.medium.com/v2/resize:fit:788/1*z89KwWbF59XXrsXXQCECPA.jpeg)
Many popular computer vision applications involve trying to recognize things in photographs; for example:
- **Object Classification**: What broad category of object is in this photograph?
- **Object Identification**: Which type of a given object is in this photograph?
- **Object Verification**: Is the object in the photograph?
- **Object Detection**: Where are the objects in the photograph?
- **Object Landmark Detection**: What are the key points for the object in the photograph?
- **Object Segmentation**: What pixels belong to the object in the image?
- **Object Recognition**: What objects are in this photograph and where are they?
Outside of just recognition, other methods of analysis include:
- **Video motion analysis** uses computer vision to estimate the velocity of objects in a video, or the camera itself.
- In **image segmentation**, algorithms partition images into multiple sets of views.
- **Scene reconstruction** creates a 3D model of a scene inputted through images or video.
- In **image restoration**, noise such as blurring is removed from photos using Machine Learning based filters.
Any other application that involves understanding pixels through software can safely be labeled as computer vision.
## Conclusion
Despite the recent progress, which has been impressive, we're still not even close to solving computer vision. However, there are already multiple healthcare institutions and enterprises that have found ways to apply CV systems, powered by CNNs, to real-world problems. And this trend is not likely to stop anytime soon.