
CRI-O (Container Runtime Interface)

CRI-O is to Kubernetes what the JRE is to Java.

1. Kubernetes contacts the kubelet to launch a POD.
POD: A pod is the basic building block of Kubernetes, consisting of one or more containers sharing the same IPC, NET, and PID namespaces and living in the same cgroup.

2. The kubelet forwards the request to the CRI-O daemon via the CRI to launch the new POD.

3. CRI-O uses the containers/image library to pull the image from a container registry.

4. The downloaded image is unpacked into the container's root filesystem (just like installing an OS onto a disk).

5. After the rootfs is created, CRI-O generates an OCI (Open Container Initiative) runtime specification JSON file describing how to run the container.

6. Each container is watched over by a “conmon” process, which handles monitoring, logging, and the PTY for the container.

7. Networking for the pod is set up through the CNI (Container Network Interface).

Now, what is CNI and why do we need it?

 CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted.

Application containers on Linux are a rapidly evolving area, and within this area networking is not well addressed as it is highly environment-specific.
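For a taste of what a CNI plugin consumes, here is a minimal sketch of a network configuration for the standard bridge plugin (the network name and subnet here are made-up values):

{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}

The runtime hands a file like this to the plugin binary, which wires up the pod's network namespace and releases the allocated resources when the pod is deleted.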

Sources:
https://github.com/containernetworking/cni
https://cri-o.io/

After reading the above, the following makes more sense.
https://cri-o.io/assets/images/architecture.png

Some More Aspects of Kubernetes!!

  1. ReplicationController: it is a “kind” of Kubernetes object responsible for maintaining the desired number of replicas for a service; its definition covers the replica count, container image, container port, and service name (see the sketch just below this list).
  2. Service: it is responsible for making your cluster talk to the outside world; it could be a LoadBalancer or a proxy service.
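As a rough sketch of what a ReplicationController manifest looks like (the nginx name, image, and port here are placeholder values):

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3            # number of replications to maintain
  selector:
    app: nginx           # pods carrying this label are managed by the controller
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx      # container name
        image: nginx     # container image
        ports:
        - containerPort: 80   # container port

The selector is what ties the controller to the pods it keeps alive.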

Now something a little more about Networking!!

  1. ClusterIP -> it is the most basic type of Service: an internal IP is assigned to the Service so other services can use it. The outside world can talk to your POD, but it will have to go through some sort of PROXY!!
  2. TargetPort -> it allows you to separate the port on which you want to expose your Service from the port the application is actually listening on inside the container!!
  3. NodePort -> it's a bit special and could be a little scary as well, but just hang tight: a NodePort makes the Service available on each node via a static port. So assume you have three nodes, IP1, IP2, IP3, each running the same Service; you assign some port number, say 8081, and the Service becomes reachable on that same port on every node's IP (see the manifest sketch after this list), something like:

IP1:8081, IP2:8081, IP3:8081

4. ExternalIP: Another approach to making a service available outside of the cluster is via External IP addresses.

5. LoadBalancer: When running in the cloud, such as EC2 or Azure, it’s possible to configure and assign a Public IP address issued via the cloud provider. This will be issued via a Load Balancer such as ELB. This allows additional public IP addresses to be allocated to a Kubernetes cluster without interacting directly with the cloud provider.
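Putting the port pieces together, here is a sketch of a NodePort Service manifest (the label and port values are made up; note that nodePort must fall in the cluster's NodePort range, 30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort            # change to LoadBalancer on a cloud provider to request a public IP
  selector:
    app: my-app             # pods carrying this label receive the traffic
  ports:
  - port: 8080              # the Service's own (ClusterIP) port
    targetPort: 80          # the port the application listens on inside the container
    nodePort: 30081         # static port opened on every node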

For more, one can follow the following post on LinkedIn!

https://www.linkedin.com/posts/dr-rabi-prasad-padhy-%E2%98%81%E2%98%81%E2%98%81-396804110_service-types-in-kubernetes-when-we-are-activity-6796285869864169472-gXF1

Understanding YAML in-Depth!!

Basically, there are three parts to every Kubernetes YAML file.

  1. metadata (we define it)
  2. spec (the specification; we define that too)
  3. status (that depends on the Kubernetes brain, which is “etcd”!)

The rest is apiVersion and kind: apiVersion defines which API version we are going to use, and kind defines the type of object (Deployment, Pod, etc.) we are going to create here!

When you use the Kubernetes API to create the object (either directly or via kubectl), that API request must include that information as JSON in the request body. Most often, you provide the information to kubectl in a .yaml file. kubectl converts the information to JSON when making the API request.

Source: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/

Metadata: it helps to uniquely identify an object, including a name string, UID, and optional namespace.

spec – the state you desire for the object.
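Putting those parts together, a minimal sketch of an object definition (the names are hypothetical; the status part is filled in by Kubernetes itself):

apiVersion: v1        # which API version to use
kind: Pod             # the type of object to create
metadata:
  name: my-pod        # unique name; namespace is optional, the UID is server-assigned
spec:                 # the state you desire for the object
  containers:
  - name: app
    image: nginx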

to be continued –> https://cloudplex.io/tutorial/how-to-write-yaml-files-for-kubernetes/

Run Nginx in Kubernetes (Deployment and Scaling Replicas) Part-2

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Take the above YAML file as an example.

Now run the following command to create the Deployment.

kubectl create -f nginx-deploy.yaml

Check whether your pods have been created:

khangura@metal-machine:~$ kubectl get pod

NAME                                READY   STATUS    RESTARTS   AGE
hello-minikube-6ddfcc9757-k7qqv     1/1     Running   0          24h
nginx-deployment-5d59d67564-57sk7   1/1     Running   0          2m3s
nginx-deployment-5d59d67564-6d9t8   1/1     Running   0          2m3s
nginx-deployment-5d59d67564-bpsrj   1/1     Running   0          2m3s

We can always update the Deployment, for example to increase/scale the number of replicas.

Use the following to update the Deployment.

kubectl edit deployment nginx-deployment # opens the Deployment in our basic editor
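If all you want to change is the replica count, a non-interactive alternative is:

kubectl scale deployment nginx-deployment --replicas=5
kubectl get deployment nginx-deployment # verify the new replica count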

Kubernetes Basics (Pod Creation and Understanding)

Create a Cluster Using MiniKube:

https://minikube.sigs.k8s.io/docs/start/

Start your cluster:

$ minikube start # start your Kubernetes cluster

$ kubectl get po -A # list all pods across all namespaces of your cluster

Get to know your cluster.

(base) khangura@metal-machine:~$ kubectl get po -A
NAMESPACE              NAME                                        READY   STATUS    RESTARTS   AGE
kube-system            coredns-74ff55c5b-zxngm                     1/1     Running   1          66d
kube-system            etcd-minikube                               1/1     Running   1          66d
kube-system            kube-apiserver-minikube                     1/1     Running   4          66d
kube-system            kube-controller-manager-minikube            1/1     Running   1          66d
kube-system            kube-proxy-dgvds                            1/1     Running   1          66d
kube-system            kube-scheduler-minikube                     1/1     Running   1          66d
kube-system            storage-provisioner                         1/1     Running   14         66d
kubernetes-dashboard   dashboard-metrics-scraper-c95fcf479-th7gh   1/1     Running   1          66d
kubernetes-dashboard   kubernetes-dashboard-6cff4c7c4f-pvs5r       1/1     Running   10         66d

Remember, each of these names has its own significance.

etcd-minikube – etcd is the cluster's configuration-management system; it stores all cluster state.

kube-apiserver-minikube – the API server through which clients interact with your cluster.

kube-controller-manager-minikube – runs the controllers that keep the cluster in its desired state.

There is much more depth to each component; follow the docs for a proper deep dive. Right now, remember that each of these pods plays a significant role in the structure of the cluster.

$ minikube dashboard # start the minikube dashboard and analyse how things are going

$ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4

Create a sample deployment. Remember that you can't expose Pods directly; you need to expose the deployment on an external port.

$ kubectl expose deployment hello-minikube --type=NodePort --port=8080

This links it with an external port.

You can check the status of your deployment:

$ kubectl get services hello-minikube

Use kubectl to forward your port.

$ kubectl port-forward service/hello-minikube 7080:8080
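With the port-forward running, the service can be tested locally; the echoserver image simply echoes the HTTP request back:

$ curl http://localhost:7080/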

Functionally Functions in Pythonistic Python(s) by Pythonista! Part-1

  1. Call in Python:
    Yes, I meant to say __call__()

Every function object is invoked through its __call__() method, a “dunder” method that exists on every callable object in Python.

Example:

>>> import socket
>>> def resolve(host):
...     return socket.gethostbyname(host)
...
>>> resolve
<function resolve at 0x...>
>>> resolve('google.com')
'172.217.24.238'
>>> resolve('www.google.com')
'172.217.166.228'
>>> resolve('gndec.ac.in')
'202.164.53.112'
>>> resolve.__call__('gndec.ac.in')
'202.164.53.112'

2. Implement a Local Cache for a Class.

Any function/object/variable whose name starts with _ (underscore) is treated, by convention, as local/private to that class; Python does not enforce this, so it can still be reached from outside if you really need to.

import socket

class Resolver:
    def __init__(self):
        self._cache = {}

    def __call__(self, host):
        if host not in self._cache:
            self._cache[host] = socket.gethostbyname(host)
        return self._cache[host]

>>> resolve = Resolver()
>>> resolve('sixty-north.com')
'93.93.131.30'
>>> resolve.__call__('sixty-north.com')
'93.93.131.30'
>>> resolve._cache
{'sixty-north.com': '93.93.131.30'}
>>> resolve('pluralsight.com')
'54.148.56.39'
>>> resolve._cache
{'sixty-north.com': '93.93.131.30', 'pluralsight.com': '54.148.56.39'}

3. Playing with “n” Number of Positional Args

def function(*args)

Imagine you want to calculate the volume of a shape, and it could be a square, cube, tesseract, or anything available even in the Marvel universe.

>>> def hypervolume(*args):
...     print(args)
...     print(type(args))
...
>>> hypervolume(3, 4)
(3, 4)
<class 'tuple'>
>>> hypervolume(3, 4, 5)
(3, 4, 5)
<class 'tuple'>
>>> def hypervolume(*lengths):
...     i = iter(lengths)
...     v = next(i)
...     for length in i:
...         v *= length
...     return v
...
>>> hypervolume(2, 4)
8
>>> hypervolume(2, 4, 6)
48
>>> hypervolume(2, 4, 6, 8)
384
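On Python 3.8+ the same reduction can be written with math.prod (a sketch, not from the original post; unlike the iter/next version, it returns 1 for zero arguments instead of raising StopIteration):

>>> from math import prod
>>> def hypervolume(*lengths):
...     return prod(lengths)
...
>>> hypervolume(2, 4, 6, 8)
384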

 4. Function Enclosing (Closures).

A function can return another function, just like any other object. The returned inner function remembers variables from the enclosing scope; such functions are called closures.

>>> def raise_to(exp):
...    def raise_to_exp(x):
...        return pow(x,exp)
...    return raise_to_exp
... 
>>> square = raise_to(2)
>>> square
<function raise_to_exp at 0x7f9d0f6da950>
>>> square(9)
81
>>> qube = raise_to(3)
>>> qube(3)
27
>>> qube(27)
19683
>>> 

The first call fixes a default value for the expression/object/function; any further call applies that captured value to your data. This is very useful when writing default behaviour for API calls or DB calls.
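A similar effect, baking a default value into a callable for later use, can be had with functools.partial; a small sketch (the power helper here is hypothetical, not from the original post):

>>> from functools import partial
>>> def power(x, exp):
...     return x ** exp
...
>>> square = partial(power, exp=2)
>>> square(9)
81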

Some Gitty GITs

When you don't want to type your username and password each time, or when you are not allowed to do so:

Your remote URL should look like this:

git@github.com:USERNAME/REPOSITORY.git
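If the repository was originally cloned over HTTPS, the existing remote can be switched to the SSH form (USERNAME/REPOSITORY are placeholders):

git remote set-url origin git@github.com:USERNAME/REPOSITORY.git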

You can see the config using:


nano .git/config

An SSH key can be generated like this:

ssh-keygen -t rsa -b 4096 -C "arshpreet.singh@myemail.com"

More about Git and adding an SSH key can be found here:
https://help.github.com/articles/connecting-to-github-with-ssh/

TPOT Python Example to Build Pipeline for AAPL

This is just a first quick-and-fast post.

TPOT research paper: https://arxiv.org/pdf/1702.01780.pdf


import numpy as np
import pandas as pd
from pandas_datareader import data as read_data
from tpot import TPOTClassifier
from sklearn.model_selection import train_test_split

# Pull AAPL price history from Yahoo Finance
apple_data = read_data.get_data_yahoo("AAPL")

# Build features from the opening price
df = pd.DataFrame(index=apple_data.index)
df['price'] = apple_data.Open
df['daily_returns'] = df['price'].pct_change().fillna(0.0001)
df['multiple_day_returns'] = df['price'].pct_change(3)
df['rolling_mean'] = df['daily_returns'].rolling(window=4, center=False).mean()
df['time_lagged'] = df['price'] - df['price'].shift(-2)

# Target: the sign of the daily return (did the price move up or down?)
df['direction'] = np.sign(df['daily_returns'])
Y = df['direction']
X = df[['price', 'daily_returns', 'multiple_day_returns', 'rolling_mean']].fillna(0.0001)

X_train, X_test, y_train, y_test = train_test_split(X, Y, train_size=0.75, test_size=0.25)

# Let TPOT search for a pipeline and export the winner as plain Python
tpot = TPOTClassifier(generations=50, population_size=50, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_aapl_pipeline.py')

The Python file it returned is below; this is real code one can use to create a trading strategy. TPOT helped select the algorithm and the values of its hyperparameters. Right now we have only provided 'price', 'daily_returns', 'multiple_day_returns', and 'rolling_mean' to predict the target; one can use more features and implement as per the requirement.


import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# NOTE: Make sure that the class is labeled 'target' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1).values
training_features, testing_features, training_target, testing_target = \
            train_test_split(features, tpot_data['target'].values, random_state=42)

# Score on the training set was:1.0
exported_pipeline = GradientBoostingClassifier(learning_rate=0.5, max_depth=7, max_features=0.7500000000000001, min_samples_leaf=11, min_samples_split=12, n_estimators=100, subsample=0.7500000000000001)

exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)

Run Parallel Commands on Your Linux System Using Python

This is a really simple Python script for when you have to run parallel commands on your system: all it does is open multiple terminal tabs and run a command in each tab as per your requirement.


#!/usr/bin/env python
import subprocess

command = 'ping 8.8.8.8'       # the command to run on every host
terminal = ['gnome-terminal']

# Build one --tab argument per host; each tab echoes the command,
# runs it over SSH, then waits for a keypress before closing.
for host in ('server1', 'server2'):
    terminal.extend(['--tab', '-e', '''
        bash -c '
            echo "%(host)s$ %(command)s"
            ssh -t %(host)s "%(command)s"
            read
        '
    ''' % locals()])

subprocess.call(terminal)

Common Regular Expressions

Find a line in multiple files (recursively):


grep -rnw '/home/ubuntu/workspace/tools-tpn-ops/' -e 'import os'

case insensitive: grep -i "Aditi" helloaditi

case-insensitive, whole-word match: grep -iw "is" helloaditi

match and show three lines after the match: grep -A 3 -i "am" helloaditi

exclude lines matching the given patterns: grep -v -e "arsh" -e "honey" helloaditi

find the byte offset of a word in a file: grep -o -b "aditi" helloaditi

find the line number of matching text: grep -n "sharma" helloaditi

read a specific line number: if you know the exact line number, then:

let's say you want to read line 84
head -84 a.txt | tail -1
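An equivalent one-liner with sed, printing only line 84:

sed -n '84p' a.txt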
