# my net house

WAHEGURU….!

## Graph-Algorithms in Python Part-1

Things required in the Graph base class:

1. Number of vertices.
2. Graph type (directed or undirected).
3. Method to find adjacent vertices.
4. Method to get in-degree.
5. Method to get edge weight.
6. Method to display the graph.
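Put together, the list above can be sketched as an abstract base class (a sketch; the method names are assumptions modeled on the linked repo, not its exact API):

```python
from abc import ABC, abstractmethod

class Graph(ABC):
    """Base class capturing the requirements listed above."""

    def __init__(self, numVertices, directed=False):
        self.numVertices = numVertices   # 1. number of vertices
        self.directed = directed         # 2. graph type

    @abstractmethod
    def add_edge(self, v1, v2, weight=1):
        pass

    @abstractmethod
    def get_adjacent_vertices(self, v):  # 3. adjacent vertices
        pass

    @abstractmethod
    def get_indegree(self, v):           # 4. in-degree
        pass

    @abstractmethod
    def get_edge_weight(self, v1, v2):   # 5. edge weight
        pass

    @abstractmethod
    def display(self):                   # 6. display the graph
        pass
```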

https://github.com/arshpreetsingh/GraphAlgos/blob/master/graph.py#L11

Node:

A single node in a graph represented by an adjacency set. Every node
has a vertex id, and each node is associated with a set of adjacent vertices.

Link for the Node code here: https://github.com/arshpreetsingh/GraphAlgos/blob/master/graph.py#L45

This class represents a graph as an adjacency set: the graph is a list of Nodes,
and each Node holds a set of adjacent vertices.
In this form the graph cannot represent weighted edges;
only unweighted edges can be represented.

Inherit from the Graph class, passing the number of vertices and the graph type.

1. Add the Nodes (one per vertex) in the __init__() method of the class.

2. Add a method to add edges, with some checks (a vertex id must not be negative, and must be less than the number of vertices):

```
if v1 >= self.numVertices or v2 >= self.numVertices or v1 < 0 or v2 < 0:
    raise ValueError("Vertices %d and %d are out of bounds" % (v1, v2))
```

3. For a specific vertex in the vertex list, get its adjacent vertices:

``self.vertex_list[v].get_adjacent_vertices()``

4. Get the in-degree of a vertex.

Count all the vertices that have an edge pointing to the specific vertex:

```
indegree = 0
for i in range(self.numVertices):
    if v in self.get_adjacent_vertices(i):
        indegree = indegree + 1

return indegree
```

5. Display the graph:

Iterate through the list of vertices (self.vertex_list), then iterate through each node's get_adjacent_vertices() to print its edges.

```
def display(self):
    for i in range(self.numVertices):
        for v in self.get_adjacent_vertices(i):
            print(i, "-->", v)
```

https://github.com/arshpreetsingh/GraphAlgos/blob/master/graph.py#L68

Represents a graph as an adjacency matrix. A cell in the matrix holds
a value when there exists an edge between the vertex represented by
the row number and the vertex represented by the column number.
Weighted graphs can hold values > 1 in the matrix cells;
a value of 0 in a cell indicates that there is no edge.

The rest of the methods are the same as in the adjacency-set graph; all operations work on the matrix instead of on set()s.
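A minimal matrix-backed graph following those rules might look like this (a sketch; the linked repo's exact implementation may differ):

```python
import numpy as np

class AdjacencyMatrixGraph:
    def __init__(self, numVertices, directed=False):
        self.numVertices = numVertices
        self.directed = directed
        # 0 means "no edge"; weighted graphs store values >= 1
        self.matrix = np.zeros((numVertices, numVertices), dtype=int)

    def add_edge(self, v1, v2, weight=1):
        if weight < 1:
            raise ValueError("An edge weight cannot be < 1")
        self.matrix[v1][v2] = weight
        if not self.directed:            # mirror the edge for undirected graphs
            self.matrix[v2][v1] = weight

    def get_adjacent_vertices(self, v):
        return [i for i in range(self.numVertices) if self.matrix[v][i] > 0]

    def get_indegree(self, v):
        # count incoming edges by scanning column v
        return sum(1 for i in range(self.numVertices) if self.matrix[i][v] > 0)
```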

## Some Courses to Look into

Mathematics for Machine Learning.

Quantum Physics Full Course.

Linear Algebra Full Course.

## Computational Geometry Course!

https://www.coursera.org/learn/computational-geometry

## Vanilla Implementation of Financial Risk Modeling in Python!

The overall plan:

1. Use historical data of stocks to calculate historical portfolio variance.
2. Use a factor-analysis model to calculate historical portfolio variance.
3. Set up scenarios using the factor-analysis model (stress testing!).
4. Calculate the worst-case scenario from historical data and the scenarios.
5. Compare VaR (Value at Risk) for these cases.

What is the six-step approach for a scenario-based risk model?

1. Create a basket of financial assets. (Each asset has uncertain "returns".)
2. Calculate the standard deviation of that basket of financial assets.
3. Find the systematic and idiosyncratic risk on each asset of the portfolio.
4. Study those risk factors and compare them with historical data.
5. Generate scenarios to find out how each asset may perform in the future.
6. Calculate worst-case outcomes.
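Step 2 above, the standard deviation of the basket, is just w · Cov(returns) · wᵀ; a small sketch with made-up return numbers:

```python
import numpy as np

# hypothetical daily returns for a 3-asset basket (rows = days)
returns = np.array([[ 0.01,  0.02, -0.01],
                    [ 0.00, -0.01,  0.02],
                    [ 0.02,  0.01,  0.00],
                    [-0.01,  0.00,  0.01]])
weights = np.array([1/3, 1/3, 1/3])          # equally weighted basket

cov = np.cov(returns, rowvar=False)          # covariance of asset returns
portfolio_var = weights @ cov @ weights.T    # w . Cov . w^T
portfolio_std = np.sqrt(portfolio_var)       # standard deviation of the basket
print(portfolio_std)
```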

Scrape data from Yahoo Finance for multiple stock symbols.

The asset symbols are as follows:

```["AAPL","ADBE","CVX","GOOG","IBM","MDLZ","MSFT","NFLX","ORCL","SBUX","^GSPC","VOOG"]

```

The code below downloads each stock's data and saves it to an individual CSV file.

```
import requests

def download_data(symbol_names, dir_path=""):
    for symbol in symbol_names:
        # Yahoo Finance CSV endpoint (base URL reconstructed; the
        # query string is from the original post)
        dataurl = ("https://query1.finance.yahoo.com/v7/finance/download/{symbol}"
                   "?period1=1283040000&period2=1598659200&interval=1d&events=history").format(symbol=symbol)
        data = requests.get(dataurl)
        with open(dir_path + symbol + ".csv", 'w') as f:
            for line in data.iter_lines():
                f.write(line.decode('utf-8') + '\n')
```

Now combine all the CSV files into one DataFrame.

```
import os
import pandas as pd

def combine_data(dir_path=""):
    data_frames = []
    files = os.listdir(dir_path)
    for file in files:
        data_frame = pd.read_csv(dir_path + file, index_col="Date")
        # keep the adjusted close and name the column after the symbol
        # (this line was garbled in the original and is reconstructed)
        data_frame = data_frame[["Adj Close"]].rename(columns={"Adj Close": file.split(".")[0]})
        data_frames.append(data_frame)

    result = pd.concat(data_frames, axis=1, sort=False)
    return result
```

Now we have a DataFrame of per-symbol prices, and the portfolio-assembly part is done.

Let's calculate historical risk. Understand carefully the following two lists: stockNames holds the stocks on which we want to calculate historical risk, and factorNames holds the factors against which we calculate it.

```
# stockNames is the symbol list above minus the factor symbols
stockNames = ["AAPL", "ADBE", "CVX", "GOOG", "IBM", "MDLZ", "MSFT", "NFLX", "ORCL", "SBUX"]
factorNames = ["^GSPC", "VOOG", "Intercept"]
```

Let's calculate stockReturns and factorReturns. Why?

Because that's all we need in order to find the historical risk on the financial assets.

The historical risk is the portfolio variance, w · Cov(returns) · wᵀ. Now the calculations:

```
# returns = daily percentage changes of the combined DataFrame,
# e.g. returns = result.pct_change().dropna()
stockReturns = returns[stockNames]
factorReturns = returns[factorNames]
weights = np.array([1.0/len(stockNames)] * len(stockNames))

historicalTotalRisk = np.dot(np.dot(weights, stockReturns.cov()), weights.T)
```

Now that we have calculated the historical variance of our portfolio, the next step is to perform factor analysis: systematic and idiosyncratic.

Factor-based model:

Decompose the model into systematic and idiosyncratic risks, and use this understanding for stress-testing scenarios.
That will be our scenario-based model.

Systematic risk: market-wide risk that affects all assets together.
Idiosyncratic risk: risk specific to an individual asset.

We have chosen these risk factors:
1. S&P 500: spread across the whole market --> systematic risk
2. VFISX: could affect individual stocks --> idiosyncratic risk

The assumption here is that if interest rates go up, stock investments go down,
so S&P 500 and VFISX are somewhat opposite to each other in terms of risk factors.

TotalVar = SystematicVar(p) + IdiosyncraticVar(p)

Risk-factor analysis: now let's express the "returns on every stock" w.r.t. the "factor returns" as a regression equation.

**This regression equation tells us how much a change in a risk factor will affect the returns of each stock.**

Residual = observed value - predicted value
e = y - ŷ
Alpha = stock-specific out-performance
Beta on factor F1
Beta on factor F2

```
import statsmodels.api as sm

xData = factorReturns

modelCoeffs = []
for oneStockName in stockNames:
    yData = stockReturns[oneStockName]
    model = sm.OLS(yData, xData)
    result = model.fit()
    modelCoeffRow = list(result.params)
    modelCoeffRow.append(np.std(result.resid, ddof=1))
    modelCoeffs.append(modelCoeffRow)
    print(result.summary())

# collect the per-stock coefficients into a DataFrame for the steps below
# (column names assumed so that they match the later code)
modelCoeffs = pd.DataFrame(modelCoeffs, columns=["B_SP", "B_FVX", "Alpha", "ResidVol"])
modelCoeffs["Names"] = stockNames
```

We have calculated the residual coefficients for each stock individually; using them, let's calculate the systematic and idiosyncratic risk of the portfolio.

systemicRisk = Weights x FactorBetaMatrix x FactorCovarianceMatrix x Transpose(BetaMatrix) x Transpose(Weights)

idiosyncraticRisk = sum(modelCoeffs["ResidVol"] * modelCoeffs["ResidVol"] * weights * weights)

factorModelTotalRisk = systemicRisk + idiosyncraticRisk

Systematic and idiosyncratic variance calculations:

```
factorCov = factorReturns[["VOOG","^GSPC"]].cov()
reconstructedCov = np.dot(np.dot(modelCoeffs[["B_FVX","B_SP"]], factorCov),modelCoeffs[["B_FVX","B_SP"]].T)
systemicRisk = np.dot(np.dot(weights,reconstructedCov),weights.T)
idiosyncraticRisk = sum(modelCoeffs["ResidVol"] * modelCoeffs["ResidVol"]* weights * weights)
factorModelTotalRisk = systemicRisk + idiosyncraticRisk
```

We have now calculated the factor-based risk as well. Let's move on to generating scenarios for each stock.

Scenario-based model:

Let's consider two scenarios:

1. The S&P 500 is at its lowest point (systematic scenario).
2. FVX is at its lowest point (idiosyncratic scenario).

We assume a 5% step change for the S&P 500 and a 2% step for FVX each day.

Let's use Python to generate fvxScenarios and spScenarios, which indicate how high and how low the S&P 500 and FVX could go in the future.

```
fvxScenarios = np.arange(min(returns["FVX"]), max(returns["FVX"]), 0.02)
spScenarios = np.arange(min(returns["^GSPC"]), max(returns["^GSPC"]), 0.05)
```

Let's test the scenarios against each individual stock.

```
scenarios = []
for oneFVXValue in fvxScenarios:
    for oneSPValue in spScenarios:
        oneScenario = [oneFVXValue, oneSPValue]
        for oneStockName in stockNames:
            alpha = float(modelCoeffs[modelCoeffs["Names"] == oneStockName]["Alpha"])
            beta_sp = float(modelCoeffs[modelCoeffs["Names"] == oneStockName]["B_SP"])
            beta_fvx = float(modelCoeffs[modelCoeffs["Names"] == oneStockName]["B_FVX"])
            oneStockPredictedReturn = alpha + beta_sp * oneSPValue + beta_fvx * oneFVXValue
            oneScenario.append(oneStockPredictedReturn)
        scenarios.append(oneScenario)
```

We have already obtained historical risk and factor-model risk; this time we will calculate scenario-based risk.

```
# name the columns so scenarios[stockNames] works (the first two are the factor values)
scenarios = pd.DataFrame(scenarios, columns=["FVX", "SP"] + stockNames)
scenariosCov = scenarios[stockNames].cov()
scenarioTotalRisk = np.dot(np.dot(weights, scenariosCov), weights.T)
```

So what have we done?

1. Calculated the historical relationships.
2. Calculated risk using a factor-based model.
3. Used the factor-based model to generate scenario-based models.

<img class="alignnone size-full wp-image-2483" src="https://arshpreetsingh.files.wordpress.com/2020/08/did-all.png?w=680" alt="did-all" width="1344" height="759" />

Calculating VaR:

P = the amount invested in the stocks

sigma = the variance of returns (from the historical / risk-factor / scenario-stress-test based model)

Z = the percentage of loss we can bear, expressed as the number of standard deviations away from the mean

<img class="alignnone size-full wp-image-2486" src="https://arshpreetsingh.files.wordpress.com/2020/08/thisone.png?w=680" alt="thisone" width="1343" height="758" />

The best way to express VaR is as follows:

<img class="alignnone size-full wp-image-2487" src="https://arshpreetsingh.files.wordpress.com/2020/08/best-way.png?w=680" alt="best-way" width="1361" height="787" />

**Calculations of VaR using Python**

```
from scipy.stats import norm
import math

def calculateVaR(risk, confLevel, principal=1, numMonths=1):
    vol = math.sqrt(risk)
    return abs(principal * norm.ppf(1 - confLevel, 0, 1) * vol * math.sqrt(numMonths))

print(calculateVaR(scenarioTotalRisk, 0.99))
print(calculateVaR(historicalTotalRisk, 0.99))
print(calculateVaR(factorModelTotalRisk, 0.99))
```

The thing that matters most is the estimation of volatility, which is sigma.

MultiPeriod VaR = VaR x sqrt(number of trading periods)
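That square-root-of-time scaling is a one-liner (hypothetical numbers):

```python
import math

one_day_var = 10_000      # hypothetical 1-day VaR in dollars
trading_periods = 21      # roughly one trading month

# MultiPeriod VaR = VaR x sqrt(number of trading periods)
multi_period_var = one_day_var * math.sqrt(trading_periods)
print(round(multi_period_var, 2))
```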

Advantages of VaR:

1. Helpful for modeling worst-case outcomes.
2. Sanctioned in regulations and risk accords.
3. Understood even by non-finance people.
4. Measured and reported objectively.
5. Easy to aggregate across assets to create an end-to-end risk metric.

Limitations of VaR:

1. Multi-period VaR assumes that the loss will be the same in each trading period, which is a very bad assumption to make; stick to single-period VaR wherever possible.
2. VaR depends on the standard deviation only:
   • Skewness and kurtosis are ignored.
   • VaR is only as good as the variance plugged into it.
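The "standard deviation only" limitation is easy to demonstrate: two return samples with equal variance but very different tails produce the same parametric VaR. A sketch using the same formula as calculateVaR above (the fat-tailed sample is synthetic):

```python
import math
import numpy as np
from scipy.stats import norm

def calculateVaR(risk, confLevel, principal=1, numMonths=1):
    vol = math.sqrt(risk)
    return abs(principal * norm.ppf(1 - confLevel, 0, 1) * vol * math.sqrt(numMonths))

rng = np.random.default_rng(0)
thin_tails = rng.normal(0, 0.02, 100_000)        # Gaussian returns
fat_tails = rng.standard_t(df=3, size=100_000)   # heavy-tailed returns
fat_tails *= 0.02 / fat_tails.std()              # rescale to the same std

# Same variance in, (nearly) same VaR out: the tail shape is invisible to the model.
var_thin = calculateVaR(thin_tails.var(), 0.99)
var_fat = calculateVaR(fat_tails.var(), 0.99)
print(round(var_thin, 4), round(var_fat, 4))
```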

## Functionally Functions in Pythonic Python(s) by a Pythonista! Part-2

6. Having a wrapper outside and inside.

The point of a function wrapper is to adjust your function's behaviour while making sure the wrapper itself can handle any combination of arguments.

```
>>> def escape_unicode(f):
...     def wrap(*args, **kwargs):
...         x = f(*args, **kwargs)
...         return ascii(x)
...     return wrap
...
>>> def northern_city():
...     return 'Tromsø'
...
>>> print(northern_city())
Tromsø
>>> @escape_unicode
... def northern_city():
...     return 'Tromsø'
...
>>> print(northern_city())
'Troms\xf8'
```

7. Using a class to create a wrapper/decorator.

Any instance attribute updated inside __call__() becomes an attribute of the callable the class is wrapped around.

```
>>> class CallCount:
...     def __init__(self, f):
...         self.f = f
...         self.count = 0
...     def __call__(self, *args, **kwargs):
...         self.count += 1
...         return self.f(*args, **kwargs)
...
>>> @CallCount
... def hello(name):
...     print('Hello, {}!'.format(name))
...
>>> hello('Fred')
Hello, Fred!
>>> hello('Wilma')
Hello, Wilma!
>>> hello('Betty')
Hello, Betty!
>>> hello('Barney')
Hello, Barney!
>>> hello.count
4
```

8. A wrapper created inside a class's __call__() can be used to turn the wrapper on and off.

```
>>> class Trace:
...     def __init__(self):
...         self.enabled = True
...     def __call__(self, f):
...         def wrap(*args, **kwargs):
...             if self.enabled:
...                 print('Calling {}'.format(f))
...             return f(*args, **kwargs)
...         return wrap
...
>>> tracer = Trace()
>>> @tracer
... @escape_unicode
... def norwegian_island_maker(name):
...     return name + 'øy'
...
>>> norwegian_island_maker('Llama')
Calling <function escape_unicode.<locals>.wrap at 0x103b22ee0>
'Llama\\xf8y'
>>> norwegian_island_maker('Python')
Calling <function escape_unicode.<locals>.wrap at 0x103b22ee0>
'Python\\xf8y'
>>> norwegian_island_maker('Troll')
Calling <function escape_unicode.<locals>.wrap at 0x103b22ee0>
'Troll\\xf8y'
>>> tracer.enabled = False
>>> norwegian_island_maker('Llama')
'Llama\\xf8y'
>>> norwegian_island_maker('Python')
'Python\\xf8y'
>>> norwegian_island_maker('Troll')
'Troll\\xf8y'
```

9. map() can accept multiple iterables if the function being mapped accepts the corresponding number of arguments.

```
>>> sizes = ['small', 'medium', 'large']
>>> colors = ['lavender', 'teal', 'burnt orange']
>>> animals = ['koala', 'platypus', 'salamander']
>>> def combine(size, color, animal):
...     return '{} {} {}'.format(size, color, animal)
...
>>> list(map(combine, sizes, colors, animals))
['small lavender koala', 'medium teal platypus', 'large burnt orange salamander']
```

10. List comprehensions can get more compact and complex, with multiple for loops and if clauses.

```
>>> values = [x / (x - y) for x in range(100) if x > 50 for y in range(100) if x - y != 0]
>>> values = [x / (x - y)
...           for x in range(100)
...           if x > 50
...           for y in range(100)
...           if x - y != 0]
>>> values = []
>>> for x in range(100):
...     if x > 50:
...         for y in range(100):
...             if x - y != 0:
...                 values.append(x / (x - y))
...
```
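Note that a list comprehension is evaluated eagerly, not lazily; if you actually want laziness, swap the brackets for parentheses to get a generator expression:

```python
# list comprehension: all one million squares are built immediately
squares_list = [x * x for x in range(1_000_000)]

# generator expression: nothing is computed until you iterate
squares_gen = (x * x for x in range(1_000_000))

print(next(squares_gen))   # 0, computed on demand
print(next(squares_gen))   # 1
print(len(squares_list))   # 1000000
```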

11. List comprehensions can be nested as well.

```
>>> vals = [[y * 3 for y in range(x)] for x in range(10)]
>>> outer = []
>>> for x in range(10):
...     inner = []
...     for y in range(x):
...         inner.append(y * 3)
...     outer.append(inner)
...
>>> vals
[[], [0], [0, 3], [0, 3, 6], [0, 3, 6, 9], [0, 3, 6, 9, 12], [0, 3, 6, 9, 12, 15],
 [0, 3, 6, 9, 12, 15, 18], [0, 3, 6, 9, 12, 15, 18, 21], [0, 3, 6, 9, 12, 15, 18, 21, 24]]
```

12. Some ideas for code introspection!

1. type() is itself of class type:
```
>>> i = 7
>>> repr(int)
"<class 'int'>"
>>> type(i) is int
True
>>> type(i)(78)
78
>>> type(type(i))
<class 'type'>
>>> i.__class__
<class 'int'>
```

2. isinstance() and issubclass() do the type checking; type itself is a subclass of object:

```
>>> issubclass(type, object)
True
>>> type(object)
<class 'type'>
>>> isinstance(i, int)
True
```

3. To check whether an object has a specific attribute:

```
>>> a = 42
>>> getattr(a, 'conjugate')
<built-in method conjugate of int object at 0x10ff2cfb0>
>>> callable(getattr(a, 'conjugate'))
True
>>> a.conjugate.__class__.__name__
'builtin_function_or_method'
>>> getattr(a, 'index')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'int' object has no attribute 'index'
>>> hasattr(a, 'bit_length')
True
>>> hasattr(a, 'index')
False
```

4. globals() and locals() are dicts that keep track of your global and local namespaces.

```
>>> globals()
{'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <class '_frozen_importlib.BuiltinImporter'>, '__spec__': None, '__annotations__': {}, '__builtins__': <module 'builtins' (built-in)>}
>>> a = 42
>>> globals()
{'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <class '_frozen_importlib.BuiltinImporter'>, '__spec__': None, '__annotations__': {}, '__builtins__': <module 'builtins' (built-in)>, 'a': 42}
>>> globals()['tau'] = 6.283185
>>> tau
6.283185
>>> tau / 2
3.1415925

>>> locals()
{'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <class '_frozen_importlib.BuiltinImporter'>, '__spec__': None, '__annotations__': {}, '__builtins__': <module 'builtins' (built-in)>}
>>> def report_scope(arg):
...     from pprint import pprint as pp
...     x = 496
...     pp(locals(), width=10)
...
>>> report_scope(42)
{'arg': 42,
 'pp': <function pprint at 0x10e70bee0>,
 'x': 496}
```

5. import inspect.

Just like Golang’s reflect package.

```
>>> def num_vowels(text: str) -> int:
...     return sum(1 if c.lower() in 'aeiou' else 0
...                for c in text)
...
>>> import inspect
>>> sig = inspect.signature(num_vowels)
>>> sig.parameters['text']
<Parameter "text: str">
>>> sig.parameters['text'].annotation
<class 'str'>
>>> sig
<Signature (text: str) -> int>
>>> sig.return_annotation
<class 'int'>
>>> num_vowels.__annotations__
{'text': <class 'str'>, 'return': <class 'int'>}
```

## Functionally Functions in Pythonic Python(s) by a Pythonista! Part-1

1. Call in Python:
Yes, I mean __call__().

Every function object is invoked through its __call__() method, a dunder defined on every callable object in Python.

Example:

```
>>> import socket
>>> def resolve(host):
...     return socket.gethostbyname(host)
...
>>> resolve
<function resolve at 0x...>
>>> resolve('gndec.ac.in')
'202.164.53.112'
>>> resolve.__call__('gndec.ac.in')
'202.164.53.112'
```

2. Implement a local cache for a class.

Any function/attribute/variable starting with _ (underscore) is treated as local/private to that class. Note that the leading underscore is only a convention; Python does not enforce privacy.

```
import socket

class Resolver:
    def __init__(self):
        self._cache = {}

    def __call__(self, host):
        if host not in self._cache:
            self._cache[host] = socket.gethostbyname(host)
        return self._cache[host]

>>> resolve = Resolver()
>>> resolve('sixty-north.com')
'93.93.131.30'
>>> resolve.__call__('sixty-north.com')
'93.93.131.30'
>>> resolve._cache
{'sixty-north.com': '93.93.131.30'}
>>> resolve('pluralsight.com')
'54.148.56.39'
>>> resolve._cache
{'sixty-north.com': '93.93.131.30', 'pluralsight.com': '54.148.56.39'}
```
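For comparison, the standard library's functools.lru_cache gives the same memoising behaviour without a hand-rolled class:

```python
import socket
from functools import lru_cache

@lru_cache(maxsize=None)    # unbounded cache, like the _cache dict above
def resolve(host):
    return socket.gethostbyname(host)
```

After a few calls, `resolve.cache_info()` reports the hits and misses, and `resolve.cache_clear()` empties the cache.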

3. Playing with "n" number of positional args:

def function(*args)

Imagine you want to calculate the volume of a shape, and it could be a square, a cube, a tesseract, or anything available even in the Marvel universe.

```
>>> def hypervolume(*args):
...     print(args)
...     print(type(args))
...
>>> hypervolume(3, 4)
(3, 4)
<class 'tuple'>
>>> hypervolume(3, 4, 5)
(3, 4, 5)
<class 'tuple'>
>>> def hypervolume(*lengths):
...     i = iter(lengths)
...     v = next(i)
...     for length in i:
...         v *= length
...     return v
...
>>> hypervolume(2, 4)
8
>>> hypervolume(2, 4, 6)
48
>>> hypervolume(2, 4, 6, 8)
384
```
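The same fold over *lengths can also be written with functools.reduce; an equivalent sketch:

```python
from functools import reduce
from operator import mul

def hypervolume(*lengths):
    if not lengths:
        raise TypeError("hypervolume() needs at least one length")
    return reduce(mul, lengths)   # multiply every side length together

print(hypervolume(2, 4, 6))       # 48
```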

4. Function enclosing.

A function can define and return another function, with the inner function capturing variables from the enclosing scope. These are called closures.

```
>>> def raise_to(exp):
...    def raise_to_exp(x):
...        return pow(x, exp)
...    return raise_to_exp
...
>>> square = raise_to(2)
>>> square
<function raise_to_exp at 0x7f9d0f6da950>
>>> square(9)
81
>>> qube = raise_to(3)
>>> qube(3)
27
>>> qube(27)
19683
```

The first call bakes in a default value for the expression/object/function; every further call applies that value to your data. This is very useful for writing default behaviour for API calls or DB calls.
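Applied to the API-call idea, a closure can bake a base URL and a default timeout into a ready-made caller (a sketch; the host and endpoint names are made up):

```python
def make_api_caller(base_url, timeout=5):
    # base_url and timeout are captured by the closure as baked-in defaults
    def call(path):
        return "GET {}/{} (timeout={}s)".format(base_url.rstrip('/'), path.lstrip('/'), timeout)
    return call

# hypothetical service endpoint
prod = make_api_caller("https://api.example.com", timeout=2)
print(prod("users/42"))    # GET https://api.example.com/users/42 (timeout=2s)
```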

## Things to Complete this Week!!

Vanilla implementation of Python data modeling in financial markets.

Python Stacks and Queues.

Python BinarySearch Trees.

Python HashTables.

## Easy way to have Module Support in Golang

Go to the root dir of the project and run the following:

go mod init yourProject (this generates the go.mod file)

go mod verify

That's all. Enjoy life and amazing module-level management in Golang.

## hacker’s guide to Traefik Edge Router

Service Discovery:

• Auto-detection of new instances of a service.
• Load balancing is round-robin by default; weighted round-robin and custom options are available based on creativity.
• Insights available in the UI (routes/services/middlewares).
• As an edge router, it exposes a port on the machine where it's running, and all inbound traffic goes through it.
• *Priority: one can assign priority to services, and load will be distributed to them based on priority. This isn't mentioned in the load-balancing strategies, though.
• Provides request mirroring.
• Only round-robin load balancing is supported on *servers*.
• Weighted round-robin on *services*.

The default dashboard provides in-built authentication and HTTP connection handling (timeouts etc.) for entrypoints and middlewares.

More UI Features:

• Entrypoints hits (success/failure etc..)
• HTTP Endpoints(Routers, Services, Middlewares) hits
• TCP(Routers, Services) hits

Configuration For EntryPoints and Routers

• Dynamic configuration at start-up and at runtime (hot reloaded).

Consul example:

```
[providers.consul]
endpoints = ["127.0.0.1:8500"]
rootKey = "traefik"
```

ETCD example:

```
[providers.etcd]
endpoints = ["127.0.0.1:2379"]
rootKey = "traefik"
```

Full path details for keys and values:

Routers:

Entrypoints talk to routers, which call out to service functions/routes.

* Full regex support, with && and || conditions.
* Priority can be assigned to each route.

```
[http.routers]
  [http.routers.Router-1]
  rule = "HostRegexp(`.*\.traefik\.com`)"
  # ...
  [http.routers.Router-2]
  rule = "Host(`foobar.traefik.com`)"
```

Available Configuration Options:

https://docs.traefik.io/routing/providers/kv/

Health checks:

Traefik keeps doing health checkups of services; a service is considered alive as long as it replies with 2xx or 3xx HTTP codes.

Mirroring:

• Able to mirror requests to other services.
• The whole request is buffered in memory until it is mirrored.

## Install a Vanilla Kubernetes Cluster on Ubuntu.

Here we are installing a vanilla flavour of a Kubernetes cluster (with high availability), which is small but production-ready.

Prerequisites:

• You need at least two VMs (a master node and a slave node).
• Both VMs should be connected to the same network.
• The master node must have 2 CPUs and at least 2 GB of RAM.
• Swap must be turned OFF on both of the VMs.
• A basic understanding of Linux, networking and Docker, unless you are a magician. 😉

Installation Steps:

Run the following commands on the master node (without the hash 😛):

# apt install docker.io

# systemctl enable docker

# apt install curl

# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

# apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

# apt install kubeadm kubelet kubectl

After running all the commands above, the Kubernetes packages are installed on the master node.

Now you need to deploy the cluster. Run the following commands:

# swapoff -a

# hostnamectl set-hostname master-node

# kubeadm init --pod-network-cidr=10.244.0.0/16   (look at its output carefully!!)

# mkdir -p $HOME/.kube

# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# chown $(id -u):$(id -g) $HOME/.kube/config

Deploy the pod network (there are different types of pod networks you can use; here we are using flannel):

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

### Add the slave node to the network in order to form a cluster.

Run the `kubeadm join` command printed in the output of "kubeadm init --pod-network-cidr=10.244.0.0/16" on your slave node, so you can use the Kubernetes cluster.

** Don't run the following command as-is; use the one from your own init output. It will look like this:

kubeadm join 172.31.9.252:6443 --token bixhtx.4bqyhe45g8fvhqeq \
    --discovery-token-ca-cert-hash sha256:eb5f21bfda175f8d60ca50780941df83169893182c8fd2073de32daccc7b2e6d

Other common Kubernetes commands to play with your cluster:

# kubectl get nodes

# kubectl get pods --all-namespaces