my net house

WAHEGURU….!

Graph-Algorithms in Python Part-1

Things required in the Graph base class:

1. Number of vertices.
2. Graph type (directed or undirected).
3. Method to find adjacent vertices.
4. Method to get the indegree of a vertex.
5. Method to get an edge weight.
6. Method to display the graph.
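The requirements above can be collected into an abstract base class. The following is only an illustrative sketch: the method names, and the add_edge hook, are assumptions; the linked graph.py may differ.

```python
import abc

class Graph(abc.ABC):
    """Illustrative sketch of a graph base class; names are assumptions,
    see the linked graph.py for the actual code."""

    def __init__(self, numVertices, directed=False):
        self.numVertices = numVertices
        self.directed = directed

    @abc.abstractmethod
    def add_edge(self, v1, v2, weight=1):
        pass

    @abc.abstractmethod
    def get_adjacent_vertices(self, v):
        pass

    @abc.abstractmethod
    def get_indegree(self, v):
        pass

    @abc.abstractmethod
    def get_edge_weight(self, v1, v2):
        pass

    @abc.abstractmethod
    def display(self):
        pass
```

Concrete representations (adjacency set, adjacency matrix) then subclass this and fill in the abstract methods.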

https://github.com/arshpreetsingh/GraphAlgos/blob/master/graph.py#L11

Node:

A single node in a graph represented by an adjacency set. Every node
has a vertex id, and each node is associated with a set of adjacent vertices.

Link for Node Code here: https://github.com/arshpreetsingh/GraphAlgos/blob/master/graph.py#L45

Represents a graph as an adjacency set. A graph is a list of Nodes,
and each Node has a set of adjacent vertices.
In its current form this graph cannot represent weighted edges;
only unweighted edges can be represented.
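As a rough sketch of what such a Node might look like (the names follow the description above and are assumptions; see the linked graph.py for the actual code):

```python
class Node:
    """A vertex in an adjacency-set graph (illustrative sketch)."""

    def __init__(self, vertex_id):
        self.vertex_id = vertex_id
        self.adjacency_set = set()

    def add_edge(self, v):
        # a vertex cannot be adjacent to itself
        if self.vertex_id == v:
            raise ValueError("The vertex %d cannot be adjacent to itself" % v)
        self.adjacency_set.add(v)

    def get_adjacent_vertices(self):
        return sorted(self.adjacency_set)
```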

Inherit from the Graph class, passing the number of vertices and the graph type.

1. Add the Nodes in the __init__() method of the class.

2. Method to add vertices/edges, with some checks (a vertex must not be negative, and its value must not exceed the number of vertices):

``````if v1 >= self.numVertices or v2 >= self.numVertices or v1 < 0 or v2 < 0:
    raise ValueError("Vertices %d and %d are out of bounds" % (v1, v2))``````

3. For a specific location in the vertex list, get the adjacent vertices.

``self.vertex_list[v].get_adjacent_vertices()``

4. Get the indegree of a vertex.

Find all the vertices which are connected to a specific vertex.

```
indegree = 0
for i in range(self.numVertices):
    # count only the vertices that actually have an edge pointing to v
    if v in self.vertex_list[i].get_adjacent_vertices():
        indegree = indegree + 1

return indegree
```

5. Display the graph:

Iterate through the list of vertices (self.vertex_list), then iterate through each Node's adjacency set via get_adjacent_vertices().

``````def display(self):
    for i in range(self.numVertices):
        for v in self.vertex_list[i].get_adjacent_vertices():
            print(i, "-->", v)``````

https://github.com/arshpreetsingh/GraphAlgos/blob/master/graph.py#L68

Represents a graph as an adjacency matrix. A cell in the matrix has
a value when there exists an edge between the vertex represented by
the row and column numbers.
Weighted graphs can hold values > 1 in the matrix cells
A value of 0 in the cell indicates that there is no edge

The rest of the methods are the same as in the adjacency-set graph; all set() operations are replaced by matrix operations.
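A minimal sketch of how the matrix-backed version might look, assuming numpy and the same method names as above (the linked graph.py may differ in detail):

```python
import numpy as np

class AdjacencyMatrixGraph:
    """Illustrative adjacency-matrix graph sketch; a cell holds the edge
    weight, and 0 means there is no edge."""

    def __init__(self, numVertices, directed=False):
        self.numVertices = numVertices
        self.directed = directed
        self.matrix = np.zeros((numVertices, numVertices), dtype=int)

    def add_edge(self, v1, v2, weight=1):
        if weight < 1:
            raise ValueError("An edge weight must be >= 1")
        self.matrix[v1][v2] = weight
        if not self.directed:
            # mirror the edge for an undirected graph
            self.matrix[v2][v1] = weight

    def get_adjacent_vertices(self, v):
        return [i for i in range(self.numVertices) if self.matrix[v][i] > 0]

    def get_indegree(self, v):
        return sum(1 for i in range(self.numVertices) if self.matrix[i][v] > 0)
```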

Vanilla Implementation of Financial Risk Modeling in Python!

The overall plan is as follows:

1. Use historical data of stocks to Calculate historical Portfolio Variance.
2. Use Factor Analysis Model to Calculate historical Portfolio Variance.
3. Setup Scenario using Factor Analysis model.(Stress Testing!)
4. Calculate Worst Case Scenario from Historical Data and Scenarios.
5. Compare VaR(Value at Risk) for these cases.

What is Six Step Approach for Scenario Based Risk Model?

1. Create a basket of financial assets (each asset has uncertain "returns").
2. Calculate the standard deviation of that basket of financial assets.
3. Find the systematic and idiosyncratic risk on each asset of the portfolio.
4. Study those risk factors and compare them with historical data.
5. Generate scenarios to find out how each asset may perform in the future.
6. Calculate worst-case outcomes.
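Step 2 above (the standard deviation of the basket) can be sketched with numpy and pandas. The returns below are made-up toy numbers and the equal weighting is an assumption:

```python
import numpy as np
import pandas as pd

# Toy daily returns for a three-asset basket (illustrative numbers only)
returns = pd.DataFrame({
    "AAPL": [0.010, -0.020, 0.015, 0.005],
    "MSFT": [0.012, -0.010, 0.011, 0.002],
    "IBM":  [0.007, -0.015, 0.009, 0.001],
})

weights = np.array([1.0 / 3] * 3)  # equal-weighted portfolio

# portfolio variance = w . Cov(R) . w^T
portfolioVariance = weights @ returns.cov().values @ weights.T
portfolioStd = np.sqrt(portfolioVariance)  # standard deviation of the basket
```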

Scrape data from Yahoo Finance for multiple stock symbols.

The asset symbols are as follows:

```["AAPL","ADBE","CVX","GOOG","IBM","MDLZ","MSFT","NFLX","ORCL","SBUX","^GSPC","VOOG"]

```

The following code downloads each stock's data and saves it to an individual CSV file.

```
import requests

def download_data(symbol_names, dir_path=""):
    for symbol in symbol_names:
        # Base URL reconstructed from Yahoo Finance's historical-data endpoint;
        # the original line was truncated
        dataurl = ("https://query1.finance.yahoo.com/v7/finance/download/{symbol}"
                   "?period1=1283040000&period2=1598659200&interval=1d&events=history").format(symbol=symbol)
        data = requests.get(dataurl)
        with open(dir_path + symbol + ".csv", 'w') as f:
            for line in data.iter_lines():
                f.write(line.decode('utf-8') + '\n')

```

Now combine all the CSV files into one DataFrame.

```
import os
import pandas as pd

def combine_data(dir_path=""):
    data_frames = []
    files = os.listdir(dir_path)
    for file in files:
        # read_csv call reconstructed (the original line was truncated):
        # keep the Open price, with the column named after the symbol
        data_frame = pd.read_csv(dir_path + file, index_col="Date", usecols=["Date", "Open"])
        data_frame.columns = [file.split(".")[0]]
        data_frames.append(data_frame)

    result = pd.concat(data_frames, axis=1, sort=False)
    return result

```

Now we have a DataFrame combining all of the symbols, one column per stock.

The portfolio-assembly part is done.

Let's calculate historical risk. Understand the following two lists carefully: stockNames holds the stocks on which we want to calculate historical risk, and factorNames holds the factors on which we want to base that calculation.

```
factorNames = ["^GSPC","VOOG","Intercept"]

```

Let's calculate stock returns and factor returns. Why?

Because that's all we need in order to find the historical risk on financial assets: the historical portfolio variance is the weighted covariance of returns, w · Cov(R) · wᵀ.

Now the calculations:

```
stockReturns = returns[stockNames]
factorReturns = returns[factorNames]
weights = np.array([1.0/len(stockNames)] * len(stockNames))

historicalTotalRisk = np.dot(np.dot(weights,stockReturns.cov()),weights.T)

```

Now that we have calculated the historical variance of our portfolio, the next step is to perform factor analysis: systematic and idiosyncratic.

Factor-based model:

Decompose our model into systematic and idiosyncratic risks, and use this understanding for stress-testing scenarios.
That will be our scenario-based model.

Systematic risk: market-wide risk that affects all assets together.
Idiosyncratic risk: asset-specific risk, independent of the broader market.

We have chosen two risk factors (plus an intercept):
1. S&P500 – spread across the whole market –> systematic risk
2. FVX (interest-rate factor) – affects individual stocks differently –> idiosyncratic risk

The assumption here is that if interest rates go up, stock investment goes down,
so S&P500 and FVX behave somewhat opposite to each other in terms of risk factors.

TotalVar(p) = SystematicVar(p) + IdiosyncraticVar(p)

Risk-factor analysis: now let's express the "returns on every stock" w.r.t. the "factor returns" as a regression equation:

R_stock = Alpha + Beta_F1 * F1 + Beta_F2 * F2 + e

This regression equation tells us how much a change in each risk factor will affect the returns of each stock.

Residual = Observed value – Predicted value
e = y – ŷ
Alpha = stock-specific outperformance
Beta_F1 = sensitivity to factor F1
Beta_F2 = sensitivity to factor F2

```
import statsmodels.api as sm

xData = factorReturns

modelCoeffs = []
for oneStockName in stockNames:
    yData = stockReturns[oneStockName]
    model = sm.OLS(yData, xData)
    result = model.fit()
    modelCoeffRow = list(result.params)
    # residual volatility, used later for idiosyncratic risk
    modelCoeffRow.append(np.std(result.resid, ddof=1))
    modelCoeffs.append(modelCoeffRow)
print(result.summary())

```

We have calculated the residual coefficients for each stock individually; using them, let's calculate the systematic and idiosyncratic risk on each stock.

systematicRisk = Weights * FactorBetaMatrix * FactorCovarianceMatrix * Transpose(FactorBetaMatrix) * Transpose(Weights)

idiosyncraticRisk = sum(modelCoeffs["ResidVol"] * modelCoeffs["ResidVol"] * weights * weights)

factorModelTotalRisk = systematicRisk + idiosyncraticRisk

Systematic and idiosyncratic variance calculations:

```
# modelCoeffs is assumed to have been converted to a DataFrame with
# columns ["Names", "Alpha", "B_FVX", "B_SP", "ResidVol"]
factorCov = factorReturns[["VOOG", "^GSPC"]].cov()
reconstructedCov = np.dot(np.dot(modelCoeffs[["B_FVX", "B_SP"]], factorCov), modelCoeffs[["B_FVX", "B_SP"]].T)
systemicRisk = np.dot(np.dot(weights, reconstructedCov), weights.T)
idiosyncraticRisk = sum(modelCoeffs["ResidVol"] * modelCoeffs["ResidVol"] * weights * weights)
factorModelTotalRisk = systemicRisk + idiosyncraticRisk
```

We have now calculated the risk factors on each model as well. Let's move on to generating scenarios for each stock.

Scenario-based model:

Let's consider two scenarios:

1. S&P500 is at its lowest point (systematic scenario).
2. FVX is at its lowest point (idiosyncratic scenario).

We assume a step of 0.05 across the FVX range and 0.02 across the ^GSPC range.

Let's use Python to generate fvxScenarios and spScenarios, which indicate how high and how low S&P500 and FVX could go in the future.

```
fvxScenarios = np.arange(min(returns["FVX"]),max(returns["FVX"]),0.05)
spScenarios = np.arange(min(returns["^GSPC"]),max(returns["^GSPC"]),0.02)

```

Let's test the scenarios against each individual stock.

```
scenarios = []
for oneFVXValue in fvxScenarios:
    for oneSPValue in spScenarios:
        oneScenario = [oneFVXValue, oneSPValue]
        for oneStockName in stockNames:
            alpha = float(modelCoeffs[modelCoeffs["Names"] == oneStockName]["Alpha"])
            beta_sp = float(modelCoeffs[modelCoeffs["Names"] == oneStockName]["B_SP"])
            beta_fvx = float(modelCoeffs[modelCoeffs["Names"] == oneStockName]["B_FVX"])
            # predicted return under this scenario, from the regression equation
            oneStockPredictedReturn = alpha + beta_sp * oneSPValue + beta_fvx * oneFVXValue
            oneScenario.append(oneStockPredictedReturn)
        scenarios.append(oneScenario)

```

We have obtained the historical risk and the factor-model risk; this time we will calculate the scenario-based risk.

```
scenarios = pd.DataFrame(scenarios)
scenariosCov = scenarios[stockNames].cov()
scenarioTotalRisk = np.dot(np.dot(weights, scenariosCov), weights.T)
```

So what have we done?

1. Calculated the historical relationship.
2. Calculated risk using a factor-based model.
3. Used the factor-based model to generate a scenario-based model.

Calculating VaR:

P = the amount invested in the stocks

sigma = the variance of returns (from the historical, risk-factor, or scenario/stress-test based model)

Z = how large a loss we can bear, expressed as the number of standard deviations away from the mean

The best way to express VaR is as follows:

VaR = |P * Z * sigma * sqrt(t)|, where t is the number of trading periods.

Calculations of VaR using Python:

```
from scipy.stats import norm
import math

def calculateVaR(risk, confLevel, principal=1, numMonths=1):
    vol = math.sqrt(risk)
    # norm.ppf(1 - confLevel) supplies the Z-score for the chosen confidence level
    return abs(principal * norm.ppf(1 - confLevel, 0, 1) * vol * math.sqrt(numMonths))

print(calculateVaR(scenarioTotalRisk, 0.99))
print(calculateVaR(historicalTotalRisk, 0.99))
print(calculateVaR(factorModelTotalRisk, 0.99))
```

The thing that matters most is the estimation of volatility, which is sigma.

MultiPeriodVaR = VaR x sqrt(number of trading periods)
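The square-root-of-time rule above can be sketched in a couple of lines (the dollar figure is a made-up example):

```python
import math

def multi_period_var(single_period_var, num_periods):
    # Square-root-of-time scaling; assumes returns are i.i.d. across periods
    return single_period_var * math.sqrt(num_periods)

# e.g. scaling a 1-day VaR of 1000 to a 10-day horizon
print(multi_period_var(1000, 10))  # ~3162.28
```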

Advantages of VaR:

1. Helpful for modeling worst-case outcomes.
2. Sanctioned in regulations and risk accords.
3. Understood even by non-finance people.
4. Measured and reported objectively.
5. Easy to aggregate across assets to create an end-to-end risk metric.

Limitations of VaR:

1. It assumes short time horizons. Multi-period VaR assumes the loss will be the same in each trading period, which is a very bad assumption to make; stick to single-period VaR wherever possible.
2. VaR depends on the standard deviation only.
   • Skewness and kurtosis are ignored.
   • VaR is only as good as the variance plugged into it.

Functional Functions in Pythonistic Python(s) by Pythonista! Part-2

6. Having a wrapper outside and inside.

The importance of a function wrapper is not only handling your data's behaviour, but also making sure your wrapper is able to handle any kind of arguments.

``````>>> def escape_unicode(f):
...     def wrap(*args, **kwargs):
...         x = f(*args, **kwargs)
...         return ascii(x)
...     return wrap
...
>>> def northern_city():
...     return 'Tromsø'
...
>>> print(northern_city())
Tromsø
>>> @escape_unicode
... def northern_city():
...     return 'Tromsø'
...
>>> print(northern_city())
'Troms\xf8'
``````

7. Using a class to create a wrapper/decorator.

Any class-level attribute set inside __call__() becomes an attribute of the callable that the class is wrapped around.

``````>>> class CallCount:
...     def __init__(self, f):
...         self.f = f
...         self.count = 0
...     def __call__(self, *args, **kwargs):
...         self.count += 1
...         return self.f(*args, **kwargs)
...
>>> @CallCount
... def hello(name):
...     print('Hello, {}!'.format(name))
...
>>> hello('Fred')
Hello, Fred!
>>> hello('Wilma')
Hello, Wilma!
>>> hello('Betty')
Hello, Betty!
>>> hello('Barney')
Hello, Barney!
>>> hello.count
4``````

8. A wrapper inside a class's __call__() can be used to turn the wrapper on and off.

``````>>> class Trace:
...     def __init__(self):
...         self.enabled = True
...     def __call__(self, f):
...         def wrap(*args, **kwargs):
...             if self.enabled:
...                 print('Calling {}'.format(f))
...             return f(*args, **kwargs)
...         return wrap
...
>>> tracer = Trace()
>>> @tracer
... @escape_unicode
... def norwegian_island_maker(name):
...     return name + 'øy'
...
>>> norwegian_island_maker('Llama')
Calling <function escape_unicode.<locals>.wrap at 0x103b22ee0>
'Llama\\xf8y'
>>> norwegian_island_maker('Python')
Calling <function escape_unicode.<locals>.wrap at 0x103b22ee0>
'Python\\xf8y'
>>> norwegian_island_maker('Troll')
Calling <function escape_unicode.<locals>.wrap at 0x103b22ee0>
'Troll\\xf8y'
>>> tracer.enabled = False
>>> norwegian_island_maker('Llama')
'Llama\\xf8y'
>>> norwegian_island_maker('Python')
'Python\\xf8y'
>>> norwegian_island_maker('Troll')
'Troll\\xf8y'    ``````

9. map() accepts multiple iterables, as long as the function being mapped accepts that many arguments.

``````>>> sizes = ['small', 'medium', 'large']
>>> colors = ['lavender', 'teal', 'burnt orange']
>>> animals = ['koala', 'platypus', 'salamander']
>>> def combine(size, color, animal):
...     return '{} {} {}'.format(size, color, animal)
...
>>> list(map(combine, sizes, colors, animals))
['small lavender koala', 'medium teal platypus', 'large burnt orange salamander']``````

10. List comprehensions can be complex, with multiple for loops and if clauses.

``````>>> values = [x / (x - y) for x in range(100) if x > 50 for y in range(100) if x
- y != 0]
>>> values = [x / (x - y)
...           for x in range(100)
...           if x > 50
...           for y in range(100)
...           if x - y != 0]
>>> values = []
>>> for x in range(100):
...     if x > 50:
...         for y in range(100):
...             if x - y != 0:
...                 values.append(x / (x - y))
...                                ``````

11. List comprehensions can be nested as well.

``````>>> vals = [[y * 3 for y in range(x)] for x in range(10)]
>>> outer = []
>>> for x in range(10):
...     inner = []
...     for y in range(x):
...         inner.append(y * 3)
...     outer.append(inner)
...
>>> vals
[[], [0], [0, 3], [0, 3, 6], [0, 3, 6, 9], [0, 3, 6, 9, 12], [0, 3, 6, 9, 12, 15], [0, 3, 6, 9, 12, 15, 18], [0, 3, 6, 9, 12, 15, 18, 21], [0, 3, 6, 9, 12, 15, 18, 21, 24]]``````

12. Some ideas for code introspection!

1. type() is of class type.
``````>>> i = 7
>>> repr(int)
"<class 'int'>"
>>> type(i) is int
True
>>> type(i)(78)
78
>>> type(type(i))
<class 'type'>
>>> i.__class__
<class 'int'>``````

2. isinstance() and issubclass() do type checking; classes themselves are instances of type.

``````>>> issubclass(type, object)
True
>>> type(object)
<class 'type'>
>>> isinstance(i, int)
True                                                                         ``````

3. To check whether an object has a specific attribute:

``````>>> a = 42
>>> getattr(a, 'conjugate')
<built-in method conjugate of int object at 0x10ff2cfb0>
>>> callable(getattr(a, 'conjugate'))
True
>>> a.conjugate.__class__.__name__
'builtin_function_or_method'
>>> getattr(a, 'index')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'int' object has no attribute 'index'
>>> hasattr(a, 'bit_length')
True
>>> hasattr(a, 'index')
False``````

4. globals() and locals() are dicts which keep track of your global and local names.

``````>>> globals()
{'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <cl
ass '_frozen_importlib.BuiltinImporter'>, '__spec__': None, '__annotations__': {
}, '__builtins__': <module 'builtins' (built-in)>}
>>> a = 42
>>> globals()
{'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <cl
ass '_frozen_importlib.BuiltinImporter'>, '__spec__': None, '__annotations__': {
}, '__builtins__': <module 'builtins' (built-in)>, 'a': 42}
>>> globals()['tau'] = 6.283185
>>> tau
6.283185
>>> tau / 2
3.1415925

>>> locals()
{'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <cl
ass '_frozen_importlib.BuiltinImporter'>, '__spec__': None, '__annotations__': {
}, '__builtins__': <module 'builtins' (built-in)>}
>>> def report_scope(arg):
...     from pprint import pprint as pp
...     x = 496
...     pp(locals(), width=10)
...
>>> report_scope(42)
{'arg': 42,
'pp': <function pprint at 0x10e70bee0>,
'x': 496}
>>>
``````

5. import inspect.

It is just like Golang's reflect package.

``````>>> def num_vowels(text: str) -> int:
...     return sum(1 if c.lower() in 'aeiou' else 0
...                for c in text)
...
>>> import inspect
>>> sig = inspect.signature(num_vowels)
>>> sig.parameters['text']
<Parameter "text: str">
>>> sig.parameters['text'].annotation
<class 'str'>
>>> sig
<Signature (text: str) -> int>
>>> sig.return_annotation
<class 'int'>
>>> num_vowels.__annotations__
{'text': <class 'str'>, 'return': <class 'int'>}
``````

Functional Functions in Pythonistic Python(s) by Pythonista! Part-1

1. Call in Python:
Yes, I mean __call__().

Every function object is invoked via the __call__() method, the dunder that makes an object callable in Python.

Example:

``````>>> import socket
>>> def resolve(host):
...     return socket.gethostbyname(host)
...
>>> resolve('gndec.ac.in')
'202.164.53.112'
>>> resolve.__call__('gndec.ac.in')
'202.164.53.112'``````

2. Implement a local cache for a class.

Any function/object/variable starting with _ (underscore) is, by convention, local/private to that class, though Python does not enforce this and it can still be accessed from outside.

``````import socket

class Resolver:
    def __init__(self):
        self._cache = {}

    def __call__(self, host):
        if host not in self._cache:
            self._cache[host] = socket.gethostbyname(host)
        return self._cache[host]

>>> resolve = Resolver()
>>> resolve('sixty-north.com')
'93.93.131.30'
>>> resolve.__call__('sixty-north.com')
'93.93.131.30'
>>> resolve._cache
{'sixty-north.com': '93.93.131.30'}
>>> resolve('pluralsight.com')
'54.148.56.39'
>>> resolve._cache
{'sixty-north.com': '93.93.131.30', 'pluralsight.com': '54.148.56.39'}
``````

3. Playing with "n" number of positional args:

def function(*args)

Imagine You want to Calculate Volume of shape and it could be Square, Cube, Tesseract or anything available even in Marvel universe.

``````>>> def hypervolume(*args):
...     print(args)
...     print(type(args))
...
>>> hypervolume(3, 4)
(3, 4)
<class 'tuple'>
>>> hypervolume(3, 4, 5)
(3, 4, 5)
<class 'tuple'>
>>> def hypervolume(*lengths):
...     i = iter(lengths)
...     v = next(i)
...     for length in i:
...         v *= length
...     return v
...
>>> hypervolume(2, 4)
8
>>> hypervolume(2, 4, 6)
48
>>> hypervolume(2, 4, 6, 8)
384``````

4. Function enclosing.

A function object can be returned just like any other value. When the inner function captures variables from the enclosing scope, this is called a closure.

``````>>> def raise_to(exp):
...    def raise_to_exp(x):
...        return pow(x,exp)
...    return raise_to_exp
...
>>> square = raise_to(2)
>>> square
<function raise_to_exp at 0x7f9d0f6da950>
>>> square(9)
81
>>> qube = raise_to(3)
>>> qube(3)
27
>>> qube(27)
19683
>>>
``````

You set a default value for an expression/object/function once; any further call will apply that value to your data. Very useful when writing default behaviour for API calls or DB calls.
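The same "preset a default once, reuse it everywhere" idea is also available in the standard library via functools.partial. The fetch function below is hypothetical, for illustration only:

```python
from functools import partial

def fetch(endpoint, timeout, retries):
    # Hypothetical API-call helper; returns its configuration for demonstration
    return (endpoint, timeout, retries)

# Freeze the default behaviour once, then reuse it everywhere
fetch_fast = partial(fetch, timeout=2, retries=1)
print(fetch_fast("/users"))  # ('/users', 2, 1)
```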

Understand Stack and Heap from the Ideology of C and C++

Writing Flow:
1. Introduction to the working of the Stack and Heap inside memory.

2. How the Stack works.

3. How the Heap works.

4. Conclusion based on performance and ease of use.

There are various parts which together make up the memory responsible for the execution and life cycle of an application or process. This layout is the same whether you are using a thread pool or a process pool. The memory is divided into major parts: the Stack, the Heap, the Code (Text) section, and the Global Variables and Constants section. As the names of the last two suggest, those parts store global variables and the source code. The two important parts left are the Stack and the Heap. Let's talk about the Stack in the next section.

The Stack is a container of one or many function frames, with static memory allocated to it. All stack memory is allocated before compile time is over and remains the same throughout the whole life cycle of the application; if more memory is required than what was allocated, you get the famous "stack overflow". The main() function call sits at the bottom of the stack and finishes last; all the other function calls sit on top of each other in the stack. All local variables are stored on the stack as well.

The Heap is different in ideology from the stack. Where the stack is all about function calls occurring in series with a fixed memory size, the heap is all dynamic. When we need memory at runtime, without a fixed size, the heap is there for us. There are multiple ways to allocate heap memory in C and C++; some well-known functions are malloc(), calloc(), realloc(), and free(). C++ is a superset of C, so all the mentioned functions can be used in C++ as well. Each allocation call returns a pointer to a memory block, and with that block one can store data. Keep one thing in mind: in C and C++ heap management is not automatic, so one has to allocate and de-allocate memory manually.

Apart from C and C++, many other languages make substantial use of stacks and heaps. For example, in Python most objects are stored on the heap, so data can be allocated even at runtime; but when you use generators or lazy iterators, their execution is framed as function calls, and each completed call frees its space on the stack. In the case of Go-lang something very interesting happens. Quoting from StackOverflow:
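The memory contrast described above can be seen directly in Python by comparing an eagerly built list with a lazy generator:

```python
import sys

eager = [x * x for x in range(1_000_000)]   # the whole list lives in memory now
lazy = (x * x for x in range(1_000_000))    # a generator: values produced on demand

print(sys.getsizeof(lazy) < sys.getsizeof(eager))  # True: the generator object is tiny
print(next(lazy), next(lazy), next(lazy))          # 0 1 4
```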

It’s worth noting that the words “stack” and “heap” do not appear anywhere in the language spec. Your question is worded with “…is declared on the stack,” and “…declared on the heap,” but note that Go declaration syntax says nothing about stack or heap.

That technically makes the answer to all of your questions implementation dependent. In actuality of course, there is a stack (per goroutine!) and a heap and some things go on the stack and some on the heap. In some cases the compiler follows rigid rules (like “`new` always allocates on the heap”) and in others the compiler does “escape analysis” to decide if an object can live on the stack or if it must be allocated on the heap.

Source for Stack and Heap of C/C++: https://www.youtube.com/watch?v=_8-ht2AKyH4

Note: the overall performance and consistency of the application is based on smart use of both the Stack and the Heap.

Click – Command Line Interface Creation Kit. Oh yeah, Python!

Click makes it easy to write Python scripts that take command-line arguments.

```
import click

@click.command()
@click.option('--count', default=1)
@click.option('--name', prompt="What you are supposed to do here?")
def test_click(count, name):
    for i in range(count):
        click.echo(str(name))

if __name__ == '__main__':
    test_click()

'''
How to run this script:

python test_click.py --name "Arsh" --count 4

'''
```

Logging and its importance, with examples

Python and Flask support a wide range of logging. Whether it is a warning, an error, or just an info message, you can go through all of them at a very specific instant in time.

Logging is important for the maintainability of the application.

Logging is something you reach for when you see or feel that your web app needs lots of "watching" as well!

Here is a simple example in Flask:

```
import logging
from logging.handlers import RotatingFileHandler
from flask import Flask

app = Flask(__name__)

@app.route('/')
def foo():
    app.logger.warning('A warning occurred (%d apples)', 42)
    app.logger.error('An error occurred')
    app.logger.info('Info')
    return "foo"

if __name__ == '__main__':
    handler = RotatingFileHandler('foo.log', maxBytes=10000, backupCount=1)
    handler.setLevel(logging.INFO)
    app.logger.addHandler(handler)  # attach the handler so messages reach foo.log
    app.run()
```

For a more detailed view of logging and its setup, you can go through the following link as well; it is very much self-explanatory: https://gist.github.com/mariocj89/73824162a3e35d50db8e758a42e39aab

Hacker's way to Build a Block-Chain! (Purely-Pythonic + HyperLedger)

What is Block-Chain?

It is just a DB, and it is immutable. The DB is accessed by multiple users, and they can only create new entries.
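The "immutable, append-only DB" idea can be sketched as a toy hash chain in a few lines. This is purely illustrative and has nothing to do with Hyperledger's actual implementation:

```python
import hashlib
import json

def make_block(data, prev_hash):
    # Each block commits to the previous block's hash,
    # which makes any tampering with history detectable
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block("genesis", "0" * 64)
second = make_block("pizza ordered", genesis["hash"])
```

Changing the data of any earlier block changes its hash, which breaks every prev_hash link after it.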

What are the potentials behind Block-Chain?

-Cross-Question->Dude! Talk to a salesman who know how to sell comb to bald. I just write code.

How block chain works?  No Idea, Let’s build ONE!!

Objective: We will use IBM Hyperledger and Python-Flask to create a wrapper around it, and this wrapper will be able to make REST calls to Hyperledger. Bingo! Basically we will hack Hyperledger the Pythonic way!

Before a business network definition can be deployed it must be packaged into a Business Network Archive (.bna) file.

Create your model file. You can use the online Playground: http://composer-playground.mybluemix.net

models/org.acme.biznet.cto

We’re assuming each Entity (i.e. Factory), will make use of RFID tags to store information on the food, and will scan that tag as it’s received. This information, such as timestamp, the date, and state (production, freezing, packaging, distribution) is stored on the Block-chain.

lib/logic.js

Script file. From here you will define the transaction processor functions; these are the functions that will execute when the transactions are invoked. The ChangeStateToProduction function will change the state of the current food to Production.

permissions.acl

"Access Control" is on the left pane. From here, you can determine which participants of the business network have access to which assets and transactions.

These are the important parts of a .bna file for Hyperledger Composer! One needs to understand these three very well to get to know more about "A Smart Contract!"

1. Setting up the Environment(On Ubuntu 16.04 )

`curl -O https://gist.github.com/arshpreetsingh/2e628aea04d8615766b2ce14de4e5888`

Run the script:

`./prereqs-ubuntu.sh`

2. Installing the Development Environment

Essential CLI tools

`npm install -g composer-cli`

Utility for running a REST Server on your machine to expose your business networks as RESTful APIs:

`npm install -g composer-rest-server`

Useful utility for generating application assets:

`npm install -g generator-hyperledger-composer`

Yeoman is a tool for generating applications, which utilises generator-hyperledger-composer:

`npm install yeoman-generator`

`npm install -g yo`

Install Playground

`npm install -g composer-playground`

Install Hyperledger Fabric

This step gives you a local Hyperledger Fabric runtime to deploy your business networks to.

1. In a directory of your choice (we will assume ~/fabric-tools), get the .zip file that contains the tools to install Hyperledger Fabric:

`mkdir ~/fabric-tools && cd ~/fabric-tools`

`curl -O https://raw.githubusercontent.com/hyperledger/composer-tools/master/packages/fabric-dev-servers/fabric-dev-servers.zip`

`unzip fabric-dev-servers.zip`

A `tar.gz` is also available if you prefer: just replace the `.zip` file with `fabric-dev-servers.tar.gz` and the `unzip` command with a `tar xvzf` command in the above snippet.

`cd ~/fabric-tools` `./downloadFabric.sh`

Start Hyperledger Fabric

Start the fabric:

`./startFabric.sh`

`./createPeerAdminCard.sh`

You can start and stop your runtime using `~/fabric-tools/stopFabric.sh`, and start it again with `~/fabric-tools/startFabric.sh`.

At the end of your development session, you run `~/fabric-tools/stopFabric.sh` and then `~/fabric-tools/teardownFabric.sh`. Note that if you’ve run the teardown script, the next time you start the runtime, you’ll need to create a new PeerAdmin card just like you did on first time startup.

After creating the `.bna` file, the business network can be deployed to the instance of Hyperledger Fabric. Normally, information from the Fabric administrator is required to create a `PeerAdmin` identity, with privileges to deploy chaincode to the peer. However, as part of the development environment installation, a PeerAdmin identity has been created already.

After the runtime has been installed, a business network can be deployed to the peer. For best practice, a new identity should be created to administrate the business network after deployment. This identity is referred to as a network admin.

Retrieving the Correct Credentials

A `PeerAdmin` business network card with the correct credentials is already created as part of development environment installation.

Deploying a business network to the Hyperledger Fabric requires the Hyperledger Composer chaincode to be installed on the peer, then the business network archive (.bna) must be sent to the peer, and a new participant, identity, and associated card must be created to be the network administrator. Finally, the network administrator business network card must be imported for use, and the network can then be pinged to check it is responding.

1. To install the composer runtime, run the following command:

`composer runtime install --card PeerAdmin@hlfv1 --businessNetworkName pizza-on-the-blockchain`

The `composer runtime install` command requires a PeerAdmin business network card (in this case one has been created and imported in advance), and the name of the business network.

2. To deploy the business network, from the `pizza-on-the-blockchain` directory, run the following command:

```composer network start --card PeerAdmin@hlfv1 --networkAdmin admin --networkAdminEnrollSecret adminpw --archiveFile pizza-on-the-blockchain@0.0.1.bna --file networkadmin.card```

The `composer network start` command requires a business network card, as well as the name of the admin identity for the business network, the file path of the `.bna` and the name of the file to be created ready to import as a business network card.

3. To import the network administrator identity as a usable business network card, run the following command:

`composer card import --file networkadmin.card`

The `composer card import` command requires the filename specified in `composer network start` to create a card.

4. To check that the business network has been deployed successfully, run the following command to ping the network:

`composer network ping --card admin@pizza-on-the-blockchain`

The `composer network ping` command requires a business network card to identify the network to ping.

Generating a REST Server

Hyperledger Composer can generate a bespoke REST API based on a business network. For developing a web application, the REST API provides a useful layer of language-neutral abstraction.

1. To create the REST API, navigate to the `pizza-on-the-blockchain` directory and run the following command:

`composer-rest-server`

2. Enter `admin@pizza-on-the-blockchain` as the card name.
3. Select never use namespaces when asked whether to use namespaces in the generated API.
4. Select No when asked whether to secure the generated API.
5. Select Yes when asked whether to enable event publication.
6. Select No when asked whether to enable TLS security.

The generated API is connected to the deployed blockchain and business network.

Once the REST server is up and running, head over to http://localhost:3000/explorer (plain HTTP, since TLS security was not enabled).

Running the Application

1. Ensure Python is installed on your local environment (Both Python 2 and Python 3 are supported).
2. Install the requirements using the command `pip install -r requirements.txt`.
3. Run the application as: `python application.py`.
4. Point your web browser to the address `localhost:<port>`.

One may Enjoy Different parts of My Application here.

https://github.com/arshpreetsingh/food-on-the-blockchain

TPOT Python Example to Build Pipeline for AAPL

This is just a first quick-and-fast post.

TPOT Research  Paper: https://arxiv.org/pdf/1702.01780.pdf

```
import datetime
import numpy as np
import pandas as pd
import sklearn
from tpot import TPOTClassifier
from sklearn.model_selection import train_test_split

# apple_data is assumed to be a DataFrame of AAPL price history loaded earlier
df = pd.DataFrame(index=apple_data.index)
df['price']=apple_data.Open
df['daily_returns']=df['price'].pct_change().fillna(0.0001)
df['multiple_day_returns'] =  df['price'].pct_change(3)
df['rolling_mean'] = df['daily_returns'].rolling(window = 4,center=False).mean()

df['time_lagged'] = df['price']-df['price'].shift(-2)

df['direction'] = np.sign(df['daily_returns'])
Y = df['direction']
X=df[['price','daily_returns','multiple_day_returns','rolling_mean']].fillna(0.0001)

X_train, X_test, y_train, y_test = train_test_split(X,Y,train_size=0.75, test_size=0.25)

tpot = TPOTClassifier(generations=50, population_size=50, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_aapl_pipeline.py')

```

The Python file it returned is real code one can use to create a trading strategy. TPOT helped select the algorithm and the values of its parameters. Right now we have only provided 'price', 'daily_returns', 'multiple_day_returns', and 'rolling_mean' to predict the target. One can use more features and implement as per the requirement.

```
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier  # import added; TPOT's export includes it
from sklearn.model_selection import train_test_split

# NOTE: Make sure that the class is labeled 'target' in the data file
# (tpot_data is assumed to be the DataFrame of features plus the target column)
features = tpot_data.drop('target', axis=1).values
training_features, testing_features, training_target, testing_target = \
    train_test_split(features, tpot_data['target'].values, random_state=42)

# Score on the training set was: 1.0
exported_pipeline = GradientBoostingClassifier(learning_rate=0.5, max_depth=7,
    max_features=0.7500000000000001, min_samples_leaf=11, min_samples_split=12,
    n_estimators=100, subsample=0.7500000000000001)

exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)

```

Socket Programming and Fun with Python

Client sockets and server sockets:

A client (your browser, or any piece of code that wants to talk to your server) uses a client socket; the server uses both client and server sockets.

Sockets are great for Cross-Platform communication.

Following is a minimal example of a client socket:

```s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("www.python.org", 80))```

What is AF_INET? What is SOCK_STREAM? AF_INET selects the IPv4 address family, and SOCK_STREAM selects a stream (TCP) socket.

That is almost all that happens on the client side. When connect() completes, the socket 's' we just created can be used to send a request for the specific text page. The socket will be read from and replied to, and after that it will be destroyed. Client sockets are normally only used for one exchange (or a small set of sequential exchanges).

Now let's look at what is happening on the server side:

```# create an INET, STREAMing socket
serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# bind the socket to a public host, and a well-known port
serversocket.bind((socket.gethostname(), 80))
# become a server socket
serversocket.listen(5)```
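The accept loop that usually follows listen() can be sketched as below. This minimal version handles a single client and echoes one message back; a real server would loop forever and handle errors (the port number is illustrative):

```python
import socket

def echo_once(port):
    """Accept a single client and echo one message back (minimal sketch)."""
    serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    serversocket.bind(("localhost", port))
    serversocket.listen(5)
    # accept() blocks until a client connects, then hands back a
    # brand-new socket dedicated to that client
    clientsocket, address = serversocket.accept()
    data = clientsocket.recv(1024)   # read up to 1 KB from the client
    clientsocket.sendall(data)       # echo it back
    clientsocket.close()
    serversocket.close()
```

Note that accept() returns a fresh client socket for the conversation, while the server socket itself goes straight back to waiting for the next connection.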