
Logging and its importance, with examples

Python and Flask support a wide range of logging as well. Whether it's a warning, an error, or just an informational message, you can log any of them at a very specific instant in time.


Logging is important for the maintainability of an application.

Logging is something you reach for when you see or feel that your web app needs a lot of "watching" as well!

Here is a simple example in Flask:

import logging
from logging.handlers import RotatingFileHandler

from flask import Flask

app = Flask(__name__)

@app.route('/')
def foo():
    app.logger.warning('A warning occurred (%d apples)', 42)
    app.logger.error('An error occurred')
    app.logger.info('Info')
    return "foo"

if __name__ == '__main__':
    handler = RotatingFileHandler('foo.log', maxBytes=10000, backupCount=1)
    handler.setLevel(logging.INFO)
    app.logger.addHandler(handler)
    app.run()

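The handler above writes bare messages to foo.log. Here is a minimal sketch of attaching a formatter so each line carries a timestamp, level, and source location (the format string is my own choice, not from the original post):

import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler('foo.log', maxBytes=10000, backupCount=1)
handler.setLevel(logging.INFO)
# the format string is an assumption; adjust the fields to taste
handler.setFormatter(logging.Formatter(
    '%(asctime)s %(levelname)s: %(message)s [in %(pathname)s:%(lineno)d]'
))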

For a more detailed view of logging and the system around it, you can go through the following link as well; it is very much self-explanatory: https://gist.github.com/mariocj89/73824162a3e35d50db8e758a42e39aab

Hacker's Way to Build a Blockchain! (Purely Pythonic + Hyperledger)

What is a blockchain?

It is just a DB, and it is immutable. Multiple users have access to the DB, and they can only create new entries.
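
To make "immutable, append-only DB" concrete, here is a tiny illustrative sketch in plain Python (not Hyperledger): every new entry stores the hash of the previous one, so tampering with an old entry breaks the chain.

import hashlib
import json

chain = [{'index': 0, 'data': 'genesis', 'prev_hash': '0'}]

def add_entry(data):
    """Append-only: each block links to the hash of the previous block."""
    prev = chain[-1]
    prev_hash = hashlib.sha256(json.dumps(prev, sort_keys=True).encode()).hexdigest()
    chain.append({'index': prev['index'] + 1, 'data': data, 'prev_hash': prev_hash})

add_entry('first entry')
add_entry('second entry')
print(chain)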

What is the potential behind blockchain?

Cross-question -> Dude! Talk to a salesman who knows how to sell a comb to the bald. I just write code.


How does a blockchain work? No idea. Let's build ONE!!


Objective: We will use IBM Hyperledger and Python Flask to create a wrapper around it, and this wrapper will be able to make REST calls to Hyperledger. Bingo! Basically, we will hack Hyperledger the Pythonic way!

 

  1. Create your Business-Network.

Understanding the .bna file is really important!

Before a business network definition can be deployed it must be packaged into a Business Network Archive (.bna) file.

Create your model file. You can use the online Playground: http://composer-playground.mybluemix.net

models/org.acme.biznet.cto

We're assuming each entity (e.g. a factory) will make use of RFID tags to store information on the food, and will scan that tag as it's received. This information, such as the timestamp, the date, and the state (production, freezing, packaging, distribution), is stored on the blockchain.

lib/logic.js

Script file. Here you define the transaction processor functions; these are the functions that will execute when the transactions are invoked. The ChangeStateToProduction function will change the state of the current food to Production.

permissions.acl

Select "Access Control" from the left pane. From here, you can determine which participants of the business network have access to which assets and transactions.

These are the important parts of a .bna file for Hyperledger Composer! One needs to understand these three very well to get to know more about "a smart contract!"

2. Setting up the Environment (On Ubuntu 16.04)

Download and install all required Pre-requisites:

curl -O https://gist.github.com/arshpreetsingh/2e628aea04d8615766b2ce14de4e5888

Run the script:

./prereqs-ubuntu.sh

3. Installing the Development Environment

Essential CLI tools

npm install -g composer-cli

Utility for running a REST Server on your machine to expose your business networks as RESTful APIs:

npm install -g composer-rest-server

Useful utility for generating application assets:

npm install -g generator-hyperledger-composer

Yeoman is a tool for generating applications, which utilises generator-hyperledger-composer:

npm install yeoman-generator

npm install -g yo

Install Playground

npm install -g composer-playground

Install Hyperledger Fabric

This step gives you a local Hyperledger Fabric runtime to deploy your business networks to.

  1. In a directory of your choice (we will assume ~/fabric-tools), get the .zip file that contains the tools to install Hyperledger Fabric:

mkdir ~/fabric-tools && cd ~/fabric-tools

curl -O https://raw.githubusercontent.com/hyperledger/composer-tools/master/packages/fabric-dev-servers/fabric-dev-servers.zip

unzip fabric-dev-servers.zip

A tar.gz is also available if you prefer: just replace the .zip file with fabric-dev-servers.tar.gz and the unzip command with a tar xvzf command in the above snippet.

  2. Use the scripts you just downloaded and extracted to download a local Hyperledger Fabric runtime:

cd ~/fabric-tools
./downloadFabric.sh

Start Hyperledger Fabric

Start the fabric:

./startFabric.sh

Generate a PeerAdmin card:

./createPeerAdminCard.sh

You can start and stop your runtime using ~/fabric-tools/stopFabric.sh, and start it again with ~/fabric-tools/startFabric.sh.

At the end of your development session, you run ~/fabric-tools/stopFabric.sh and then ~/fabric-tools/teardownFabric.sh. Note that if you’ve run the teardown script, the next time you start the runtime, you’ll need to create a new PeerAdmin card just like you did on first time startup.

Deploying the Business Network

After creating the .bna file, the business network can be deployed to the instance of Hyperledger Fabric. Normally, information from the Fabric administrator is required to create a PeerAdmin identity, with privileges to deploy chaincode to the peer. However, as part of the development environment installation, a PeerAdmin identity has been created already.

After the runtime has been installed, a business network can be deployed to the peer. For best practice, a new identity should be created to administrate the business network after deployment. This identity is referred to as a network admin.

Retrieving the Correct Credentials

A PeerAdmin business network card with the correct credentials is already created as part of development environment installation.

Deploying the Business Network

Deploying a business network to the Hyperledger Fabric requires the Hyperledger Composer chaincode to be installed on the peer, then the business network archive (.bna) must be sent to the peer, and a new participant, identity, and associated card must be created to be the network administrator. Finally, the network administrator business network card must be imported for use, and the network can then be pinged to check it is responding.

  1. To install the composer runtime, run the following command:

composer runtime install --card PeerAdmin@hlfv1 --businessNetworkName pizza-on-the-blockchain

The composer runtime install command requires a PeerAdmin business network card (in this case one has been created and imported in advance), and the name of the business network.

  2. To deploy the business network, from the pizza-on-the-blockchain directory, run the following command:

composer network start --card PeerAdmin@hlfv1 --networkAdmin admin --networkAdminEnrollSecret adminpw --archiveFile pizza-on-the-blockchain@0.0.1.bna --file networkadmin.card

The composer network start command requires a business network card, as well as the name of the admin identity for the business network, the file path of the .bna and the name of the file to be created ready to import as a business network card.

  3. To import the network administrator identity as a usable business network card, run the following command:

composer card import --file networkadmin.card

The composer card import command requires the filename specified in composer network start to create a card.

  4. To check that the business network has been deployed successfully, run the following command to ping the network:

composer network ping --card admin@pizza-on-the-blockchain

The composer network ping command requires a business network card to identify the network to ping.

Generating a REST Server

Hyperledger Composer can generate a bespoke REST API based on a business network. For developing a web application, the REST API provides a useful layer of language-neutral abstraction.

  1. To create the REST API, navigate to the pizza-on-the-blockchain directory and run the following command:

composer-rest-server

  2. Enter admin@pizza-on-the-blockchain as the card name.
  3. Select never use namespaces when asked whether to use namespaces in the generated API.
  4. Select No when asked whether to secure the generated API.
  5. Select Yes when asked whether to enable event publication.
  6. Select No when asked whether to enable TLS security.

The generated API is connected to the deployed blockchain and business network.

Once the REST server is up and running, head over to https://localhost:3000/explorer
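
Since the whole point is to hack Hyperledger the Pythonic way, here is a hedged sketch of calling the generated REST API from Python with requests. The asset route below is hypothetical; check the explorer page for the exact routes composer-rest-server generated for your network.

import requests

BASE = 'http://localhost:3000/api'

# 'Food' is a hypothetical asset name; use the routes listed in the explorer
resp = requests.get(BASE + '/Food')
print(resp.status_code)
print(resp.json())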

Running the Application

  1. Ensure Python is installed on your local environment (Both Python 2 and Python 3 are supported).
  2. Install the requirements using the command pip install -r requirements.txt.
  3. Run the application as: python application.py.
  4. Point your web browser to the address localhost:<port>.

One may enjoy the different parts of my application here:


https://github.com/arshpreetsingh/food-on-the-blockchain

 

TPOT Python Example to Build Pipeline for AAPL

This is just a first quick-and-fast post.

TPOT Research  Paper: https://arxiv.org/pdf/1702.01780.pdf


import datetime
import numpy as np
import pandas as pd
import sklearn
from pandas_datareader import data as read_data
from tpot import TPOTClassifier
from sklearn.model_selection import train_test_split

# pull AAPL quotes and build a small feature frame from the open price
apple_data = read_data.get_data_yahoo("AAPL")
df = pd.DataFrame(index=apple_data.index)
df['price'] = apple_data.Open
df['daily_returns'] = df['price'].pct_change().fillna(0.0001)
df['multiple_day_returns'] = df['price'].pct_change(3)
df['rolling_mean'] = df['daily_returns'].rolling(window=4, center=False).mean()

df['time_lagged'] = df['price'] - df['price'].shift(-2)

# target: the sign of the daily return (up or down)
df['direction'] = np.sign(df['daily_returns'])
Y = df['direction']
X = df[['price', 'daily_returns', 'multiple_day_returns', 'rolling_mean']].fillna(0.0001)

X_train, X_test, y_train, y_test = train_test_split(X, Y, train_size=0.75, test_size=0.25)

# let TPOT search for the best pipeline over 50 generations
tpot = TPOTClassifier(generations=50, population_size=50, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_aapl_pipeline.py')

The Python file it returned is below. This is real code one can use to create a trading strategy: TPOT helped select the algorithm and the values of its parameters. Right now we have only provided 'price', 'daily_returns', 'multiple_day_returns' and 'rolling_mean' to predict the target. One can use more features and implement it as per the requirement.


import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# NOTE: Make sure that the class is labeled 'target' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1).values
training_features, testing_features, training_target, testing_target = \
            train_test_split(features, tpot_data['target'].values, random_state=42)

# Score on the training set was: 1.0
exported_pipeline = GradientBoostingClassifier(learning_rate=0.5, max_depth=7, max_features=0.7500000000000001, min_samples_leaf=11, min_samples_split=12, n_estimators=100, subsample=0.7500000000000001)

exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)

Socket Programming and Having Fun with Python

Client Socket and Server Socket:

A client computer, like your browser or any piece of code that wants to talk to your server, uses a client socket; the server uses both client and server sockets.

Sockets are great for Cross-Platform communication.

Following is a minimal example of a client socket:

import socket

# create an INET, STREAMing socket and connect to a web server's port 80
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("www.python.org", 80))

What is AF_INET? What is SOCK_STREAM? AF_INET means the socket speaks IPv4, and SOCK_STREAM means it is a stream (TCP) socket.

That is almost all that happens on the client side. When connect completes, the socket 's' we just created can be used to send a request for the specific page we want. The same socket reads the reply and is then destroyed. Client sockets are normally only used for one exchange (or a small set of sequential exchanges).

Now let's look at what happens on the server side:

# create an INET, STREAMing socket
serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# bind the socket to a public host, and a well-known port
serversocket.bind((socket.gethostname(), 80))
# become a server socket
serversocket.listen(5)
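
The server socket itself never exchanges any data; it only hands out new sockets. Continuing the snippet above, here is a minimal sketch of the accept loop (the echo behaviour is my own illustration):

while True:
    # accept() blocks until a client connects, then returns a brand-new
    # socket dedicated to that one conversation
    (clientsocket, address) = serversocket.accept()
    data = clientsocket.recv(1024)   # read up to 1024 bytes from the client
    clientsocket.send(data)          # echo it back (illustrative only)
    clientsocket.close()             # the listening socket keeps listening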

 

Generators, Context-Managers and Coroutines (completing the course)

Generator Pipelines:

  1. Several pipelines can be linked together.
  2. Items flow one by one through the entire pipeline.
  3. Pipeline functionality can be packaged into callable functions.

One example of a generator pipeline:

def seprate_names(names):
    for full_name in names:
        for name in full_name.split(' '):
            yield name

full_names = (name.strip() for name in open('names.txt'))

names = seprate_names(full_names)

lengths = ((name, len(name)) for name in names)

longest = max(lengths, key=lambda x: x[1])

Another approach, if one wants to package the pipeline behind a single function name, is shown below.
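
A minimal sketch of what that packaged version might look like (the function name longest_name is my own):

def longest_name(filename):
    """Package the whole pipeline behind one callable function."""
    full_names = (name.strip() for name in open(filename))
    names = seprate_names(full_names)
    lengths = ((name, len(name)) for name in names)
    return max(lengths, key=lambda x: x[1])

print(longest_name('names.txt'))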

 

Context-Manager:

Why do we need a context manager?

The 'with' statement that we use in Python to do file operations is a context manager. The idea is to "enter a state, do things within that state, and leave the state cleanly". Using with, we open a file, do all the good things we need while it is open, and when we are done the file is closed for us. So 'with' is a context manager: it puts the file into the open state, and after we have done everything we need with the file, it tears that state down.

Other useful cases for context managers:

Open/close a file or socket (even if the code crashes)

Commit/fetch (even if it crashes)

Release a lock (even if it crashes)

When do you really need context managers?

Last but not least: if I were creating a chat bot in Python, I could use a context manager to set up some state, do my stuff inside it, and then go out and have fun knowing the cleanup happens on its own.


We use @staticmethod in a Python class when, no matter what happens, we want to be able to run that method at any cost.

That is just using a decorator. Now, what if we want to create a context manager using a decorator?

So what is a context manager good for?

  1. I have to go to a particular directory, list all files with the .txt extension, then come back to my current location (simple use case; see the sketch after this list).


  2. I have to load a specific ML model, predict against several parameters, get the results, and return to a specific state (unload the model).
  3. I have to open a socket connection, read various kinds of data, and close the socket connection.
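
Here is a minimal sketch of the first use case with contextlib.contextmanager (the helper name working_directory and the path are my own):

import os
from contextlib import contextmanager

@contextmanager
def working_directory(path):
    """Temporarily switch to `path`, then return to the original directory."""
    previous = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(previous)   # runs even if the body raises

# go to a directory, list the .txt files, come back automatically
with working_directory('/tmp'):
    txt_files = [f for f in os.listdir('.') if f.endswith('.txt')]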

Put another way, we can also describe a context manager as:

Setup

Yield control

Wrap up

Context managers are a powerful tool for making code more modular and succinct.

Now, what if we have to use the context manager's yielded value?

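A minimal sketch: whatever the generator yields becomes the value bound by "with ... as".

from contextlib import contextmanager

@contextmanager
def opened(path):
    f = open(path)
    try:
        yield f          # f is the yielded value: with opened(...) as f
    finally:
        f.close()        # wrap-up runs even if the body crashes

with opened('names.txt') as f:
    print(f.readline())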

Maybe a little bit more about context managers:

https://jeffknupp.com/blog/2016/03/07/python-with-context-managers/

What are coroutines, and how do we handle them?

  1. Receive Values
  2. May not return anything
  3. Not for iteration

What is the design of a coroutine?

  1. Receive Input
  2. Process that Input
  3. Stop at yield statement

The send() method is used to send values to a coroutine.
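
A classic sketch in the style of the Dabeaz course linked below: the coroutine receives values through send(), stops at its yield, and never returns anything.

def grep(pattern):
    """Coroutine: receives lines via send() and prints the matching ones."""
    print('Looking for %s' % pattern)
    while True:
        line = yield                 # execution stops here until a value is sent
        if pattern in line:
            print(line)

g = grep('python')
next(g)                      # "prime" the coroutine: advance it to the first yield
g.send('python is fun')      # printed: it matches the pattern
g.send('nothing to see')     # no match, nothing printed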

 

More uses of Co-routines:

Coroutines are really powerful for Data-Processing Operations.

https://stackoverflow.com/questions/303760/what-are-use-cases-for-a-coroutine (concurrency and multiprocessing)

One of the most important courses about coroutines/concurrency, and a really interesting way to handle multiprocessing: http://www.dabeaz.com/coroutines/ (think win-win)

 

Run parallel commands on your Linux system using Python

This is a really simple Python script for when you have to run commands in parallel on your system. All it does is open multiple terminal tabs and run a command on each tab, as per your requirement.


#!/usr/bin/env python
import subprocess

command = 'ping 8.8.8.8'
terminal = ['gnome-terminal']

for host in ('server1', 'server2'):
    terminal.extend(['--tab', '-e', '''
        bash -c '
            echo "%(host)s$ %(command)s"
            ssh -t %(host)s "%(command)s"
            read
        '
    ''' % locals()])

subprocess.call(terminal)

Asynchronous recipes in Python

"Concurrency" is not "parallelism"; maybe it's even better. If you don't work with data science, data processing, machine learning, or other CPU-intensive operations, you will probably find that you don't need parallelism, but you need concurrency more!

  1. A simple example: training a machine-learning model is CPU intensive, or you can use a GPU.
  2. To make various predictions from one model based on many different input parameters, and find the best result, you need concurrency!

There are so many ways one can hack into Python and do cool stuff, whether it is CPU intensive or just plain I/O. One thing you have to believe is that Python does support multiprocessing as well as multithreading.

But for various reasons, when you are doing CPU-intensive tasks, stay away from threading operations in Python. Use NumPy, Cython, or Jython, or write C++ code and glue it to Python, whatever you feel comfortable with.

The number of threads will usually be equivalent to the number of cores you have. If you have hyperthreading on your processor, then you will be able to double the number of threads used.
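
You can check what your machine offers before sizing a pool:

import multiprocessing

# logical cores visible to the OS (hyperthreading counts each thread)
print(multiprocessing.cpu_count())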

We are processing chunks and chunks of data. The common real-world rule is: if you are running I/O-bound tasks, use threads in Python; if you are running CPU-bound tasks, use processes in Python. I have worked on various Python projects where performance was an issue at some level, and at those times I always went for things like NumPy, pandas, Cython, or numba rather than plain Python.

Let's come to the point, and the point is: what are those recipes I can use?

Using concurrent.futures (the futures module is also back-ported to Python 2.x):

Suppose you have to call multiple URLs at the same time using the same method. That is what concurrency actually is: apply the same method to different operations. We can do it using either a ThreadPool or a ProcessPool.

# Using a process pool
import requests
from concurrent.futures import ProcessPoolExecutor, as_completed

def health_check1(urls_list):
    pool = ProcessPoolExecutor(len(urls_list))
    futures = [pool.submit(requests.get, url, verify=False) for url in urls_list]
    results = [r.result() for r in as_completed(futures)]  # when all operations are done
    return results  # a Python list of all results; here you could also use NumPy

Using a ThreadPool is not much different:

# Using a thread pool
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed

def just_func(urls_list):
    pool = ThreadPoolExecutor(len(urls_list))
    futures = [pool.submit(requests.get, url, verify=False) for url in urls_list]
    results = [r.result() for r in as_completed(futures)]  # when all operations are done
    return results  # a Python list of all results; here you could also use NumPy

In the above code, 'urls_list' is just a list of similar tasks that can be processed using the same kind of function.

On the other side, using it with 'with' as a context manager is also not much different. In this example I will use ProcessPoolExecutor's built-in map function.

import concurrent.futures

def just_func(url_list):
    with concurrent.futures.ProcessPoolExecutor(max_workers=len(url_list)) as executor:
        result = executor.map(get_response, url_list)
    return [i for i in result]

Using multiprocessing (multiprocessing is a Python standard-library module that can also be used for asynchronous behaviour in your code):

*In multiprocessing, the difference between map and apply_async is that map takes the whole task list and returns the results in the same order, whereas apply_async submits a single call and returns a handle from which the function's result is collected later.
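
A small self-contained sketch of the difference (the square function is just for illustration):

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == '__main__':
    pool = Pool(processes=4)
    # map: hand over the whole task list, results come back in input order
    print(pool.map(square, [1, 2, 3, 4]))      # [1, 4, 9, 16]
    # apply_async: submit a single call, collect its result later
    async_result = pool.apply_async(square, (5,))
    print(async_result.get())                  # 25
    pool.close()
    pool.join()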

import requests

# Function that runs for each task
def get_response(url):
    """returns the response body for a URL"""
    response = requests.get(url, verify=False)
    return response.text

Now, the above function is simple enough: it takes one URL and returns the response. But if I have to pass multiple URLs, and I want the GET request for each URL to be fired at the same time, that is an asynchronous process rather than plain multiprocessing, because in multiprocessing the processes need to communicate with each other, whereas asynchronous workers don't communicate. (Python uses process-based multiprocessing rather than thread-based; you can do thread-based parallelism in Python, but then you are on your OWN 😀 😛 Hail GIL (Mogambo/Hitler).)

So the above function will be used like this:

from multiprocessing import Pool

URL_list = []  # fill with the URLs you want to fetch
pool = Pool(processes=20)
resp_pool = pool.map(get_response, URL_list)  # fire all GETs across 20 processes
pool.close()  # no more tasks; let the workers finish
pool.join()

This is an interesting link one can go through while getting into multiprocessing in Python (it is process-based parallelism):
http://sebastianraschka.com/Articles/2014_multiprocessing.html

Using Gevent: Gevent is a concurrency library based around libev. It provides a clean API for a variety of concurrency and network related tasks.

import gevent
import random

def task(pid):
    """
    Some non-deterministic task
    """
    gevent.sleep(random.randint(0, 2) * 0.001)
    print('Task %s done' % pid)

def asynchronous():
    threads = [gevent.spawn(task, i) for i in xrange(10)]
    gevent.joinall(threads)

print('Asynchronous:')
asynchronous()

If you have to call asynchronously but want the results returned in a synchronous fashion:

import gevent.monkey
gevent.monkey.patch_socket()

import gevent
import urllib2
import simplejson as json

def fetch(pid):
    response = urllib2.urlopen('http://json-time.appspot.com/time.json')
    result = response.read()
    json_result = json.loads(result)
    datetime = json_result['datetime']

    print('Process %s: %s' % (pid, datetime))
    return json_result['datetime']

def asynchronous():
    threads = []
    for i in range(1,10):
        threads.append(gevent.spawn(fetch, i))
    gevent.joinall(threads)

print('Asynchronous:')
asynchronous()

Assigning Jobs in Queue:

import gevent
from gevent.queue import Queue

tasks = Queue()

def worker(n):
    while not tasks.empty():
        task = tasks.get()
        print('Worker %s got task %s' % (n, task))
        gevent.sleep(1)
    print('Quitting time!')

def boss():
    for i in xrange(1, 25):
        tasks.put_nowait(i)

gevent.spawn(boss).join()

gevent.joinall([
    gevent.spawn(worker, 'steve'),
    gevent.spawn(worker, 'john'),
    gevent.spawn(worker, 'nancy'),
])

When you have to manage Different Groups of Asynchronous Tasks:

import gevent
from gevent.pool import Group

def talk(msg):
    for i in xrange(3):
        print(msg)

g1 = gevent.spawn(talk, 'bar')
g2 = gevent.spawn(talk, 'foo')
g3 = gevent.spawn(talk, 'fizz')

group = Group()
group.add(g1)
group.add(g2)
group.join()

group.add(g3)
group.join()

As with the multiprocessing library, you can also use a Pool to map various operations:

import gevent
from gevent.pool import Pool

pool = Pool(2)

def hello_from(n):
    print('Size of pool %s' % len(pool))

pool.map(hello_from, xrange(3))

Using Asyncio:

Now let's talk about concurrency again! There is already a lot of automation going on inside asyncio or gevent, but as programmers we have to understand how to break "one large task into small chunks of subtasks", so that when we write code we can see which tasks can work independently.

import time
import asyncio

start = time.time()

def tic():
    return 'at %1.1f seconds' % (time.time() - start)

async def gr1():
# Busy waits for a second, but we don't want to stick around...
    print('gr1 started work: {}'.format(tic()))
    await asyncio.sleep(2)
    print('gr1 ended work: {}'.format(tic()))

async def gr2():
# Busy waits for a second, but we don't want to stick around...
    print('gr2 started work: {}'.format(tic()))
    await asyncio.sleep(2)
    print('gr2 Ended work: {}'.format(tic()))

async def gr3():
    print("Let's do some stuff while the coroutines are blocked, {}".format(tic()))
    await asyncio.sleep(1)
    print("Done!")

ioloop = asyncio.get_event_loop()
tasks = [
    ioloop.create_task(gr1()),
    ioloop.create_task(gr2()),
    ioloop.create_task(gr3()),
]
ioloop.run_until_complete(asyncio.wait(tasks))
ioloop.close()

Now, in the above code, gr1 and gr2 take some time to return anything (it could be any kind of I/O operation), so we let gr3 run in the meantime using the event loop, and the event loop runs until all three tasks are completed.

Please have a closer look at the await keyword in the above code. It marks the point where the interpreter can shift from one task to another; you can think of it as a pause point for the function. If you have worked with yield or yield from in Python 2 and Python 3, you will recognise that this is where the coroutine suspends itself and hands control back, just like a generator does at yield.

There is one more library, aiohttp, which is used to handle HTTP requests with asyncio without blocking.

import time
import asyncio
import aiohttp

URL = 'https://api.github.com/events'
MAX_CLIENTS = 3

async def fetch_async(pid):
    print('Fetch async process {} started'.format(pid))
    start = time.time()
    response = await aiohttp.request('GET', URL)
    return response

async def asynchronous():
    start = time.time()
    tasks = [asyncio.ensure_future(fetch_async(i)) for i in range(1, MAX_CLIENTS + 1)]
    await asyncio.wait(tasks)
    print("Process took: {:.2f} seconds".format(time.time() - start))

print('Asynchronous:')
ioloop = asyncio.get_event_loop()
ioloop.run_until_complete(asynchronous())
ioloop.close()

In all the above examples we have just scratched the surface of the world of concurrency; in reality there is much more to look into, because real-world problems are more complex and intensive. There are various other options in asyncio, like handling exceptions within futures, creating future wrappers for normal tasks, and applying timeouts when a task is taking more than the required time so that something else can be done instead.
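
For example, here is a hedged sketch of applying a timeout with asyncio.wait_for (the sleep durations are arbitrary):

import asyncio

async def slow():
    await asyncio.sleep(5)
    return 'done'

async def main():
    try:
        # give the task at most 2 seconds, then do something else instead
        print(await asyncio.wait_for(slow(), timeout=2))
    except asyncio.TimeoutError:
        print('Task took too long, moving on')

ioloop = asyncio.get_event_loop()
ioloop.run_until_complete(main())
ioloop.close()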

I got a lot of inspiration while learning about concurrent programming in Python from the following sources:

https://hackernoon.com/asyncio-for-the-working-python-developer-5c468e6e2e8e
http://www.gevent.org/
https://www.binpress.com/tutorial/simple-python-parallelism/121
http://masnun.com/2016/03/29/python-a-quick-introduction-to-the-concurrent-futures-module.html

Run Flask in Parallel using ThreadPoolExecutor

from flask import Flask
from time import sleep
from concurrent.futures import ThreadPoolExecutor

# DOCS https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor
executor = ThreadPoolExecutor(2)

app = Flask(__name__)

@app.route('/jobs')
def run_jobs():
    executor.submit(some_long_task1)
    executor.submit(some_long_task2, 'hello', 123)
    return 'Two jobs were launched in the background!'

def some_long_task1():
    print("Task #1 started!")
    sleep(10)
    print("Task #1 is done!")

def some_long_task2(arg1, arg2):
    print("Task #2 started with args: %s %s!" % (arg1, arg2))
    sleep(5)
    print("Task #2 is done!")

if __name__ == '__main__':
    app.run()

Running Multiprocessing in a Flask App (Let's Spawn!) Hell Yeah

OK, it took a long time, but finally, yeah finally, I am able to do process-based multiprocessing in Python, and even in Flask. 🙂 Oh yeah! There are various recipes for multiprocessing in Python, but here you can enjoy it with Flask.

:D


from multiprocessing import Pool
from flask import Flask
from flask import jsonify
import ast
import pandas as pd
import requests

app = Flask(__name__)
_pool = None

# tasks: a list of URLs to health-check (fill in your own endpoints)
tasks = []

# Function that runs for each task
def get_response(x):
    """returns the response for a URL"""
    m = requests.get(x, verify=False)
    return m.text

@app.route('/call-me/')
def health_check():
    """renders a pandas dataframe as HTML for the health-check services"""
    resp_pool = _pool.map(get_response, tasks)
    table_frame = pd.DataFrame([ast.literal_eval(resp) for resp in resp_pool])
    return table_frame.to_html()

if __name__ == '__main__':
    _pool = Pool(processes=12)  # this is the important part
    try:
        # insert production server deployment code
        app.run(use_reloader=True)
    except KeyboardInterrupt:
        _pool.close()
        _pool.join()

 

Python For Yankees

Before going further into reading about class development, I would like to tell you one important thing:

Most uses of inheritance can be simplified or replaced with composition, and multiple inheritance should be avoided at all costs.

I am sure that after this you will be able to read lots and lots of code written in Python. 🙂 If you want to do it fast, do it well. 🙂

Python Class Development Toolkit.

def __init__(self):
    pass

# Don't put anything in the instance that you don't need the instance for.

def i_am_class_method(self):
    return 'takes self as argument'

# If you need data shared by the whole class, put it at class level,
# not at instance level.

class IAm(object):
    shared_data = [1, 2, 3, 4, 5]

    def __init__(self):
        print self.shared_data

    def fun_2(self):
        print self.shared_data

Iron-clad rule in Java or C++: do not expose your attributes.

Subclassing is just inheritance: data + methods.

class NewOne(IAm):

    def fun3(self):
        return IAm.fun_2(self)

# Multiple/alternative constructors (when you need to change how the
# class's data is constructed by just calling an alternative function):

import math

class Circle(object):

    def __init__(self, radius):
        self.radius = radius

    def fun_2(self):
        print self.radius

    @classmethod   # alternative constructor
    def from_bbd(cls, new_radius):
        """construct a circle from a bounding box diagonal"""
        radius = new_radius / 2.0 / math.sqrt(2.0)
        return Circle(radius)

Make the alternative constructor work for subclasses as well (return cls(...) instead of hard-coding Circle):

class Circle(object):

    def __init__(self, radius):
        self.radius = radius

    def fun_2(self):
        print self.radius

    @classmethod   # alternative constructor
    def from_bbd(cls, new_radius):
        """construct a circle from a bounding box diagonal"""
        radius = new_radius / 2.0 / math.sqrt(2.0)
        return cls(radius)

Independent methods inside a class: why would we ever need those?
Think of a situation where everything in your code breaks, but you still just need to tell
the user something; all you need is 🙂 a static method!

class Circle(object):

    def __init__(self, radius):
        self.radius = radius

    def fun_2(self):
        print self.radius

    @staticmethod
    def just_method(just_parameter):
        return 'I am independent'

Getters and setters in Python (access data and change data on the fly):
As in other languages, there are built-in ways to access data from a
class as well as change the values of attributes in the class.

Python uses the property decorator to perform such operations:

class Person(object):
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

    @property
    def full_name(self):
        return self.first_name + ' ' + self.last_name

    @full_name.setter
    def full_name(self, value):
        first_name, last_name = value.split(' ')
        self.first_name = first_name
        self.last_name = last_name

    @full_name.deleter
    def full_name(self):
        del self.first_name
        del self.last_name

Slots in Python:
When you have lots and lots of instances, and they are consuming
HUGE amounts of memory, then you should use SLOTS.

Earlier approach:

class MyClass(object):

    def __init__(self, name, identifier):
        self.name = name
        self.identifier = identifier
        self.set_up()

# Approach with __slots__:

class MyClass(object):
    __slots__ = ['name', 'identifier']

    def __init__(self, name, identifier):
        self.name = name
        self.identifier = identifier
        self.set_up()