
Hacker’s Guide to Quantitative Trading (Quantopian Python) Part 2

Quantopian provides the required API functions, data, a helpful community, and a batteries-included web-based dashboard to play with algorithmic trading, create your own trading strategies, and launch your trading model in the live market.

Here I will only talk about code and how it should be written to create your own trading strategy.

There are basically two methods:

initialize() and handle_data()

initialize() acts as an initializer for various variables, much like the __init__ method of a Python class.

What variables you have to declare in initialize() depends on your strategy: we can select a limited number of stocks, days, the type of trading, and the variables required by the algorithm.

A very simple example of initialize() could look like this:

def initialize(context): # consider context just as 'self' in Python

   context.stocks = [sid(24), sid(46632)] # sid stands for security id

initialize() also contains the stuff that will be used many times, or all the time, in our trading algorithm:

1. A counter that keeps track of how many minutes in the current day we’ve got.

2. A counter that keeps track of our current date.

3. A list that stores the securities that we want to use in our algorithm.

Whatever variables you define here will remain persistent (meaning that they'll exist between calls) but will be mutable. So if you initialize context.count as 0 in initialize(), you can always change it later in handle_data().
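For instance, here is a minimal, purely illustrative sketch of such a persistent counter:

def initialize(context):
    context.count = 0 # persists between calls to handle_data()

def handle_data(context, data):
    context.count += 1 # the persistent variable is freely mutable here
    if context.count % 390 == 0: # roughly one US trading day of minute bars
        log.info('one more day of minute bars processed')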

A Simple Example of handle_data():

def handle_data(context,data):

   for stock in context.stocks:

        if stock in data:

            order(stock,1)

Momentum Strategy (a common trading strategy):

In this strategy we treat the moving-average price of a stock as the key factor in deciding whether to go long or short on a security.

Here is a simple explanation of the momentum strategy:

● If the current price is greater than the moving average, long the security.

● If the current price is less than the moving average, short the security.

Now we will use the Quantopian API to implement this strategy for trading. Our algorithm here is going to be a little more sophisticated: we're going to look at two moving averages, the 50-day moving average and the 200-day moving average.

David Edwards writes that “the idea is that stocks with similar 50 & 200 day moving averages are more likely to be fairly valued and the algorithm will avoid some of the wild swings that plague momentum strategies. The 50/200 day crossover is also a very common signal, so stocks might be more likely to continue in the direction of the 50day MA because a lot of investors enter and exit positions at that threshold.”

The decision-making behind the moving averages is as follows:

● If the 50-day moving average is greater than the 200-day moving average, long the security/stock.

● If the 50-day moving average is less than the 200-day moving average, short the security/stock.

So now Let’s make a Trading Bot!

1. First we have to create our initialize() function:

def initialize(context):

   set_universe(universe.DollarVolumeUniverse(floor_percentile=99.5,ceiling_percentile=100))

set_universe() is an inbuilt Quantopian function that gives us the stocks within the required universe. Here we have selected the DollarVolumeUniverse with 99.5 and 100 as our floor and ceiling percentiles. This means we'll be selecting the stocks between the 99.5th and 100th percentile of our universe by dollar*volume score, i.e. the very top slice.

Please read the comments in the code.

   context.stocks_to_long = 5

   context.stocks_to_short = 5
   context.rebalance_date = None # we will get today's date, then keep positions active for 10 days

   context.rebalance_days = 10 # just an assumption for now; 10 days or a finer value


Now that we have defined the required __init__-style parameters in initialize(), let's move to

handle_data()

from datetime import timedelta # needed for the rebalance-date arithmetic

def handle_data(context, data):

   if context.rebalance_date != None: # if a rebalance date is set, compute the next date on which we change positions

       next_date = context.rebalance_date + timedelta(days=context.rebalance_days) # next_date should be that many days away from rebalance_date

   if context.rebalance_date == None or next_date == get_datetime(): # if today is the day, 10 days after we last went long/short on our stocks

       context.rebalance_date = get_datetime() # set rebalance_date to today, so next_date will again be 10 days ahead of rebalance_date

       historical_data = history(200, '1d', 'price')

This fetches the historical data of all the stocks selected in initialize(): '1d' = one-day bars, 200 = number of days, 'price' = we only fetch the price field because that is all our strategy requires; for some other strategy the volume of the stock could be more useful.

  past_50days_mean = historical_data.tail(50).mean()

  past_200days_mean = historical_data.mean()

  diff = past_50days_mean/past_200days_mean - 1

# if diff>0 we will long, if diff<0 we will short

   buys = diff[diff > 0]

   sells = diff[diff < 0]

# here we will get the lists of securities/stocks whose moving-average ratio is

# greater than as well as less than 0

   buys.sort() # sorting the buys list - why? to pick the top securities from the top; more is better
   sells.sort(ascending=False) # reverse sorting the sells list - picking the top securities from the bottom; less is better because we are selling against the market
   # buy_length, buy_weight, short_length and short_weight are assumed to be computed earlier
   # (e.g. from context.stocks_to_long / context.stocks_to_short and the portfolio weights)
   buys = buys.iloc[:buy_length] if buy_weight != 0 else None # buy_length = number of securities we want to purchase
   sells = sells.iloc[:short_length] if short_weight != 0 else None # short_length = number of securities we want to short

Now buys and sells are two lists (remember this carefully); all the decisions are going to be made based on these two lists.

We can also implement risk factors in our trading strategy. Let's implement a minimal form of risk factor based on 0.02 of the last traded price: if the security goes much lower than that, we will exit.

We will go through each security in our data/universe, and those that satisfy the condition of the 'buys' or 'sells' list will be bought/sold.

   # loop over each security in our data/universe
   for sym in data:

       # if the security exists in our sells data
       if sells is not None and sym in sells.index:

           log.info('SHORT:%s' % sym.symbol)

           # stop_price is the real-time price of the security plus the stop offset held in stops
           # (stops is assumed to be a per-symbol offset computed earlier);
           # order_target_percent is an inbuilt function.
           order_target_percent(sym, short_weight, stop_price=data[sym].price + stops[sym])

       # if the security exists in our buys data
       elif buys is not None and sym in buys.index:

           log.info('LONG:%s' % sym.symbol)

           order_target_percent(sym, buy_weight, stop_price=data[sym].price - stops[sym])

       else:

           order_target(sym, 0)


The `order_target_percent` method allows you to order a % target of your portfolio in that security. So if 0% of your total portfolio belongs to AAPL and you order 30%, it will order 30%. But if you already had 25% and you tried ordering 30%, it will only order the 5% difference.
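To make that delta logic concrete, here is a tiny illustrative sketch in plain Python (not the Quantopian internals):

def percent_delta_shares(current_pct, target_pct, portfolio_value, price):
    '''Shares needed to move a position from current_pct to target_pct of the portfolio.'''
    delta_value = (target_pct - current_pct) * portfolio_value
    return int(delta_value / price)

# holding 25% of a $100,000 portfolio in AAPL at $100/share, targeting 30%:
# (0.30 - 0.25) * 100000 / 100 = 50 shares, i.e. only the 5% difference is ordered
print(percent_delta_shares(0.25, 0.30, 100000, 100)) # 50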

You can use three different special order parameters if you don't want a normal market order:

#`stop_price`: Creates a stop order

#`limit_price`: Creates a limit order

#`StopLimitOrder`: Creates a stop-limit order
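As a rough sketch, the parameter names below follow the old Quantopian/zipline order() signature, and `stock` and the prices are placeholders; treat all of it as an assumption and check the current docs:

order(stock, 100)                                      # plain market order for 100 shares
order(stock, -100, stop_price=9.50)                    # stop order: triggers once the price falls to the stop
order(stock, 100, limit_price=10.00)                   # limit order: fills only at the limit price or better
order(stock, 100, limit_price=10.00, stop_price=9.50)  # stop-limit order: combines both thresholds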





How Trading Differs from Gambling:

Most of the time, when you find that you are able to get good returns from your capital, you try to beat the market. Beating the market means trying to earn much more than the fair returns the market is giving for your stock. Beating the market can be attempted through various actions, like reversing the momentum or looking for bad happenings in the market (which is also called finding the shit!). Some people are really good at this kung-fu, but as a budding trader with only your own limited money, one important thing should be remembered: "Protect your capital." That's what most of the big banks do, and if they hire you as their quant or trading-execution person they will expect the same from you. Big banks have billions of dollars that they don't want to lose, but they definitely want to use that money to get good returns from the market.

So they follow one simple rule most of the time:

Guaranteed returns, even if those are low.

[Make sure the returns are positive after subtracting various costs like brokerage, leverage, etc. Getting positive returns while neglecting market costs is far easier, but such strategies should not be used with real money.]

So the real key is to think like a computer programmer in the first place: it should work first. So the first thing to make sure of is getting returns, even if they are low, but stable, by accounting for the various risk factors.

I am quoting some informative points from the SentDex tutorial:

Most individual traders are trading on account sizes of somewhere between maybe $25,000 and $100,000 USD, so their motive is to hopefully increase that account size as much as possible, so this person is more likely to take part in High Risk High Yield (HRHY) strategies.

Most people who use HRHY strategies tend to ignore the HR (High Risk) part, focusing on the HY (High Yield).

The same is common with gamblers, even against astronomical odds with things like the lottery.

In other words, always ask yourself: what is it about the market that makes my strategy work? Because, at the end of the day, algorithmic trading is more about trading than about the algorithm.

Hacker’s Guide to Quantitative Trading (Algorithmic Trading) Part 1

Quantitative Analytics:

The word Quant is derived from Quantitative Analytics. In the present system the financial market is very heterogeneous in nature. With the arrival of many private banks in the financial sector, most of them now also invest their account holders' funds in the stock market as 'safe' trading, with very low but almost guaranteed returns (money they earn from trading).
There was an exception to this 'safe investment' term used by banks when the whole economy crashed: millions of people took their own lives, lost their homes, lost their jobs, and most of the small European countries, like Greece, were almost finished.

The main reason this whole disaster happened in the digital age was 'not looking into the data properly'.

Now we can also say that Quantitative Analytics is the process where a person (a mathematician, expert, data scientist or computer programmer with domain-specific knowledge) parses, analyses and develops results from MBs, GBs or sometimes TBs of data; those results are based on the history of trading. Such results help BIG banks and big investors in the financial market to build trading strategies with the target of gaining maximum profit from trading equities, or at least of playing safely with their money, which yields low but assured returns.

Statistical science plays an important role in the study of Quantitative Analytics:

Statistical science deals with every aspect of our life. Before going further into statistical science we have to understand the meaning of 'statistics'.
According to the Dictionary of Statistical Terms:

"Numerical data relating to an aggregate of individuals, or the science of
collecting, analyzing and interpreting such data."

There are various statistical methods used to analyze data that is quantitative in nature (HUGE or SMALL in its number of variables; the quant's job is to analyze how those variables are correlated), for example: grouping; measures of location: mean, median, mode;
measures of spread: range, variance and standard deviation; skew; identifying patterns; univariate and multivariate analysis; comparing groups; and choosing the right test for the data.

Where the word Quant comes from:
The word Quant comes from the person who uses mathematical formulas and machine-learning techniques to predict the output of the stock market. There are various other profiles as well, like Quant Developer: a trading strategy written out in steps (not as a program) is handed to a programmer with domain-specific knowledge of the financial system,
and it is the Quant Developer's job to convert the trading strategy into a live trading system (a system that buys or sells stocks, options and futures to profit from the market).

Mathematician and Quant:
The relationship between a quant and a mathematician is quite close, or in some sense
it can be stated that a quant is a person having fun in real life with mathematics. A quant calculates various factors while implementing an algorithm/equation in a real-time trading system, and shows the results to other people in the form of graphs rather than complicated equations and significance tests.

So in some sense a quant uses serious statistics to make sense of data and tell people why his/her trading strategy is better at producing profit from the financial market. A quant developer's only job is to write code for the
algorithms being used, but a quantitative analyst is assumed to have various statistical skills that help to make sense of stock-trading data.

Process (not strict steps) of doing quantitative analytics:

Take care of the data: first of all, as an analyst you should not get yourself lost in the data.

Frequency distribution: find the frequencies of occurring values for each variable.
Histograms are best for frequency analysis.

Find descriptive statistics: we can use central tendencies (mean, median, mode) and
dispersion (range, standard deviation, variance) to learn more about the data.

Comparing means: you will need to perform various tests on your data, like t-tests.

Cross tabulation: cross tabulation tells you what the relations are among different variables.

Correlations: find the relationship between two variables.

** Never mix up correlation and cross tabulation in your thoughts.
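As a small illustration of a few of these steps (pandas assumed, with made-up column names and toy data):

import pandas as pd

df = pd.DataFrame({'returns': [0.01, -0.02, 0.015, 0.03, -0.01],
                   'volume': [120, 300, 150, 400, 220]})

print(df['returns'].describe())           # central tendency and dispersion in one shot
print(df['returns'].value_counts(bins=3)) # a crude frequency distribution
print(df.corr())                          # correlation between the two variables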

Trading Strategy/Algorithm:
When traders buy/sell stocks/options/futures in the trading market, they base their decisions on various calculations; a combination of such decisions is called a trading strategy. A strategy can be built and applied without using any programming language. In algorithmic trading, a quant plus a developer come up with a self-running computer program (a trading strategy/quantitative model) built upon various trading decisions to automate
the whole process of buying/selling stocks.

Tips and Tricks while building Quantitative models:

* The important thing about your trading model is that it should be good in back-testing (make sure the back-testers are different for stocks and for futures), but it should be just as good in forward testing (paper trading).

* You need a separate model for separate trading. A trading model (strategy) working in the Bitcoin market will not be beneficial for stocks.
* You should run a strategy as an experimental project for several days to get data from the results, read the data, and refine the strategy.

* Every strategy is somewhat sensitive to various risk factors.

* Think about **risks**, think about eliminations.
* Run multiple models at the same time; some will lose, some will win.

* One strategy is not enough; strategies lose their efficiency after a certain period of time.

* Back-testing is not always true. Never try to create a model that matches your back-test 100%, because that would be over-fitting; rather, try to create generalized/simple models, which are more effective at handling abrupt changes in live trading.

* What actually happens is that strategies end up catching other strategies.

* Right now every strategy needs a human to operate it.

* If we know what kind of shields we have given our model, we will know whether the kind of events that come up in the news can affect our strategy.

* A human alone is not good enough to do trading on their own. We must use trading strategies to come out as great traders with profitable returns.
* Diversification of strategies is just as important as it sounds.

* A momentum trade's behaviour is sometimes a loss and sometimes a very "good" profit, because with momentum we try to find what's hot in the market, and what's hot can go very high as well as very low.

* A trading model may not take into consideration an earthquake or some government falling in a country, but humans can.

* Now, using sentiment-data analysis tools we can incorporate news while building a strategy, but most back-testers do not contain that news data to check the performance of the strategy. So if you say a news-data record will increase the chances of the strategy working, that might not be true all the time.

* Sometimes using news data as input can throw off the entire model, because 'news is not always true' 🙂

* Keep reading new Ideas on Blog Posts, Look for interesting Books.

* Learn how to interpret the live market.

* As in the programming we say it mostly does not work at the first time, that is also true with trading strategies. 🙂

* A back-test is at least good for checking that you don't get negative returns; the higher the Sharpe ratio, the greater the significance of the strategy (a small Sharpe-ratio sketch follows this list).

* The data you use to build your strategy is just as important as the factors you consider while building your model.

* You will come across situations where you feel you need to either stop using a particular strategy or modify it.
* A back-test is a null hypothesis: we assume that in this specific case our model is accurate.
* Always concentrate on Sharpe ratios.

* Quantitative trading is suitable for technically anybody. Quantitative trading can be slow or fast; one does not need to be a math ninja or a computer wizard to do quantitative trading.

* It is always good to start with simple strategies present in the public domain: implement those, run tests, tune, play and get rich. 🙂

* It's better if you go with open-source languages because those can very easily be turned into live trading systems (Python or C++).
* Choosing a standard language is always a great idea because a wide range of library support is there to build things! 🙂 😀 (Python)
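Since the Sharpe ratio comes up twice in the list above, here is a minimal sketch of how it is usually computed from a series of periodic returns; the 252 trading-day annualisation factor and the toy numbers are assumptions:

import numpy as np

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=252):
    '''Annualised Sharpe ratio of a series of periodic returns.'''
    excess = np.asarray(returns) - risk_free_rate / periods_per_year
    return np.sqrt(periods_per_year) * excess.mean() / excess.std()

daily_returns = [0.001, -0.002, 0.0015, 0.003, -0.001] # toy data
print(sharpe_ratio(daily_returns))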

Top view of Data Science and machine learning

As a data scientist you should not limit yourself to only the training you got; you have to think far ahead in terms of the various things around you. Think of yourself as an amazing or great thinker: how many variables could possibly run the system, and how many variables can really affect the system, and at which level?

Let’s Look at top three Questions:

What is a Data Scientist?

Why does it appear to be such a hard job?

What skill sets does a Data Scientist need?

The first thing we have to remember, or take care of, is big data. Organizations have a lot of data that they collect from various sources (mostly clients and activities), but that data is HUGE and they have no idea how to manage it in any particular order. A data scientist's job is to find meaning in that data.

What kind of skills should one look for in a data scientist?

For a data scientist's job one can hire a PhD, mathematician or statistician, or one can also grow a data scientist within the organization.

What is the fundamental job of a Data Scientist?

A data scientist is one who makes new discoveries from data.

That's what scientists do. A scientist first makes a hypothesis and then tries to investigate that hypothesis under various conditions; in the case of data scientists, they just do it with data!

Data scientists look for meaning and knowledge in data, and they do that in a couple of different ways.

Visualization of Data:

For example, one way is data visualization. Data scientists visualize data in various forms and look for meaning in it; that's also what a business-intelligence person or data analyst might do.

Using Algorithms:

Advanced algorithms actually run through the data looking for meaning. Such algorithms include machine-learning algorithms: neural networks, SVMs, regression algorithms or k-means. There are dozens of algorithms, and running them through data looking for meaning is one of the fundamental tools of a data scientist.

So to use those algorithms a data scientist must have knowledge of mathematics, statistics and computer science.

So how does a data scientist's work get started or done?

A data scientist is given a large data set, or maybe a small data set, along with a question.

Something like: which customers are likely to return?

Or: which customers are likely to buy on weekends?

Or: how many families buy sweets/fruits on festivals? (You can find the income range of the families.) That is a classic statistics problem.

Now it is the data scientist's job to run various algorithms on the data and look for the answer. Here is a simple thing one must think about: how or why would a specific algorithm work out on this data? If we have a basic or general-level knowledge of algorithms, then we can identify whether a given algorithm would really answer such a question.

"So data scientists go through various algorithms until they can find some pattern in the data to answer the questions."

The same thing applies to any trading strategy: we have to look, in our research, for why a specific algorithm would work out. Another such question is how a data scientist can improve the recommendations of a recommendation engine.

Netflix ran a competition in which it would pay a million dollars to whoever improved its recommendation algorithm by 10%.

Five data scientists actually came up with an algorithm that did that.

So again we can say that data scientists are people who answer questions, and they use data, or a combination of data and algorithms, to answer those questions.

When you have a large dataset with various categories and columns, you have to rely on various algorithms, so fundamental knowledge of such algorithms is what a data scientist should have.

What a Data Scientist is not?

There are various myths about data scientists. A data scientist is not just a Java programmer who knows Hadoop; many people are billing themselves as data scientists because they have such technical skills, but they are not data scientists unless they know data-discovery techniques and how algorithms would work on that data!!

What is the difference between a data scientist and a data analyst?

We should also not confuse a data analyst or business analyst with a data scientist. A data analyst is one who creates reports, graphs and dashboards based on data, and those reports are based on their own knowledge of what they think is "important" to show or consider.

A data scientist is one who hypothesizes about what is important and then tries to prove that hypothesis.

Now, it is great for one person to have both programming skills and business domain knowledge, but most important is fundamental knowledge of algorithms, mathematics and statistics. That is one reason it is a bit difficult to find a data scientist: it requires a unique set of skills.

Python for text processing

Python is more about 'programming like a hacker'. While writing your code, keeping things in mind like reference counting, type checking, data manipulation, using stacks, managing variables, eliminating unnecessary lists, and using fewer and fewer "for" loops can really warm up your code: great-looking code, less usage of CPU resources and great speed.

Slower than C:

Yes, Python is slower than C, but you really need to ask yourself what 'fast' means and what you really want to do. There are several ways to write Fibonacci in Python. The most popular one uses a 'for' loop, because most programmers coming from a C background use lots and lots of for loops for iteration. Python has for loops as well, but if you can avoid a for loop by using the internal loops provided by Python's data structures and libraries like NumPy for array handling, you will have a win-win situation most of the time. 🙂
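A tiny illustration of that point, pushing the loop down into NumPy instead of writing it yourself (timings will vary by machine):

import numpy as np

# explicit Python for loop
squares_loop = []
for v in range(1000000):
    squares_loop.append(v * v)

# the same work expressed through NumPy's internal loop
arr = np.arange(1000000)
squares_np = arr * arr # vectorised, the iteration happens in C under the hood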

Now let's go through some Python tricks that are super cool if you are someone who manipulates, filters, extracts or parses data most of the time in your job.

Python has many inbuilt text-processing methods:

>>> m = ['i am amazing in all the ways I should have']

>>> m[0]

'i am amazing in all the ways I should have'

>>> m[0].split()

['i', 'am', 'amazing', 'in', 'all', 'the', 'ways', 'I', 'should', 'have']

>>> n = m[0].split()

>>> n[2:]

['amazing', 'in', 'all', 'the', 'ways', 'I', 'should', 'have']

>>> n[0:2]

['i', 'am']

>>> n[-2]

'should'

>>>

>>> n[:-2]

['i', 'am', 'amazing', 'in', 'all', 'the', 'ways', 'I']

>>> n[::-2]

['have', 'I', 'the', 'in', 'am']

Those are uses of lists to do string manipulation. Yeah no for loops.

Interesting portions of Collections module:

Now let’s talk about collections.

Counter is just my personal favorite.

When you have to go through a 'BIG' list and see what the actual occurrences are:

from collections import Counter

>>> Counter(xrange(10))

Counter({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1, 8: 1, 9: 1})

>>> just_list_again = Counter(xrange(10))

>>> just_list_again_is_dict = just_list_again

>>> just_list_again_is_dict[1]

1

>>> just_list_again_is_dict[2]

1

>>> just_list_again_is_dict[3]

1

>>> just_list_again_is_dict['3']

0

Some other methods using counter:

Counter('abraakadabraaaaa')

Counter({'a': 10, 'r': 2, 'b': 2, 'k': 1, 'd': 1})

>>> c1=Counter('abraakadabraaaaa')

>>> c1.most_common(4)

[('a', 10), ('r', 2), ('b', 2), ('k', 1)]

>>> c1['b']

2

>>> c1['b'] # work as dictionary

2

>>> c1['k'] # work as dictionary

1

>>> type(c1)

<class 'collections.Counter'>

>>> c1['b'] = 20

>>> c1.most_common(4)

[('b', 20), ('a', 10), ('r', 2), ('k', 1)]

>>> c1['b'] += 20

>>> c1.most_common(4)

[('b', 40), ('a', 10), ('r', 2), ('k', 1)]


Arithmetic and unary operations:

>>> from collections import Counter

>>> c1=Counter('hello hihi hoo')

>>> +c1

Counter({'h': 4, 'o': 3, ' ': 2, 'i': 2, 'l': 2, 'e': 1})

>>> -c1

Counter()

>>> c1['x']

0

Counter is like a dictionary, but it also keeps the count of every item you are looking at, so you can plot the results on graphs.
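For example, here is a rough sketch of plotting the most common letters; matplotlib is an extra assumption here, it is not part of collections:

import matplotlib.pyplot as plt
from collections import Counter

c = Counter('abraakadabraaaaa')
labels, counts = zip(*c.most_common(4)) # top 4 letters and their counts
plt.bar(labels, counts)                 # categorical bars work on recent matplotlib versions
plt.show()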

OrderedDict:

It keeps your chunks of data in a meaningful (ordered) manner.

>>> from collections import OrderedDict
>>> d = {'banana': 3, 'apple':4, 'pear': 1, 'orange': 2}
>>> new_d = OrderedDict(sorted(d.items()))
>>> new_d
OrderedDict([('apple', 4), ('banana', 3), ('orange', 2), ('pear', 1)])
>>> for key in new_d:
...     print (key, new_d[key])
... 
apple 4
banana 3
orange 2
pear 1

Namedtuple:

Think of it this way: you need to save each line of your CSV into a list of lines, but along with that you also want to take care of memory and still be able to access each line like a dictionary-style data structure. That comes in handy when you are fetching lines from an Excel or CSV document, which happens a lot when you work in a data-processing environment.

# The primitive approach
lat_lng = (37.78, -122.40)
print 'The latitude is %f' % lat_lng[0]
print 'The longitude is %f' % lat_lng[1]

# The glorious namedtuple
from collections import namedtuple
LatLng = namedtuple('LatLng', ['latitude', 'longitude'])
lat_lng = LatLng(37.78, -122.40)
print 'The latitude is %f' % lat_lng.latitude
print 'The longitude is %f' % lat_lng.longitude
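Tying that back to the CSV idea, here is a minimal sketch; the file name 'quotes.csv' and its columns are hypothetical, just to show the pattern:

import csv
from collections import namedtuple

Row = namedtuple('Row', ['symbol', 'price', 'volume'])

with open('quotes.csv') as f:
    reader = csv.reader(f)
    next(reader)                           # skip the header line
    rows = [Row(*line) for line in reader] # each CSV line becomes a lightweight named record

print(rows[0].symbol)  # field access by name, with a tuple-like memory footprint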

ChainMap:

It is a container of containers: yes, that's really true. 🙂

You'd better be on Python 3.3 or above to try this code.

>>> from collections import ChainMap

>>> a1 = {'m':2,'n':20,'r':490}

>>> a2 = {'m':34,'n':32,'z':90}

>>> chain = ChainMap(a1,a2)

>>> chain

ChainMap({'n': 20, 'm': 2, 'r': 490}, {'n': 32, 'm': 34, 'z': 90})

>>> chain['n']

20

# let me make sure of one thing: it does not combine the dictionaries, instead it chains them.

>>> new_chain = ChainMap({'a':22,'n':27},chain)

>>> new_chain['a']

22

>>> new_chain['n']

27

Comprehensions:

You can also do comprehensions with dictionaries or sets as well.

>>> m = {'a': 1, 'b': 2, 'c': 3, 'd': 4}

>>> m

{'d': 4, 'a': 1, 'b': 2, 'c': 3}

>>> {v: k for k, v in m.items()}

{1: 'a', 2: 'b', 3: 'c', 4: 'd'}


startswith and endswith methods for string processing:

startswith, endswith: all things have a start and an end. Often we need to test the starts and ends of strings; for that we use the startswith and endswith methods.

phrase = "cat, dog and bird"

# See if the phrase starts with these strings.
if phrase.startswith("cat"):
    print(True)

if phrase.startswith("cat, dog"):
    print(True)

# It does not start with this string.
if not phrase.startswith("elephant"):
    print(False)

Output

True
True
False

map and imap as inbuilt functions for iteration:

In Python 3, map returns a lazy, generator-like iterator under the hood, which helps to save a lot of memory. In Python 2, map eagerly builds and returns a whole list, so if you want the lazy behaviour there you can use the 'itertools' module, where the lazy version of map is called imap (from itertools import imap).

>>> m = lambda x: x*x
>>> print m
<function <lambda> at 0x7f61acf9a9b0>
>>> print m(3)
9

# now, as we understand, lambda returns the value of an expression and can stand in for a small function;
# one just has to look out for a few other things when using it for anything more involved

>>>my_sequence = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]
>>>print map(m,my_sequence)
[1,4,9,16,25,36,49,64,81,100,121,144,169,196,225,256,289,324,361,400]

#so square is applied on each element without using any loop or if.
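Before jumping to the notebook, here is a quick taste of filter and reduce in the same Python 2 style (this reuses my_sequence from the example above):

from functools import reduce # reduce is also a builtin in Python 2; the import keeps it explicit

even = lambda x: x % 2 == 0
add = lambda x, y: x + y

print filter(even, my_sequence) # keep only the even numbers
print reduce(add, my_sequence)  # fold the whole sequence into one value: 210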

For more on map, reduce and filter you can fetch the following Jupyter notebook from my GitHub:

http://github.com/arshpreetsingh

 

How to learn Fast!!

Learning is divine, and sometimes the process of reaching that divine feeling can be stressful. Most of the time we do things at work or in life to make life more interesting and rich (in terms of knowledge and security). The claim that learning is DAMN difficult and takes too much time is the biggest lie running around us, and it is really making us slow and less progressive in our lives.

All we really need is to take care of the few hurdles that come up in our situations, learn a few tactics, and build a unique attitude towards those tactics, so we can overcome such stuff and be good at anything in just 20 hours.

Learning new skills improves your life:

We as regular people have stuff to do in day-to-day life to earn some bucks and pay rent; even if you are a student you have various tasks like playing, conversations over coffee, movies, class bunks, etc. So whether you are in professional life or not, you have no time at all! But one thing we all know is that learning a new skill can improve our life, and not just yours but even the lives of the people around you!

So the first thing is to believe in this thought/idea/saying: 'Learning a new skill improves life.' It will give a lot of fuel to your feelings and will motivate you constantly at each hurdle you hit for the specific skill you are going to learn any time soon.

Set your time for each day, or just take 4-5 days off from everything: as I told you, it only takes about 20 hours to learn anything or to be good at anything, so you have to manage those 20 hours. If you are going to give 20 hours to each project/skill you want to learn or do, you have to set the time for sure.

it will work out something like this:

1 hour each day → 20 days to complete the 20 hours of learning.

2 hours each day → 10 days to learn anything or to be good at anything.

4 hours each day and BOOM! → just five days to learn anything!

One thing you have to understand carefully: if you are giving 4 hours of your daily life to a specific task, you have to give yourself a challenge, or a set of challenges, that you have to complete in time. Don't make the challenges too hard or too soft, just the right amount, and that right amount will depend entirely on your capacity for memorizing/reading and doing practical work related to that skill. The split between research and practice may vary with the skill: if you want to learn swimming you will spend maybe half an hour reading and the other 3.5 hours in the swimming pool; if you are learning programming it is good to give 1 hour to reading about some basics of programming, and the rest will be consumed by solving a challenge. Let me make one thing clear: solving a programming challenge does not mean looking on Google for the solution. 😀

If you are going to learn about machine learning or modeling a system, you should give about 50-50% of the time to each of the two tasks.

By following the above approach you don't have to wait forever to learn or do anything new.

Perfection is the enemy of Good:

There is another piece of research which says you have to spend 10,000 hours if you really want to master anything, and that is also true in another sense: that research is based on the lives of people who are GODs in their fields.

So you don't have to be a GOD, or just perfect, to learn a new skill and enjoy the returns that come from it. For example, if you want to learn how to play football you just have to read some rules, get a football, find a ground/park around and kick the ball; maybe after 7-8 days you will be able to find some friends or a team or others to join you. That part will be easy, but it will really take 10,000 hours if you want to compete against Ronaldo or Messi.

And when you get good enough, you enjoy doing it, and that leads to more practice and more perfection of that skill.

Make your decision and set target performance level:

You have to decide first what you actually want to do with a skill, or what you actually want to learn. There will be many things you want to do in your life, but you really have to write them down in some manner. The other important thing is setting a target performance level: how much you want to gain from a specific skill. If you want to learn programming, you have to tell yourself whether you are learning because you want to build your business website, because you want a job at a company, or because you want a job at Google.

It is always great to dream BIG, but having small stepping stones does matter, so if you want a job at Google as a programmer it is always good to work on a personal project first, then move to some professional paid work; after that you will gain the ability to guide yourself on where to go from that point.

In other words, once you get the most basic proficiency in something, it sucks less, and that baseline level gives you the inspiration to learn more about that skill.

Deconstruction of skill:

Most of the time you will see that a skill is made up of various sub-skills. When you care about learning a new skill, a quick study of it can tell you how many sub-skills there are in that particular skill-set. Some sub-skills are easy to pick up but some are really difficult to understand, so this deconstruction can help you see which easy skills you can learn first, as well as which subset of sub-skills you actually need or want to learn.

For example, if you want to learn about algorithmic trading you don't need to learn first about marketing strategies, macro economics, the policies and factors affecting Wall Street, the internal structure of Wall Street, quantitative analytics, machine learning, and the various machine-learning techniques used for research while constructing an algorithm/strategy. In reality, for starting out you just have to learn the strategies that are currently being used by most traders and give good returns; that number of strategies is not more than 10 or so. So at first, for algorithmic trading, you have to know those strategies and know under which market conditions each strategy should be applied, so you will be able to get better results from your trading.

In this process you will find that 2-3 sub-skills repeat over and over again, which helps you learn and do things much faster, and that saves you a lot of time and energy, which is most important.

Research VS Practice:

When we have to learn any skill, we procrastinate! At some level procrastination is a really good thing, because in the back of your brain you unconsciously process and think about that skill, but too much of it will kill your focus as well, so the right amount of procrastination is great for you. When we learn anything new we read, research and discuss, but limiting ourselves to research mode will kill our productivity as well.

The human brain loves to do research, but we need to switch constantly between the two modes: read/research and do.

For example, if you really are going to learn programming, you can either read 5-6 books first, or just try to write a function that fetches birthday information for your friends from Facebook and lets you know if anyone's birthday falls in the present month.

Practice makes you perfect, but how to practice?

There are various things you have to know about practice. Make sure that whatever new thing you are doing or learning, you do it just before you go to sleep and try it out again as the first thing in the morning. Studies show that during sleep your brain turns your small practice sessions into good neuron structures for passing various messages, and such messages make your mind stronger and faster at grasping new skill-sets.

The above method works whether it is a cognitive or a motor skill.

Removing the barriers: (a general approach)

Sometimes those barriers are just environmental distractions. You have to make a list of the distractions that really come into your world when you try to learn a new skill. Those distractions should be turned off: your phone or chat, some sound coming from outside, your TV, or, last but not least, TURN OFF YOUR INTERNET.

If you want to learn how to play the harmonica, you just have to put it somewhere in front of you! This is behavioural psychology: make sure that rather than getting distracted by some other shiny object, the first thing you see is the thing you want to do or learn.

It is something like keeping the things you want to learn or do on your computer desktop, but that does not mean your desktop should be overfilled with things, because that also kills your productivity and your system's speed.

It has also been observed in studies that listening to vocal music while doing or reading something, or even programming, hurts your ability to be productive, but listening to non-vocal music or some jazz will not just help you increase your productivity but also help to improve your state of mind.

Commit to practice for at-least 20 hours!


ORMise your DB operations!

ORM stands for Object Relational Mapping.

Now what is that?

Compared to traditional techniques of exchange between an object-oriented language and a relational database, ORM often reduces the amount of code that needs to be written.

There must be some kind of downside for this approach?

Disadvantages of ORM tools generally stem from the high level of abstraction obscuring what is actually happening in the implementation code. Heavy reliance on ORM software has also been cited as a major factor in producing poorly designed databases.
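To make the trade-off concrete, here is a rough sketch of the same query written against the raw DBAPI and through SQLAlchemy's expression language; the table and column names are made up, and the SQLAlchemy calls follow the old 1.x-style API used later in this post (newer versions change some of these calls):

import sqlite3
from sqlalchemy import create_engine, MetaData, Table, select

# raw DBAPI: you write the SQL string yourself
conn = sqlite3.connect('example.db')
rows = conn.execute("SELECT symbol, price FROM orders WHERE price > ?", (100,)).fetchall()

# SQLAlchemy expression language: the same intent composed from Python objects
engine = create_engine('sqlite:///example.db')
orders = Table('orders', MetaData(engine), autoload=True)
stmt = select([orders.c.symbol, orders.c.price]).where(orders.c.price > 100)
rows = engine.execute(stmt).fetchall()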

 

 

That diagram is just pulled from the Internet; it is supposed to show the working structure of SQLAlchemy, which is the main subject of this blog post.

Let's start from the ground up:

  • We have a database. (Of course, what else should we have, mangoes? 😀 )
  • A DBAPI. (Definitely needed, otherwise how else will we make calls to our database for read and write operations?)
  • SQLAlchemy Core

Before talking about SQLAlchemy CORE we should really talk about what SQLAlchemy people believe about ORMs.

SQLAlchemy CORE

SQLAlchemy's overall approach to these problems is entirely different from that of most other SQL/ORM tools, rooted in a so-called complementarity-oriented approach; instead of hiding away SQL and object relational details behind a wall of automation, all processes are fully exposed within a series of composable, transparent tools. The library takes on the job of automating redundant tasks while the developer remains in control of how the database is organized and how SQL is constructed.

The main goal of SQLAlchemy is to change the way you think about databases and SQL!

That is true: when you start working with it you feel like you are controlling your DB with your crazy logic, not with DB queries. (That looks like a bit of freedom; it could crash my DB if I have no idea what I am doing, but I guess that is the beauty of it. 😉 😀 )

Core contains the methods, integrated with the DBAPI, that create the connection with the DB, handle sessions, create and delete tables, rows and columns, and take care of insertion, execution, selection and accessing values by ID. It really feels like you are just writing your favourite language while handling DB operations. Moreover, every operation just works on the fly unless you have really messed up your DB, like breaking the connection during a read/write, declaring wrong types, messing with fields, or just dumping data without even parsing and cleaning it a bit. Exception handling really comes into play when you interact with a DB this way. 🙂 ❤ 🙂 [I just love programming and its nature]

There are many things as well from SQLAlchemy core those we can talk about but I feel we should stop here otherwise I might have to shift my career from developer to writer. 😉 😀

Let's taste some code so this post will really help me in near future when I will work with much complicated DB operations those really need mind mash up. 😉


from sqlalchemy import * # don't use * in production

# if you are using MySQL look at the commented line
# engine = create_engine('mysql+pymysql://:@localhost/mdb_final')

engine = create_engine('sqlite:////home/metal-machine/Desktop/sqlalchemy_example.db')

metadata = MetaData(engine)

# creating a table; be careful with data types 🙂
omdb_data = Table('positions', metadata,
Column('omdb_id', Integer, primary_key=True),
Column('status', String(200)),
Column('timestamp', Float(10)),
Column('symbol', String(200)),
Column('amount', Float(10)),
Column('base', Float(10)),
Column('swap', Float(10)),
Column('pl', Float(10)),)

omdb_data.create() # creating the table in the DB
mm = omdb_data.insert() # insert construct for this table

# inserting one row; the keys must match the columns defined above (placeholder values, for illustration only)
mm.execute({'status': 'ACTIVE', 'timestamp': 1474280100.0, 'symbol': 'BTCUSD', 'amount': 0.5, 'base': 600.0, 'swap': 0.0, 'pl': 12.5})

So the above can be considered the simplest form for understanding DB write operations using SQLAlchemy.

Making connection and updating DBs.

# make sure this DB is already created; this time we are only creating a connection to read
# or insert data if we need to.

bit_fine_data = create_engine('sqlite:////home/metal-machine/Desktop/sqlalchemy_example.db')
order_data_meta = MetaData(bit_fine_data)
# loading the tables we need from the DB: we just pass the table name to the
# Table class so we can access, create, insert and execute from one variable.

positions_table = Table('positions', order_data_meta, autoload=True)
balance_status_table = Table('balance_status', order_data_meta, autoload=True)
account_info_table = Table('account_info', order_data_meta, autoload=True)

# inserting values into the table ('positions' here is a dict built elsewhere)
m = positions_table.insert({'status': positions['status'], 'timestamp': positions['timestamp'],
                            'symbol': positions['symbol'], 'amount': positions['amount'],
                            'base': positions['base'], 'swap': positions['swap'],
                            'pl': positions['pl']})

# executing the insert command
print bit_fine_data.execute(m)

How to read data from rows or columns from DB:


db = create_engine('sqlite:////home/metal-machine/Desktop/order_id.db')
metadata = MetaData(db)
# creating an instance for the table 'orders'
tickers = Table('orders', metadata, autoload=True)

# selecting a particular column from the table 'orders' (select() comes from sqlalchemy, imported above)
time_stamp = select([tickers.c.timestamp])
# creating an array from the data we get in the 'timestamp' column (creating the array is optional here)

import numpy as np
timestamp_array = np.array([row[0] for row in db.execute(time_stamp)])

There are many more things in SQLAlchemy Core, but I believe we should stop here and look at other things as well.

Stay tuned for the SQLAlchemy ORM part.

Rocks cluster for virtual containers

First of all I would like to thank the Rocks community for saving us a lot of money. Initially we were thinking about buying very expensive hardware and using it as a dedicated server on which we would be able to run multiple Docker containers as well as multiple virtual machines. Such systems are quite expensive; the following examples of such systems are worth considering when you are really serious about computing power, either for research or for a server-hosting kind of business.

  1. http://www.ebay.in/itm/191944430915 (A base-class example, price range is 50K)
  2. http://stores.ebay.com/Cypress-Technology-Inc/HP-9000-Servers-HP-UX-/ (Other possible high-availability options, price range is more than 100K)

 

But we had to do the setup with a solution that should not cost more than 20-30K, and we wanted at least 8 cores and 16 GB of RAM. Presently our requirement was not too high, so rather than spending a large amount on SSDs (solid-state drives) we just settled for normal mechanical HDs.

We used old Core 2 Duo and dual-core CPUs as slave nodes for the Rocks cluster. Presently we are using a second-generation i3 home PC as the front node, which we might upgrade in the near future, but it is really efficient and working pretty much fine on CentOS. ❤ ❤

 

Now, when we talk about cluster computing, only one thing comes to mind: a set of connected CPUs performing heavy operations, using all the cores together to run some kind of simulation and feeling like a scientist at NASA. 😀

Thanks to Dr. H.S. Rai (http://hs.raiandrai.com/), who introduced me to Rocks Cluster many months ago; that really changed my perception about super-computers, parallel processing and most of the stuff which I am still not able to remember. 😛

So, back to clusters! There are many types of clusters; it really depends on your problem, i.e. what kind of problem you want to solve using such systems.

Problem: the user/client wanted a simple machine having multiple cores and GBs of memory, so that he/she would be able to create a new virtual container for any new user as per the requirement.

This tutorial assumes that you have installed the Rocks front node on one of the systems and have lots of other hardware available to connect to your front node.

If you are still not getting what I am trying to say, you'd better first go to the Rocks cluster website and look at what they are really doing! Adding compute nodes (it is one form of cluster):

 

For adding virtual containers Rocks comes with the XEN roll (http://www.rocksclusters.org/roll-documentation/xen/5.0). There are many Rocks rolls that come as per the requirement. For example there is the HPC roll that comes with OpenMPI (an implementation of the Message Passing Interface), which can be used if you want to execute your code across two, three or more nodes, using the computing cores of several systems together. A generic way looks something like this:

# execute_program compute-node-0 compute-node-1 compute-node-3

Such types of systems are used when you have a lot of data to analyse or handle, but even for that the present industry relies on expensive stuff rather than using the Rocks implementation. 😦 ;D Let's save that for another day and concentrate only on the virtualization stuff.

 

So for the implementation of virtualized containers we have to install the XEN roll on the front node. It can be installed during the normal installation of the front node if you are using the Jumbo DVD (comes with all rolls, ~3.something GB), or Rocks also provides all rolls (http://www.rocksclusters.org/roll-documentation/) as separate ISOs.

After successful installation of the XEN roll, one needs to connect the slave node either directly to the Ethernet card or through a network switch. (Make sure that while doing all this stuff you are logged in as the root user.)

Execute the following command. (That's my favourite command in the whole world; I FEEL like GOD 😀 )

#  insert-ethers

You will get a screen like the following, or it could be different if you are using another version of Rocks, but you only have to concentrate on VM Containers. Make sure that at this point your slave node has PXE boot enabled; to enable PXE boot you have to look in the slave node's BIOS.

 

 

Hit enter after choosing the required option and you will see that the installation on the slave node has started. It could take some time, so have patience. 😛

While your VM container is being installed, please have a look at the stuff we are doing, so you will be able to understand the architecture of our cluster.

 

 

 

 

 

Or you can also see our creativity as well. 😛

After successful installation of the VM container you will see that it is available in your system; save your node and quit. To analyse this whole process, or to get an idea of what I am really talking about, you can look at this link as well, but let me be clear that here we are using a VM container, not a compute node. (http://www.rocksclusters.org/rocks-documentation/4.3/install-compute-nodes.html)

To assign an IP to your VM container run the following command, but make sure you have a static IP, so people will be able to access your container from the public internet.

# rocks add cluster ip="your_static_IP" num-computes=<1,2,or 3>

The above command looks simple: just mention your static IP and the number of compute nodes
you want to use. It could be 1 or 2, depending on how many nodes you have and
how many VM containers you want to create.

Now let's clear up one thing first: so far we have only created a virtual cluster, not virtual machines.

# rocks list cluster

Above command will give you available VM clusters present in your system.

Now, before creating virtual machines, we need to create an RSA key pair so we will be able to log in from the front node to the

virtual cluster and do the required operations:

# rocks create keys key=private.key passphrase=no
Setting the option passphrase=no means you will not be asked for a password, but you can skip
that option if you want to use a password with your security key.

Add that key to your newly created virtual cluster:

# rocks add host key frontend-0-0-0 key=public.key
Following command will start installation of VM on your Virtual cluster:

# rocks open host console frontend-0-0-0 key=private.key

After the installation of the VM on the frontend, it now depends on you how you want to add virtual nodes to your system. This time these nodes will be really virtual, and you can associate your static IP with them.

Again we are back to insert-ethers, but this time we are logged in to the virtual frontend node that we created by connecting one or more nodes together. (This text is written in bold format because it is a mind-blowing concept; I have blown my mind many times while understanding this step, but I really don't want you to blow yours. :D)

AGAIN: I am shouting that we have to log in to the VM front node, not the real front node 😀

# insert-ethers

Select “Compute” as the appliance type. (This time select Compute, and you don't have to worry about booting and setting up a slave node because we are working virtually!!! Yeah man! I think I am high at this point and songs are playing in my mind :D)

##########################################################################################

In another terminal session on vi-1.rocksclusters.org, we’ll need to set up the environment to send commands to the Airboss on the physical frontend. We’ll do this by putting the RSA private key that we created in section Creating an RSA Key Pair (e.g., private.key) on vi-1.rocksclusters.org.

Prior to sending commands to the Airboss, we need to establish a ssh tunnel between the virtual frontend (e.g., vi-1) and the physical frontend (e.g., espresso, where the Airboss runs). This tunnel is used to securely pass Airboss messages. On the virtual frontend (e.g., vi-1), execute:

# ssh -f -N -L 8677:localhost:8677 espresso.rocksclusters.org

###########################################################################################

Now we can securely send messages to the Airboss.

Did I tell you what is Airboss?

 

 

 

 

Now make sure you know the MAC addresses of your systems, so you will be able to power them on/off using the following command:

# rocks set host power <mac-address> key=private.key action=install

How to get MAC address?

# rocks list host macs <your-cluster-name> key=private.key

<your-cluster-name> is the real front node that you named while installing Rocks on your system the first time!

The above command will give output like this (your output will definitely be different, according to your compute nodes):

MACS IN CLUSTER  
36:77:6e:c0:00:02
36:77:6e:c0:00:00
36:77:6e:c0:00:03

When you power on your real node in the VM container, you will see that it is
detected by the VM container as follows:

 

To turn your VM off following command should be executed:

# rocks set host power compute-0-0 key=private.key action=off

That was all about setting up a virtual cluster and turning your nodes on/off within the VM containers. Let me clarify one thing: a VM container is one that contains various physical nodes, which we can use in combined form to create virtual machines as big or as small as we want. (😛 that looks like an easy definition 😛)

 

OK now that was most difficult part and if you have reached here just give yourself a BIG SABAASH!

 

If you are aware of Virt-manager, provided by Red Hat, you are good to have a smooth ride from here; otherwise definitely take a look at (https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/chap-Virtualization_Administration_Guide-Managing_guests_with_the_Virtual_Machine_Manager_virt_manager.html)

Now keep yourself in the real front node as the root user, and make sure the required virtual clusters are running as required, otherwise you will not be able to create virtual machines on the virtual containers. (I am using the word 'virtual' so many times that I am really not sure whether I am in the real world or a virtual one.)

As root user in Front-node run:

# virt-manager

You should be able to see your VM containers there and virtual-machines:

 

Voilaa!!!

Now Don’t ask What to do with your Virtual Machines. 😛
Here is our setup:

 


Power of brain relaxation

This is a kind of funny thing that is happening to me these days: I am trying to be as relaxed as possible most of the time, and it is increasing my productivity. I find the people around me much cooler, calmer, happier, more positive, smiling and funny.

relaxation = less stress on the mind = sit free and do nothing except thinking 😀

 

If I am sitting freely most of the time, then how can I be more productive? Or am I more productive because my mind loves to have soft reboots after the completion of small programming tasks? Soft reboots could be anything:

  1. Going wash-room and say hello to stranger
  2. Looking at pretty girls in office 😀
  3. closing your eyes and remember the time when you play with your dog
  4. thinking about your dreams
  5. making possible your dreams by reading great quotes on internet
  6. thinking about journey of life
  7. dreaming to have great soul-mate or talking/chatting to her/him if you already have one 😀
  8. having fun with your colleagues
  9. drinking coffee
  10. Playing volleyball or any game which is available. yeahhhh!!!
  11.  writing silly blog-posts exactly like this one 😀 😀 😀

 

 

 

A simple script to parse a large file and save the result to a NumPy array

A normal approach:


huge_file = 'huge_file_location'
import re
import numpy as np
my_regex=re.compile(r'tt\d\d\d\d\d\d\d') #using a compiled regex saves the time
a=np.array([]) # just an array to save all the files
with open(file_location,'r') as f: # almost default method to open file
m = re.findall(my_regex,f.read())
np_array = np.append(a,m)
print np_array
print np_array.size
print 'unique'
print np.unique(np_array) # removing duplicate entries from array
print np.unique(np_array).size
np.save('BIG_ARRAY_LOCATION',np.unique(np_array))

In the above code f.read() loads a big chunk of string into memory, which is about 8 GB in the present situation. Let's fire up generators.

A bit improved version:


def read_in_chunks(file_object, chunk_size=1024*1024):
    while True:
        data = file_object.read(chunk_size)  # read a fixed-size chunk, not the whole file
        if not data:
            break
        yield data

import numpy as np
import re

np_array = np.array([])
my_regex = re.compile(r'tt\d\d\d\d\d\d\d')
f = open(huge_file)
for piece in read_in_chunks(f):
    m = re.findall(my_regex, piece)      # but this is still a bottleneck
    np_array = np.append(np_array, m)    # accumulate matches from every chunk
print np_array
print np_array.size
print 'unique'
print np.unique(np_array)
print np.unique(np_array).size

A little bit faster code:


file_location = '/home/metal-machine/Desktop/nohup.out'

def read_in_chunks(file_object, chunk_size=1024*1024):
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data

import numpy as np
import re

np_array = np.array([])
my_regex = re.compile(r'tt\d\d\d\d\d\d\d')
f = open(file_location)

def iterate_regex():
    '''trying to run an iterator over the matched lists of strings as well'''
    for piece in read_in_chunks(f):
        yield re.findall(my_regex, piece)

for i in iterate_regex():
    np_array = np.append(np_array, i)   # accumulate matches from every chunk
print np_array
print np_array.size
print 'unique'
print np.unique(np_array)
print np.unique(np_array).size

But why is the performance still not that good? Hmmm......
I will have to look into a few more things.
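One likely culprit, and this is an assumption worth testing: np.append copies the whole array on every call, so the loop does quadratic work. Here is a sketch that collects matches into a Python set instead and converts to NumPy once at the end (iterating line by line also avoids a match being split across chunk boundaries, assuming each id sits on one line):

import re
import numpy as np

file_location = '/home/metal-machine/Desktop/nohup.out'
my_regex = re.compile(r'tt\d\d\d\d\d\d\d')
unique_ids = set()

with open(file_location, 'r') as f:
    for line in f:                              # the file object is already a lazy iterator
        unique_ids.update(my_regex.findall(line))

np_array = np.array(sorted(unique_ids))
print np_array.size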

Look at the CPU usage while running on a Google 8-core instance:

 

cpu-usage.png

Julia: why and how not to be Python! (GD Task)

 

 

This post is intended to be a rough draft in preparation for a Julia presentation at TCC GNDEC. I am excited.

Now, one thing always comes to mind: why another language/technology? (I am a nerd and a geek as well; that's what I do for a living, passionately!)

There is quite a trend in the field of computer science: everyone wants to become a data scientist, or at least wants to get paid as highly as possible, and DS seems to be the right choice. 😉 (this is really a troll 😀 )

 

First thing: how was Julia born?

The Julia creators wrote, in their post "Why We Created Julia":

 

We want a language that’s open source, with a liberal license. We want the speed of C with the dynamism of Ruby. We want a language that’s homoiconic, with true macros like Lisp, but with obvious, familiar mathematical notation like Matlab. We want something as usable for general programming as Python, as easy for statistics as R, as natural for string processing as Perl, as powerful for linear algebra as Matlab, as good at gluing programs together as the shell. Something that is dirt simple to learn, yet keeps the most serious hackers happy. We want it interactive and we want it compiled.

(Did we mention it should be as fast as C?)

 

WOW!!! That's real: interactive, compiled, with the speed of C, and also easy to learn, as well as general-purpose like Python? 😀 I am laughing, yeah, really laughing.

Well, that's how Julia came to life, or at least that's the real motive behind the Julia project.

 

How do you know Julia is for you?

  1. You are continually involved in computationally intensive work where runtime speed is a major bottleneck. (I just do that with each code block of mine if possible)

2. You use relatively sophisticated algorithms. (sometimes I do that)

3. You write a lot of your own routines from scratch (I don’t do that)

4. Nothing pleases you more than the thought of diving into someone else’s code library and dissecting its internals. (do that ‘a’-lot!)

Most of the tasks listed above I try to do with 'Python' (I am actually in a serious relationship with this tool 😀 😉 because it never puts me down in the day job and just works!)

HOW IS JULIA SO MUCH FASTER????


 

Julia is a really well-thought-out language. While the syntax looks superficially Matlabby (is that really a word?), that is about as far as the similarity goes. Like Matlab, R, and Python, Julia is interactive and dynamically typed, making it easy to get started with programming.

But Julia differs from those languages in a few major ways.

Under the hood, it has a rigorous but infinitely flexible type system, and it calls functions based on "multiple dispatch": different code is automatically chosen based on the types of all the arguments supplied to a function.

(is it some kind of skipping type-checks each time?)

 

When these features are combined with the built-in just-in-time (JIT) compiler,

No GIL.. yeahhh!!!!!

they let code (even scalar for loops, which are famous performance killers in R) run as fast as C or Fortran.

Yes I want it fast LIke SEE (C)

But the real killer is that you can do this with code as concise and expressive as Python.

Still Pythonic!

 

I am excited; are you ready to sail the SHIP on the sea?

 

 

 
