Wednesday, December 2, 2015

Using the new CSS units "vh" and "vw", which are relative to the viewport size, in CSS3

There is a new CSS unit, vh, which is 1/100 of the viewport height.  It might more appropriately have been named cvh, for centi-viewport-height, just as we have the meter and the centimeter, denoted m and cm.
Note that it is not a variable, but a unit, just like px or em.  And because it is a unit, we cannot say:
height: vh;  /* cannot do this */
height: calc(100px - vh);  /* cannot do this */
We always have to put a number in front of it, just as when we say how wide something is: we can say "1 cm", but we cannot usually just say "cm".  If it were a variable, we could say "a – b" or "1a – 1b"; since it is a unit, the number is required.

So the following line:

height: 80vh;  /* this is ok */
is fine.  It means the height is to be 80% of the viewport's height.

Note that this is a “relative unit,” just like em is a relative unit, because it is not absolute.  It is relative to how tall the viewport is.  Note that when the user resizes the browser window to make it shorter, then the vh value will change, and the browser will re-display the elements on the page automatically.
So let’s say we are designing a webpage, with the top part introducing our product or company.  This region is to take up the whole window, except for 100px, in which we want to show half of a 200px-tall sign-up region.  We want to show only the top half of this sign-up region when the page first loads, to show that the page continues below, or just to intrigue the reader to scroll down and read more.
Note that the height of the top region is:
height: calc(100vh - 100px - 8px * 2)
This makes the element as tall as the viewport, minus 100px, with the 8px * 2 compensating for the top and bottom margins of this region.
(The body element of the page has 8px of margin at the top as well, but because vertical margins collapse, the result is just 8px of top margin, as shown at the top of the page.)
The second region just has 200px of height, as a sign-up region, perhaps containing a sign-up message and a sign-up button.
This design would have been possible using JavaScript if we didn’t have this new unit, but with this new unit, we don’t have to rely on JavaScript, and we don’t have to observe the window's resize event to do any redrawing.
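For comparison, the JavaScript fallback that vh saves us from would look roughly like this sketch (the height math mirrors the calc() expression above; the function name and the element id "top-region" are made up for illustration):

```javascript
// A sketch of the JavaScript approach that the vh unit makes unnecessary.
// The math mirrors the CSS above: viewport height, minus the 100px peek
// of the sign-up region, minus the 8px top and bottom margins.
function computeTopHeight(viewportHeight) {
  return viewportHeight - 100 - 8 * 2;
}

// In a browser, we would have to re-apply it on every resize
// (the element id here is hypothetical):
// window.addEventListener("resize", function () {
//   document.getElementById("top-region").style.height =
//       computeTopHeight(window.innerHeight) + "px";
// });
```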
Note that if you resize the window to make it shorter, the elements will be re-drawn according to the new window height.

If a user resizes the window to make it really short, the top section can shrink down and eventually disappear. To prevent that, we can use:
height: calc(100vh - 100px - 8px * 2);
min-height: 200px;
so that it has a minimum height of 200px.

Besides the vh unit, there is also vw, which is 1/100 of the viewport width, as well as vmin, the smaller of vh and vw, and vmax, the larger of vh and vw.  They are just units, and you can use them wherever a length value is expected.  So you can say height: 100vw; that is, you can use vw to describe the height, or vh to describe the width.  You are not limited to using vh only for heights.
Since older browsers may not support calc, vh, and vw, a common fix is to put a fallback rule in front:
height: 600px;
height: calc(100vh - 100px - 8px * 2);
so that the value can fall back to 600px when the browser doesn’t know how to handle the second line.

Monday, November 30, 2015

What is a Promise, and what is the difference between a Promise and a Deferred object? (in JavaScript (ES6), or jQuery)

A promise is an object.
It is an object that can tell you some value (possibly), in the future.
Instead of telling you some value, it can also merely tell you that some lengthy task is done.  But for now, let’s just say it is some value that we want.
It can be some value that needs to be fetched from the Internet, or it can be some complex calculation that can take a long time, and the Promise object can possibly tell you later.
It is an observable, as in the Observer Pattern.
In JavaScript that supports ES6, such as the current version of the Chrome browser, you can get a promise by:
var somePromise = new Promise(function( ... ) { ... });
The function passed to the constructor is the “worker” that will somehow get the value, for this promise object.
Note that the promise object is obtained immediately: it is created and returned to you right away, even though the expected value is not there yet and the lengthy task is not done yet.
Now we have this promise object.  And it can be observed:
somePromise.then(function(someValue) { ... });
This is to tell the promise object: “let me know when you succeeded in getting the value (or have finished the task)”.
If you use the “catch” method:
somePromise.catch(function() { ... });
then it is to say: “let me know when you cannot keep the promise (of providing me the data or finishing the task, for some reason).”
Because the then() method and the catch() method each return a promise, we can chain them together:
somePromise.then(fn1).catch(fn2);
or we can register observers on the same promise multiple times:
somePromise.then(fn1);
somePromise.then(fn2);
and fn1 and fn2 will be invoked in the sequence we registered them, when the promise has succeeded.
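A small runnable sketch of the two registration styles (plain ES6, no jQuery; the order array is just for demonstration):

```javascript
// Registering two observers on the same promise: they run in
// registration order once the promise resolves.
var order = [];
var p = Promise.resolve("the value");

p.then(function (v) { order.push("fn1 got " + v); });
p.then(function (v) { order.push("fn2 got " + v); });

// Chaining instead: the second callback receives the first one's
// return value, not the original value.
p.then(function (v) { return v.toUpperCase(); })
 .then(function (v) { order.push("chained got " + v); });
```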
Now, remember the function we pass to the Promise constructor?  That’s the worker to get the value, or to finish some lengthy task.
var somePromise = new Promise(function(resolve, reject) { ... });
The function actually takes two parameters: a resolve function and a reject function.
When that worker finally gets the value, or finishes the task, it will say, “hey, I honor the promise”, by invoking:
resolve(someValue);
or when it cannot get the promised value or finish the lengthy task, it will say, “oh, I cannot keep the promise”, by invoking:
reject();
I may also name the “resolve” function “honor_with”, and the “reject” function “not_keep”, to make keeping or breaking a promise easier to visualize.
So let’s look at an example:
Here, we create a promise object:
var promise = new Promise(function (honor_with, not_keep) { ... });
Note that the promise object is instantiated and returned immediately.  The function is invoked right away, but it should only set up or initiate the lengthy task: for example, start the AJAX call to fetch some data, or set up the calculation of something complicated, but not actually do all those calculations at this moment.
Now, as a user of this promise, we can say:
promise.then(function (foo) {
    $("body").append("<div>Person A: wow I just know that the promise was kept, with the data being " + foo + ", at " + (new Date()).toLocaleTimeString() + "</div>");
});

Note that jQuery is used, but merely to display something in the document body.  (In other words, the promise we are talking about is not a jQuery promise, but an ECMAScript promise.)
Now, we can simulate some lengthy task, by having the worker doing something, every second:
var timerID = setInterval(function () {
    $("body").append("<div>Person B: I am thinking or not doing anything, at " + (new Date()).toLocaleTimeString() + "</div>");
}, 1000);
Right now it is not doing any real work, just adding a message to the screen each second, to show that it could be doing something, for demo purposes.
Note that if we fetch data from the Internet, we would only set up the AJAX call and register the callback function, and we wouldn’t use setInterval() or setTimeout() at all.  The setInterval() here merely simulates doing something, such as complex computations or a task that may take a few seconds or more.
Now, let’s also register an observer with the promise: when something goes wrong or you cannot keep the promise, let me know:
promise.catch(function () {
    $("body").append("<div>Person A: wow I just know that the promise was not kept, at " + (new Date()).toLocaleTimeString() + "</div>");
});
So now let’s pretend that some calculations are done every second, by just generating some random number from 0 to 9999.  When the number is greater than 9000, we treat it as: we finally finished the calculations and now can provide you with the data we promised you:
var i = Math.round(Math.random() * 10000);

if (i > 9000) {
    clearInterval(timerID);
    honor_with(i);
}
The clearInterval() is to stop this function from being called again.
Note that when the script runs (and you can run it several times to see), the result window will show that the “worker” may be working, and no promised data is provided yet. But after several lines of such message, if the random number is greater than 9000, then the observer will be notified about the promised data. This is done by the worker invoking honor_with(i), or in a more traditional terminology: resolve(i).
Now let’s pretend that something could go wrong: if fetching the data from the Internet or some calculation resulted in an error, how do we say we can’t keep the promise?
if (i > 9000) {
    clearInterval(timerID);
    honor_with(i);
} else if (i < 1000) {
    clearInterval(timerID);
    not_keep();
}
So we simulate it by checking whether the random number is less than 1000.  If so, we pretend that it is some kind of error and invoke not_keep(), or in more traditional terminology, reject().
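Putting the fragments together, here is a runnable, deterministic variant of the whole example (a sketch: the random numbers are replaced with a fixed list of values so the outcome is repeatable, and makeWorkerPromise is a name made up for this illustration):

```javascript
// Same worker logic as above, but fed a fixed sequence of "calculation
// results" instead of random numbers, so each run behaves the same.
function makeWorkerPromise(values, intervalMs) {
  return new Promise(function (honor_with, not_keep) {
    var index = 0;
    var timerID = setInterval(function () {
      var i = values[index++];
      if (i > 9000) {
        clearInterval(timerID);
        honor_with(i);                 // traditionally: resolve(i)
      } else if (i < 1000) {
        clearInterval(timerID);
        not_keep("error below 1000");  // traditionally: reject(...)
      }
      // otherwise: still "thinking", keep going
    }, intervalMs);
  });
}

// 5000 means "still working"; 9500 finally honors the promise.
makeWorkerPromise([5000, 9500], 100).then(function (value) {
  console.log("promise kept with " + value);
});
```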
So that’s it.  The overview is: a promise is an object that promises to produce some data or finish some lengthy task, and any number of observers can say: yes, I am interested in knowing this data at a later time, and interested in knowing if you cannot keep the promise — please notify me when it happens.  You will also see in the result window that Person A is the user of the promise object, while Person B is the worker, finding the data or doing some lengthy task.
In jQuery, the promise object and the resolve and reject functions are lumped together as a deferred object.  It is described this way: the deferred object is a superset of the promise object.  So the deferred object can notify the observers via resolve() or reject(), and at the same time, observers can register with the deferred object to be notified.
However, it is much more proper to pass around the promise object (not the deferred object) so that the observers can register.  If you pass around the deferred object, who knows whether one of those observers might do a resolve or reject incorrectly?  Only the worker should resolve or reject; users of the promise should only register themselves to observe it, not resolve or reject it.

As Terry Jones and Nicholas Tollervey say in the book on jQuery Deferreds, it is "create a deferred, but return a promise."

I like to think of it this way: the Deferred class is a blueprint to instantiate a deferred object.  The deferred object encapsulates the capabilities of: (1) giving out the promise object (which is associated with the promised value or task), so that it can be observed by others, and (2) setting the state of the promise to be completed or incomplete, by using the resolve or reject methods, which in turn will notify the observers.
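This split of capabilities can be sketched in plain ES6, without jQuery (makeDeferred and fetchAnswer are names made up for illustration; jQuery's $.Deferred has more features, such as notify/progress):

```javascript
// A minimal "deferred": it owns the resolve/reject capability,
// and gives out only the promise for others to observe.
function makeDeferred() {
  var resolveFn, rejectFn;
  var promise = new Promise(function (resolve, reject) {
    resolveFn = resolve;
    rejectFn = reject;
  });
  return {
    promise: function () { return promise; },  // hand this out to observers
    resolve: function (value) { resolveFn(value); },
    reject: function (reason) { rejectFn(reason); }
  };
}

// "Create a deferred, but return a promise":
function fetchAnswer() {
  var deferred = makeDeferred();
  setTimeout(function () { deferred.resolve(42); }, 10);
  return deferred.promise();  // callers can observe, but cannot resolve/reject
}
```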

To visualize it, think of a person, let's call him Mr. D for Deferred.  You come to Mr. D for a result, and he gives you a beeper that will notify you when the result is ready (or when the result can't be given), just as a restaurant gives you a beeper that goes off when a table is ready.  Now, Mr. D also accepts two messages.  One is a resolve message, telling him: yes, the result is ready and it is ______.  The other message Mr. D accepts is a reject, meaning that the result can't be provided.  When Mr. D receives either of these messages, he notifies all the people who took a beeper.  The beeper is the promise.  Mr. D is the deferred.
In jQuery, if we already have a deferred object:
var deferred = $.Deferred();
Then to get the promise object, we can invoke the promise() method on the deferred object:
var promise = deferred.promise();
Also note that since jQuery 1.5, jQuery.ajax(), or $.ajax() returns a jqXHR object, which implements the promise interface. So we can actually treat it as a promise, in such a way:
$.ajax({ ... }).done(fn1).fail(fn2).always(fn3);
Note that there are 3 states of a promise object. Before any result is known or before the task is completed, the state is pending. When the task is completed as a success, the state of the promise object shall be set as resolved. And when some error has occurred, then the state of the promise object shall be set as rejected.  So a promise is always in one of these 3 states: pending, resolved, or rejected.

If we use jQuery’s Deferred and Promise to write the code, it will be something like:
var deferred = $.Deferred(),
    promise = deferred.promise();

var timerID = setInterval(function () {

    var i = Math.round(Math.random() * 10000);

    if (i > 9000) {
        clearInterval(timerID);
        deferred.resolve(i);
    } else if (i < 1000) {
        clearInterval(timerID);
        deferred.reject();
    }

    $("main").append("Person B: I am thinking or not doing anything, at " + (new Date()).toLocaleTimeString() + "\n");

}, 1000);

promise.done(function (foo) {
    $("main").append("Person A: wow I just know that the promise was kept, with the data being " + foo + ", at " + (new Date()).toLocaleTimeString() + "\n");
});

promise.fail(function () {
    $("main").append("Person A: wow I just know that the promise was not kept, at " + (new Date()).toLocaleTimeString() + "\n");
});
Using jQuery, we can also observe how much of the promise is done, via progress notifications.
Consider the following situation: the promise is to walk 30 steps or more, without ever walking exactly 10 steps at one time.  Each second, the worker walks 1 to 10 steps.  If he ever finds he walked exactly 10 steps at one time, he considers it too much and declares the promise not kept.  When he has walked 30 steps or more, and no single round was exactly 10 steps, he considers the promise kept.
We can use jQuery deferred’s notify() method, which is to notify any interested observer of any progress. The observer will register itself using the progress() method, on the promise:
var deferred = $.Deferred(),
    promise = deferred.promise();

var totalSteps = 0;

var timerID = setInterval(function () {

    var singleSteps = 1 + Math.floor(Math.random() * 10);

    totalSteps += singleSteps;

    deferred.notify({
        singleSteps: singleSteps,
        totalSteps: totalSteps
    });

    if (singleSteps === 10) {
        clearInterval(timerID);
        deferred.reject();
    } else if (totalSteps >= 30) {
        clearInterval(timerID);
        deferred.resolve(totalSteps);
    }

}, 1000);

promise.done(function (foo) {
    $("main").append("Person A: wow I just know that the promise was kept, with " + foo + " steps walked, at " + (new Date()).toLocaleTimeString() + "\n");
});

promise.fail(function () {
    $("main").append("Person A: wow I just know that the promise was not kept, at " + (new Date()).toLocaleTimeString() + "\n");
});

promise.progress(function (info) {
    $("main").append("Person A: wow Person B told me he just walked " + info.singleSteps + " steps, for a total of " + info.totalSteps + " steps, at " + (new Date()).toLocaleTimeString() + "\n");
});
To show a progress bar using the HTML5 progress element, we can use:
promise.progress(function (info) {
    $("#walking-progress").val(info.totalSteps / totalStepsToFinish);
});
where totalStepsToFinish would be 30 in this example.

Tuesday, October 2, 2012

JavaScript's Pseudo Classical Inheritance diagram

The following is a chart of JavaScript's pseudo-classical inheritance.  The constructor
Foo plays the role of a class name for an imaginary class.  The foo object is an instance of Foo.

Note that the prototype property in Foo.prototype does not itself form a prototype chain.  Foo.prototype points to somewhere in a prototype chain, but this prototype property of Foo is not what forms the chain.  What constitutes a prototype chain are the __proto__ references pointing up the chain, and the objects pointed to by __proto__: going from foo.__proto__, up to foo.__proto__.__proto__, and so forth, until null is reached.

JavaScript's pseudo-classical inheritance works this way: I am a constructor, and I am just a function, and I hold a prototype reference; whenever foo = new Foo() is called, I will make foo.__proto__ point to my prototype object.  So Foo.prototype and foo.__proto__ are two different concepts.  Foo.prototype indicates where the prototype chain of a newly created Foo object should point -- that is, foo.__proto__ will point to whatever Foo.prototype is pointing at.

In the ECMA-262 Edition 5.1 spec, the term [[Prototype]] is used.  And that's the same as __proto__.  It is often mentioned as the "[[Prototype]] internal property".  And don't confuse this with a function's prototype property.  One of the key points regarding [[Prototype]] is: "All objects have an internal property called [[Prototype]]. The value of this property is either null or a reference to an object and is used for implementing inheritance."

And now we can see from the diagram why when we inherit Dog from Animal, we would do:

    function Dog() {}    // the usual constructor function
    Dog.prototype = new Animal();
    Dog.prototype.constructor = Dog;

It is to say: first of all, define a function Dog.  This function is an object, as every function in JavaScript is an object.  Now, add a property prototype to the object Dog.  And then, new Animal() will create a new object (and it will become part of the prototype chain), and since this new object is created using the Animal constructor, this new object's __proto__ will point to where Animal.prototype is pointing to.  Now, make Dog.prototype point to this new object.  And that's how the object pointed to by Dog.prototype is created as shown in the diagram.

Next, since that new object was created by the Animal constructor, the new object's constructor property will point to Animal (an object created by constructor Foo will have a constructor property pointing to Foo).  This constructor property is not Dog.prototype's own property -- it is the own property of Dog.prototype.__proto__, so it is an inherited property.  Note that in this case, we actually want the Dog.prototype object's constructor property to point to Dog, and that's why we have the second line of code above: Dog.prototype.constructor = Dog;

Note that we can use an empty function F() to set up the above relationship as well:

    function Dog() {}    // the usual constructor function
    function F() {}
    F.prototype = Animal.prototype;
    Dog.prototype = new F();
    Dog.prototype.constructor = Dog;

This is useful because Animal() may take a long time or run more complex logic, and it can also create properties in that new object that we don't need.  All we need is a way to set up Dog.prototype.__proto__ to point at Animal.prototype, and F() accomplishes exactly that.
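To see the effect, here is a runnable sketch with the F() trick packaged as a small helper (essentially what Object.create(proto) does for this case; the helper name createObject is made up here):

```javascript
// The F() trick as a helper: create an object whose __proto__
// is the given prototype, without running any heavy constructor.
function createObject(proto) {
  function F() {}
  F.prototype = proto;
  return new F();
}

function Animal() { this.legs = 4; }  // constructor side effect we want to skip
function Dog() {}

Dog.prototype = createObject(Animal.prototype);
Dog.prototype.constructor = Dog;

var d = new Dog();
// d inherits from Animal.prototype, yet Animal() itself never ran,
// so d has no own "legs" property.
```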

If there are 3 Dog instances, they would point to the middle of that long prototype chain.  It is still a complete prototype chain, but a shorter one:

Now we can understand why when we add a method to the Animal class, we would use

    Animal.prototype.move = function() { ... };

That's because when we say

    woofie.move();

JavaScript needs to find the move method.  If the woofie object doesn't have its own move method, the lookup goes up the prototype chain, just as in any prototypal inheritance scenario: first to the object pointed to by woofie.__proto__, which is the same object that Dog.prototype refers to.  If move is not a property of that object (meaning that the Dog class doesn't have a move method), go up one more level in the prototype chain, to woofie.__proto__.__proto__, which is the same as Animal.prototype.  Remember we already did

    Animal.prototype.move = function() { ... };

earlier, and so now move is found, and the method can be invoked.

Note again that this is how prototypal inheritance works, and see how "classical inheritance" is simulated: by the help of prototypal inheritance.

Using the diagram, we can also see how instanceof works.  foo instanceof Animal is true, because we take foo's whole prototype chain, and the Animal.prototype object is part of that chain.  woofie instanceof Animal is true for a similar reason: take woofie's whole prototype chain, and the Animal.prototype object is part of that chain.  woofie instanceof Bichon is false because Bichon.prototype is not part of that chain.  Note that woofie.__proto__ instanceof Animal is true, the same as Dog.prototype instanceof Animal, because instanceof checks whether the right operand's prototype object is part of the left operand's prototype chain.  (Note that Dog.prototype instanceof Dog used to be true, but this changed in later implementations of JavaScript: instanceof walks up the chain starting above the left operand, so the left operand itself is not checked against Dog.prototype.)
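The instanceof behavior described here can be verified directly (a runnable sketch; Object.create stands in for the F() trick, and Bichon is the imaginary unrelated class from the diagram):

```javascript
function Animal() {}
Animal.prototype.move = function () { return "moving"; };

function Dog() {}
Dog.prototype = Object.create(Animal.prototype);
Dog.prototype.constructor = Dog;

function Bichon() {}

var woofie = new Dog();

woofie.move();                                    // "moving", found on Animal.prototype
woofie instanceof Dog;                            // true
woofie instanceof Animal;                         // true
woofie instanceof Bichon;                         // false
Dog.prototype instanceof Animal;                  // true
Object.getPrototypeOf(woofie) === Dog.prototype;  // true
woofie.hasOwnProperty("constructor");             // false -- it is inherited
```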

Note that in reality, each constructor function has a __proto__ property as well, and if the Function constructor is also shown here,  a more complete picture is:

Even though foo.constructor === Foo, the constructor property is not foo's own property.  It is actually found by going up the prototype chain, to the object foo.__proto__ points at.  The same goes for Function.constructor.  The diagram can get complicated, and sometimes confusing, when we see Constructor.prototype, foo.__proto__, and Foo.prototype.constructor together.  Note that Firefox, Chrome, Safari, and node.js support __proto__, but IE doesn't; the same object can be obtained with Object.getPrototypeOf(foo).  (IE 9 or above is needed.  Before IE 9, it can be defined as in John Resig's post, which requires that the constructor property is set properly.)  To verify the diagram, note that even though foo.constructor shows a value, constructor is not foo's own property but is found by following the prototype chain, as foo.hasOwnProperty("constructor") can tell.

Friday, September 14, 2012

Creating Bitmap Context for Retina and regular iOS devices

On a Retina display iOS device, the resolution is 4 times that of a regular display (double in each dimension).  How can a bitmap context be created that makes use of the higher resolution, while on a regular display a regular bitmap context is used so that memory isn't wasted?

The answer is to use [[UIScreen mainScreen] scale], and create the bitmap context accordingly.  But must every drawing routine then take special care when drawing into this bitmap context, now that the pixels on the x and y axes have both doubled?

The solution is that we can just apply a transform, and everything will be taken care of.  With it, drawing routines do not need to tailor to any particular scale: moving to the point (300, 300) will actually move to pixel (600, 600), but the drawing code can just use (300, 300) on both a regular and a Retina device.  The code is:

float scaleFactor = [[UIScreen mainScreen] scale];
CGSize size = CGSizeMake(768, 768);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL,
                           size.width * scaleFactor, size.height * scaleFactor,
                           8, size.width * scaleFactor * 4, colorSpace,
                           kCGImageAlphaPremultipliedLast);  // RGBA, premultiplied alpha
CGColorSpaceRelease(colorSpace);
CGContextScaleCTM(context, scaleFactor, scaleFactor);

Note that the last line, the CGContextScaleCTM, is important.  It does the work of making (300, 300) be the actual pixel (600, 600) on a Retina device.  The CGSizeMake(768, 768) line is how big you'd like the bitmap context to be; it works on a regular display and is automatically scaled up for a Retina display in the code above.

Thursday, September 13, 2012

Objective-C Manual Retain Release

Manual Retain Release in Objective-C is not really that hard, but it comes with the following precise rules:

Our motivation is:
  • We would like to hold onto an object while at least one reference is pointing to it.
  • We would like to free an object's memory when zero references are pointing to it.

The mechanism is:
  • We use retain count to make this work
  • When an object is alloc'ed by [Foo alloc], the retain count is 1
  • When an object is created by [Foo new], it is the same as [[Foo alloc] init], and the retain count is also 1
  • When an object is copied, the retain count of the new object is 1
  • When the retain count is 1 or greater, that means the object should stay around.
  • When we send the retain message to an object, the retain count is incremented by 1.  This is how we send the retain message to obj: [obj retain];
  • What about decrementing the count?  It is by sending the release message to the object: [obj release];
  • When this release message is sent to the object, the retain count is first decremented by 1, and when the retain count reaches 0, the system will send the dealloc message to the object.  That is, the system will do [obj dealloc]; for you.  So in the object's dealloc method that you define, make sure to clean up any other objects you keep around for the current object.  Then, call [super dealloc], so that the superclass can clean up at each higher level of the class hierarchy.  When it finally reaches [NSObject dealloc], the actual memory (RAM, or think of it as virtual memory) is released (freed up), and becomes available for your app or other apps to use again.

That's it.  So match the alloc, new, copy, retain, release, so that there is a balance.  Don't retain too much, and don't release too much (or too early).

Some things to note:
  • Never call dealloc yourself, except to call the superclass's dealloc: [super dealloc].  The system calls dealloc for you when, while performing [obj release], it finds the retain count to be 0 after that release.
  • You can check the retain count by [obj retainCount], although you should never use this number to do memory management.  It is only for understanding the mechanism and for experimenting and checking to see how the retain count increased or decreased.

For factory methods:
  • Factory methods, such as stringWithFormat:, need to alloc a string and return it.  So such a method cannot do a [str release], because the object cannot be freed up yet.  But an alloc has to be matched with a release, so how can this be solved?  This is the way:
  • There is an autorelease pool for each iteration of the app's main loop.  When we do [[[NSString alloc] initWithFormat: ...] autorelease], the retain count of the str object remains 1, but the object is added to the autorelease pool.  When the caller gets back the string, it will retain it to hold onto the object, so that the object is not freed up.  At this point, the retain count is 2.  When all application events are handled, the system drains the autorelease pool, making the retain count of str become 1, and now everything is in good order.  The system later starts an autorelease pool again, handles all app-related events, and at the end of that iteration, drains the autorelease pool again.
  • Note that every time you send the autorelease to that object, the autorelease pool will keep a number as to how many times to send the release message to the object when the pool drains.  So for example, if [[obj retain] autorelease]; is done 10 times, the retain count of the object will increase by 10, and when the autorelease pool drains, the object will be sent the release message 10 times.  So autorelease doesn't decrease the retain count immediately.  It decreases the retain count later.  A good way to think of autorelease is to think of it the same as a release, but deferred.

For @property:

  • If the property attribute is retain or copy (or strong, which is part of ARC), then the setter will automatically hold onto the new object (retaining it once), and the previous object pointed to by the property will be released once.
  • If the property attribute is assign or unsafe_unretained (or weak, which is ARC), then the setter will not increase the object's retain count, as the property is not trying to hold onto the object (no ownership is claimed).

Retain count with Cocoa and Cocoa-Touch (iOS) frameworks:

  • The frameworks work naturally with the retain count, so that when an object is added to a collection, such as an NSMutableArray, the retain count is increased by 1, to let the array hold onto the object.
  • In UIKit, such as when a UIView object is added as a subview, its retain count is increased by 1.  When this subview is removed from the superview, the retain count of the subview is decreased by 1.
  • When a certain array element is replaced by another object reference, the old object is released once, while at the same time, the new object is retained once.  This works the same way as a retain property.

Sunday, April 3, 2011

Ruby on Rails time zone

The following are all the time zones in Ruby on Rails 3 (as of Rails 3.0.5).

We can set it in config/application.rb, as

config.time_zone = 'Pacific Time (US & Canada)'

Note that it needs to be the exact string -- if we use:

config.time_zone = 'Pacific Time'      # NOT GOOD

it will stop the server from starting up.

The line = 'Pacific Time (US & Canada)' can change the time zone in the view, but in the controller, the time zone will still be UTC.

The way to list all time zones in Rails is:

$ rake time:zones:all

* UTC -11:00 *
International Date Line West
Midway Island

* UTC -10:00 *

* UTC -09:00 *

* UTC -08:00 *
Pacific Time (US & Canada)

* UTC -07:00 *
Mountain Time (US & Canada)


* UTC +00:00 *


* UTC +08:00 *
Hong Kong
Kuala Lumpur
Ulaan Bataar