Friday, December 28, 2018

Complete Angular 5 tutorial link


Important Links:
  1. https://blog.codewithdan.com/10-angular-and-typescript-projects-to-take-you-from-zero-to-hero/ (********)
  2. http://jasonwatmore.com/post/2016/09/29/angular-2-user-registration-and-login-example-tutorial
  3. http://bennadel.github.io/JavaScript-Demos/demos/router-lazy-load-modules-indicator-angular6/#/app/(aside:aside) (*** lazy loading loader)
  4. https://www.bennadel.com/blog/3505-showing-a-loading-indicator-for-lazy-loaded-route-modules-in-angular-6-1-7.htm (*** lazy loading loader)

Monday, December 17, 2018

MySQL scheduler and different if-else conditions -- done in the BJIT So Posting project


DROP EVENT IF EXISTS bank_payment_pending_notification_scheduler;
DELIMITER //
CREATE EVENT bank_payment_pending_notification_scheduler
    ON SCHEDULE EVERY 1 HOUR
    STARTS CURRENT_TIMESTAMP
    DO BEGIN
        DECLARE v_order_no VARCHAR(255);
        DECLARE v_customer_id VARCHAR(255);
        DECLARE v_first_name VARCHAR(255);
        DECLARE v_last_name VARCHAR(255);
        DECLARE v_cursor_end INT DEFAULT 0;

        DECLARE v_sender_id VARCHAR(255) DEFAULT '';
        DECLARE v_recipient_id VARCHAR(255) DEFAULT '';
        DECLARE v_title TEXT DEFAULT '';
        DECLARE v_body TEXT DEFAULT '';

        DECLARE notification_list_cursor CURSOR FOR
            SELECT
                orders.order_no, orders.customer_id, orders.first_name, orders.last_name
            FROM orders LEFT JOIN (
                    SELECT DISTINCT attribute->"$.order_no" AS order_no
                    FROM notifications
                    WHERE attribute->"$.notification_created_for_status" = 'bank_payment_pending'
                        AND type_of_notification = 'Internal'
                ) AS already_notify ON already_notify.order_no = orders.order_no
            WHERE
                payment_status = 'pending'
                AND payment_method = 'Bank Transfer'
                AND order_date < NOW() - INTERVAL 2 DAY
                AND already_notify.order_no IS NULL;
                
        DECLARE CONTINUE HANDLER FOR NOT FOUND SET v_cursor_end=1;
        
        OPEN notification_list_cursor;
            notification_loop: LOOP
                FETCH notification_list_cursor INTO v_order_no,v_customer_id, v_first_name,v_last_name;
                IF v_cursor_end THEN
                    LEAVE notification_loop;
                END IF;
                
                INSERT INTO notifications(sender_id, recipient_id, title_html, body_html, type_of_notification, attribute, is_read, created_at)
                VALUES ('System',
                        v_customer_id,
                        CONCAT('Bank payment pending for order no: ',v_order_no),
                        CONCAT('Bank payment pending for order no: ',v_order_no),
                        'Internal',
                        CONCAT('{"email_send": 1,"order_no":"',v_order_no,'", "notification_created_for_status":"bank_payment_pending"}'),
                        0 , now() );                

            END LOOP notification_loop;
        CLOSE notification_list_cursor;
        SET v_cursor_end=0;
    END; //
DELIMITER ;


-- SET v_recipient_id = v_customer_id;
-- SET v_sender_id = 'zia.uddin@bjitgroup.com';
-- SET v_title = CONCAT('Bank payment pending for order no: ',v_order_no);
-- SET v_body = CONCAT('Bank payment pending for order no: ',v_order_no);

-- CALL mysql.lambda_async('arn:aws:lambda:us-east-1:637895362620:function:spw_send_email_notification',
--  CONCAT('{"recipient_id" : "',v_recipient_id,'","sender_id" : "',v_sender_id,'","title" : "',v_title,'","body" : "',v_body,'"}')
-- );


-- SELECT
-- order_no, customer_id, first_name, last_name
-- FROM orders
-- WHERE
-- payment_status = 'pending'
-- AND payment_method = 'Bank Transfer'
-- AND order_date < NOW() - INTERVAL 2 DAY
-- AND order_no NOT IN (SELECT DISTINCT attribute->"$.order_no" AS order_no FROM notifications WHERE attribute->"$.order_no" IS NOT NULL );

-- INSERT INTO notifications(sender_id, recipient_id, title_html, body_html, type_of_notification, attribute, created_at) VALUES ('System',v_customer_id, '','', 'Email', CONCAT('{"is_notify": 1,"order_no":"',v_order_no,'"}'), now() );
-- CALL mysql.lambda_async('arn:aws:lambda:eu-west-1:637895362620:function:spwSendEmailNotificationForPendingPayment',
-- CONCAT('{"order_no" : "',v_order_no,'","customer_id" : "',v_customer_id,'","first_name" : "',v_first_name,'","last_name" : "',v_last_name,'"}')
-- );


---------------------------------------------------------------------------------------------------
-- end_of_mission
-- run daily

-- testing
-- ON SCHEDULE EVERY 1 MINUTE
-- STARTS CURRENT_TIMESTAMP
-- ENDS CURRENT_TIMESTAMP + INTERVAL 1 HOUR

-- exact time schedule
-- ON SCHEDULE EVERY 1 DAY
-- STARTS CURRENT_TIMESTAMP

DROP EVENT IF EXISTS end_of_mission_notification_scheduler;
DELIMITER //
CREATE EVENT end_of_mission_notification_scheduler
    ON SCHEDULE EVERY 1 DAY
    STARTS CURRENT_TIMESTAMP
    DO BEGIN
    
        DECLARE v_order_no VARCHAR(255);
        DECLARE v_mission_ref_no VARCHAR(255);
        DECLARE v_customer_id VARCHAR(255);
        DECLARE v_child_customer_id VARCHAR(255);
        DECLARE v_temp_recepient_id VARCHAR(255);
        DECLARE v_end_date VARCHAR(255);

        DECLARE v_cursor_end INT DEFAULT 0;

        DECLARE notification_list_cursor CURSOR FOR
                SELECT
                    order_no, missions.mission_ref_no,missions.customer_id, missions.child_customer_id,(JSON_EXTRACT(`mission_data`, '$.end_date')) end_date
                FROM
                    missions LEFT JOIN (
                            SELECT DISTINCT
                                attribute->"$.mission_ref_no" AS mission_ref_no,
                                attribute->"$.customer_id" AS customer_id
                            FROM notifications
                            WHERE attribute->"$.notification_created_for_status" = 'end_of_mission'
                                AND type_of_notification = 'Internal'
                        ) as already_notify on already_notify.mission_ref_no = missions.mission_ref_no AND already_notify.customer_id = missions.customer_id
                WHERE
                    STR_TO_DATE( SPLIT_STR( JSON_UNQUOTE(JSON_EXTRACT(`mission_data`, '$.end_date')), 'T', 1),'%Y-%m-%d') <= NOW() + INTERVAL 7 DAY
                    AND STR_TO_DATE( SPLIT_STR( JSON_UNQUOTE(JSON_EXTRACT(`mission_data`, '$.end_date')), 'T', 1),'%Y-%m-%d') >= NOW()
                    AND already_notify.mission_ref_no is NULL;

        DECLARE CONTINUE HANDLER FOR NOT FOUND SET v_cursor_end=1;
        
        OPEN notification_list_cursor;
            notification_loop: LOOP
                FETCH notification_list_cursor INTO v_order_no,v_mission_ref_no, v_customer_id, v_child_customer_id, v_end_date;
                IF v_cursor_end THEN
                    LEAVE notification_loop;
                END IF;

                SET v_temp_recepient_id = v_customer_id;
                IF v_child_customer_id IS NOT NULL AND v_child_customer_id != '' THEN
                    SET v_temp_recepient_id = v_child_customer_id;
                END IF;

                INSERT INTO notifications(sender_id, recipient_id, title_html, body_html, type_of_notification, attribute, is_read, created_at)
                VALUES ('System',
                        v_temp_recepient_id,
                        CONCAT('The end date of missions ',v_mission_ref_no, ' is ',SPLIT_STR( JSON_UNQUOTE(v_end_date), 'T', 1)),
                        CONCAT('The end date of missions ',v_mission_ref_no, ' is ', SPLIT_STR( JSON_UNQUOTE(v_end_date), 'T', 1)),
                        'Internal',
                        CONCAT('{"email_send":0,"mission_ref_no":"',v_mission_ref_no,'", "customer_id":"',v_customer_id,'", "notification_created_for_status":"end_of_mission" }'),
                        0,
                        now() );

            END LOOP notification_loop;
        CLOSE notification_list_cursor;
        SET v_cursor_end=0;
    END; //
DELIMITER ;

-------------------------------------------------------------------------------
    
-- run every month  
-- ON SCHEDULE EVERY 1 MINUTE
--      STARTS CURRENT_TIMESTAMP
--      ENDS CURRENT_TIMESTAMP + INTERVAL 1 HOUR

-- ON SCHEDULE EVERY 1 DAY
--      STARTS CURRENT_TIMESTAMP
-- DAILY
DROP EVENT IF EXISTS sipsi_doc_not_yet_uploaded_notification_scheduler;
DELIMITER //
CREATE EVENT sipsi_doc_not_yet_uploaded_notification_scheduler
    ON SCHEDULE EVERY 1 DAY
    STARTS CURRENT_TIMESTAMP
    DO BEGIN
        CALL mysql.lambda_async(get_notification_configs_bykey('spw_sipsi_doc_not_yet_uploaded_notification_api_arn'),CONCAT('{"backoffice_service_url" : "',get_notification_configs_bykey('backoffice_service_url'),'"}'));
        
    END; //
DELIMITER ;

-- ----- SQL query for SIPSI doc not uploaded -----


-- SELECT
-- vmil.customer_id,vmil.mission_ref_no,
-- vmil.mission_data->'$.start_date' start_date,
-- vmil.mission_data->'$.end_date' end_date,
-- CONCAT(mempd.first_name,' ', mempd.last_name) emp_name,
-- vmil.mission_data->'$.number_of_workers' number_of_workers,
-- mempd.doc_upload_status->"$.Once" once_file_status
-- FROM
-- view_mission_list AS vmil
-- left join mission_employee_details AS mempd
-- on vmil.customer_id = mempd.customer_id AND vmil.mission_ref_no = mempd.mission_ref_no
-- WHERE
-- STR_TO_DATE( SPLIT_STR( JSON_UNQUOTE(JSON_EXTRACT(`mission_data`, '$.start_date')), 'T', 1),'%Y-%m-%d') = '2018-12-27'
-- and vmil.payment_status = 'success';




-- SELECT
-- vmil.customer_id,vmil.mission_ref_no,
-- vmil.mission_data->'$.start_date' start_date,
-- vmil.mission_data->'$.end_date' end_date,
-- CONCAT(mempd.first_name,' ', mempd.last_name) emp_name,
-- vmil.mission_data->'$.number_of_workers' number_of_workers,
-- mempd.doc_upload_status->"$.m_1218" monthly_file_status
-- FROM
-- view_mission_list AS vmil
-- left join mission_employee_details AS mempd
-- on vmil.customer_id = mempd.customer_id AND vmil.mission_ref_no = mempd.mission_ref_no
-- WHERE
-- (
-- (STR_TO_DATE( SPLIT_STR( JSON_UNQUOTE(JSON_EXTRACT(`mission_data`, '$.start_date')), 'T', 1),'%Y-%m-%d') >= '2018-12-01' AND STR_TO_DATE( SPLIT_STR( JSON_UNQUOTE(JSON_EXTRACT(`mission_data`, '$.end_date')), 'T', 1),'%Y-%m-%d') <= '2018-12-31')
-- OR
-- (STR_TO_DATE( SPLIT_STR( JSON_UNQUOTE(JSON_EXTRACT(`mission_data`, '$.start_date')), 'T', 1),'%Y-%m-%d') <= '2018-12-28' AND STR_TO_DATE( SPLIT_STR( JSON_UNQUOTE(JSON_EXTRACT(`mission_data`, '$.end_date')), 'T', 1),'%Y-%m-%d') >= '2018-12-29')
-- OR
-- (STR_TO_DATE( SPLIT_STR( JSON_UNQUOTE(JSON_EXTRACT(`mission_data`, '$.start_date')), 'T', 1),'%Y-%m-%d') >= '2018-12-01' AND STR_TO_DATE( SPLIT_STR( JSON_UNQUOTE(JSON_EXTRACT(`mission_data`, '$.start_date')), 'T', 1),'%Y-%m-%d') <= '2018-12-31')
-- OR
-- (STR_TO_DATE( SPLIT_STR( JSON_UNQUOTE(JSON_EXTRACT(`mission_data`, '$.end_date')), 'T', 1),'%Y-%m-%d') >= '2018-12-01' AND STR_TO_DATE( SPLIT_STR( JSON_UNQUOTE(JSON_EXTRACT(`mission_data`, '$.end_date')), 'T', 1),'%Y-%m-%d') <= '2018-12-31')
-- )
-- and vmil.payment_status = 'success';
    

------------------------------------------------------------------------


-- end_of_mission email notification
-- run hourly

-- ON SCHEDULE EVERY 1 MINUTE
-- STARTS CURRENT_TIMESTAMP
-- ENDS CURRENT_TIMESTAMP + INTERVAL 1 HOUR


DROP EVENT IF EXISTS email_notification_scheduler;
DELIMITER //

CREATE EVENT email_notification_scheduler
    ON SCHEDULE EVERY 1 HOUR
    STARTS CURRENT_TIMESTAMP
    DO BEGIN
    
        DECLARE v_id INT(11);
        DECLARE v_sender_id VARCHAR(255) DEFAULT '';
        DECLARE v_recipient_id VARCHAR(255) DEFAULT '';
        DECLARE v_title TEXT DEFAULT '';
        DECLARE v_body TEXT DEFAULT '';

        DECLARE v_notification_created_for_status varchar(255);

        DECLARE v_cursor_end INT DEFAULT 0;

        DECLARE notification_list_cursor CURSOR FOR
                SELECT
                    notifications.id, notifications.sender_id, notifications.recipient_id,
                        JSON_UNQUOTE(attribute->"$.notification_created_for_status") AS notification_created_for_status                     
                FROM
                    notifications
                WHERE
                    created_at <= NOW() - INTERVAL 1 DAY
                    AND is_read = 0
                    AND JSON_EXTRACT(`attribute`, '$.email_send') = 0
                    AND type_of_notification = 'Internal'   
                    AND (JSON_UNQUOTE(JSON_EXTRACT(`attribute`, '$.notification_created_for_status')) = 'end_of_mission'
                        OR JSON_UNQUOTE(JSON_EXTRACT(`attribute`, '$.notification_created_for_status')) = 'sipsi_doc_not_yet_uploaded');

        DECLARE CONTINUE HANDLER FOR NOT FOUND SET v_cursor_end=1;
        
        OPEN notification_list_cursor;
            notification_loop: LOOP
                FETCH notification_list_cursor INTO v_id,v_sender_id, v_recipient_id, v_notification_created_for_status;
                IF v_cursor_end THEN
                    LEAVE notification_loop;
                END IF;
                
                UPDATE notifications SET `attribute` = JSON_SET(`attribute`, '$.email_send', 1) WHERE id = v_id;

                SET v_sender_id = get_notification_configs_bykey('sender_email_id');
                SET v_title = 'You have an unread message/alert from the last 24 hours, please log into your account.';
                SET v_body = 'You have an unread message/alert from the last 24 hours, please log into your account.';

                CALL mysql.lambda_async(get_notification_configs_bykey('spw_send_email_notification_lambda_fun_arn'),
                    CONCAT('{"recipient_id" : "',v_recipient_id,'","sender_id" : "',v_sender_id,'","title" : "',v_title,'","body" : "',v_body,'"}')
                );
                        
            END LOOP notification_loop;
        CLOSE notification_list_cursor;
        SET v_cursor_end=0;
    END; //

DELIMITER ;

-- INTERVAL 1 DAY
-- SET v_recipient_id = 'tushar.ghosh@bjitgroup.com';




---------------------------------------TRIGGER-------------------------------------------

DROP TRIGGER IF EXISTS send_email_notification_trigger;
DELIMITER //

CREATE TRIGGER send_email_notification_trigger
AFTER INSERT
ON notifications FOR EACH ROW

BEGIN
    DECLARE v_sender_id VARCHAR(255) DEFAULT '';
    DECLARE v_recipient_id VARCHAR(255) DEFAULT '';
    DECLARE v_title TEXT DEFAULT '';
    DECLARE v_body TEXT DEFAULT '';

    SET v_recipient_id = NEW.recipient_id;
    IF NEW.sender_id = 'System' THEN
        SET v_sender_id = get_notification_configs_bykey('sender_email_id');
    ELSE
        SET v_sender_id = NEW.sender_id;
    END IF;
    SET v_title = 'You have a new message/alert, please log into your account.';
    SET v_body = 'You have a new message/alert, please log into your account.';

    CALL mysql.lambda_async(get_notification_configs_bykey('spw_send_email_notification_lambda_fun_arn'),
        CONCAT('{"recipient_id" : "',v_recipient_id,'","sender_id" : "',v_sender_id,'","title" : "',v_title,'","body" : "',v_body,'"}')
    );
END; //

DELIMITER ;
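The JSON string built with CONCAT above is what the Lambda function receives as its event. A minimal sketch of the receiving side, for illustration only -- the handler shape and return values are my assumptions based on the payload format, not the actual spw_send_email_notification code:

```javascript
// Hypothetical receiving side of the payload built by the trigger above.
// The field names mirror the CONCAT'd JSON exactly: recipient_id, sender_id, title, body.
const handler = async (event) => {
    const { recipient_id, sender_id, title, body } = event;

    // Basic validation: the SQL CONCAT does no escaping, so a quote inside any
    // field would already have broken the JSON before reaching this point.
    if (!recipient_id || !sender_id) {
        return { status: 'skipped', reason: 'missing recipient or sender' };
    }

    // The real function would send the email (e.g. via SES) here;
    // this sketch only echoes what it would send.
    return { status: 'sent', to: recipient_id, from: sender_id, subject: title, text: body };
};
// In a real Lambda this would be exported as exports.handler.
```

Note that because the SQL side builds JSON with plain CONCAT, any quote or backslash in a title or body would produce invalid JSON; that is one reason to be careful with special characters going into the database.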

-- INSERT INTO notifications(sender_id, recipient_id, title_html, body_html, type_of_notification, attribute, is_read, created_at)
-- VALUES ('System',
-- 'tushar.ghosh@bjitgroup.com',
-- CONCAT('Bank payment pending for order no:testing '),
-- CONCAT('Bank payment pending for order no: '),
-- 'Internal',
-- NULL,
-- 0 , now() );


-- -------------------- configs table --------------------

DROP TABLE IF EXISTS notification_configs;
CREATE TABLE notification_configs (
    id INT(11) UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    config_key VARCHAR(255) NOT NULL UNIQUE,
    config_value VARCHAR(255) NOT NULL
);
insert into notification_configs (config_key,config_value)
values('spw_send_email_notification_lambda_fun_arn', 'arn:aws:lambda:us-east-1:637895362620:function:spw_send_email_notification');

insert into notification_configs (config_key,config_value)
values('spw_sipsi_doc_not_yet_uploaded_notification_api_arn', 'arn:aws:lambda:us-east-1:637895362620:function:spw_sipsi_doc_not_yet_uploaded_notification_api');

insert into notification_configs (config_key,config_value)
values('backoffice_service_url', 'http://development.spwsystem-stage.com:3001');

insert into notification_configs (config_key,config_value)
values('sender_email_id', 'zia.uddin@bjitgroup.com');



-- -------------------- function --------------------
-- get_notification_configs_bykey
DROP FUNCTION IF EXISTS get_notification_configs_bykey;
DELIMITER //

CREATE FUNCTION get_notification_configs_bykey(p_config_key VARCHAR(255)) RETURNS VARCHAR(1000)
    READS SQL DATA
BEGIN
    DECLARE value_found VARCHAR(1000) DEFAULT '';

    SELECT config_value INTO value_found FROM notification_configs WHERE config_key = p_config_key;
    RETURN value_found;
END; //

DELIMITER ;

-- -------------------- SPLIT_STR --------------------
DROP FUNCTION IF EXISTS SPLIT_STR;
DELIMITER //

CREATE FUNCTION SPLIT_STR(x VARCHAR(255), delim VARCHAR(12), pos INT) RETURNS VARCHAR(255)
    DETERMINISTIC
BEGIN
    RETURN REPLACE(SUBSTRING(SUBSTRING_INDEX(x, delim, pos),
        CHAR_LENGTH(SUBSTRING_INDEX(x, delim, pos - 1)) + 1),
        delim, '');
END; //

DELIMITER ;
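SPLIT_STR is used throughout the events above to take the date part of an ISO timestamp like 2018-12-27T00:00:00. For positive pos values its behavior matches this small JavaScript equivalent, shown only to illustrate what the SQL function returns (edge cases like pos = 0 may differ):

```javascript
// JavaScript equivalent of SPLIT_STR(x, delim, pos) for illustration:
// return the pos-th piece of x split on delim (1-based, like the SQL version),
// or '' when pos is out of range.
function splitStr(x, delim, pos) {
    var parts = x.split(delim);
    return pos >= 1 && pos <= parts.length ? parts[pos - 1] : '';
}

// Usage matching the scheduler code: extract the date part of an ISO timestamp.
splitStr('2018-12-27T00:00:00', 'T', 1); // '2018-12-27'
```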

Wednesday, October 10, 2018

Spread operator in JavaScript

References:
  • https://codeburst.io/javascript-es6-the-spread-syntax-f5c35525f754
  • https://www.typescriptlang.org/docs/handbook/functions.html
  •  https://oprea.rocks/blog/what-do-the-three-dots-mean-in-javascript/
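The articles above cover the details; a few minimal examples of what the spread syntax (...) does:

```javascript
// Spread an array into individual elements (shallow copy plus extras).
const a = [1, 2, 3];
const b = [...a, 4, 5];            // [1, 2, 3, 4, 5]

// Spread in a function call instead of Function.prototype.apply.
const max = Math.max(...a);        // 3

// Merge objects: later keys win.
const defaults = { lang: 'en', debug: false };
const options = { ...defaults, debug: true };  // { lang: 'en', debug: true }

// Rest parameters: collect remaining arguments into an array.
function sum(...nums) {
    return nums.reduce((total, n) => total + n, 0);
}
sum(1, 2, 3);                      // 6
```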

Tuesday, September 4, 2018

GIT push and commit

Steps:
  1. Before starting your code, update the local code with "git fetch origin", then "git pull origin development" (here, development is the branch name).
  2. After finishing the code, run "git pull origin development" again to check whether the code has been updated by other teammates.
  3. If a conflict occurs, first commit your local code. To commit, use "git gui" (especially at BJIT), and commit your code from there.
  4. Then run "git log --oneline -5" to check the local commit history.
  5. "git log --oneline -5 origin/development" checks the remote commit history; here development is the remote branch, and all ASD code belongs to the development branch.
  6. Run "git checkout origin/development" or "git checkout sha1", where sha1 is the latest commit (HEAD) of the remote branch, e.g. ec7eed. At this point a detached HEAD is created, so you need to create a branch or the work will be lost.
  7. After the step-6 command, run "git checkout -b branch_name". Here branch_name is a variable and should be task-related.
  8. "git cherry-pick sha1" -- this sha1 is your local commit.
  9. If there is a conflict, solve it, then commit and push again.
  10. If there is no conflict, just push the code: "git push origin HEAD:refs/for/branch_name". Here branch_name is development, so for example: git push origin HEAD:refs/for/development

NOTE: if "git log --oneline -5 origin/development" does not show an up-to-date log, run "git fetch origin" first.


GIT Clone:
  1. Clone command: git clone ssh://tushar.ghosh@review2.bjitgroup.com:29418/p1207_asd_clientapp_dataentry
  2. cd into the project folder
  3. git checkout development: checks locally first; if not found, checks the remotes. If a remote match is found, it creates a development branch with all the remote development commits/code.
  4. After step 2: git checkout development1 shows an error message because that branch is not found.
  5. After step 2: git checkout -b development creates a new branch from the current branch's commits/code.
    1. git pull origin development: pulls commits from the remote development branch.
  6. git checkout origin/development and git checkout last_sha1_of_branch are the same. Here development is the branch.


GIT Architecture:



Tuesday, June 26, 2018

Experiences



  1. Always try to avoid text boxes and select boxes to keep the browser page light; try to use labels instead.
    • Experience gathered from the line-item issue of ASD data entry. (vvi)
    • Disabling a text box is not a good decision. If the text box will never be enabled, use a label instead; it helps the page load faster.
  2. Floor and ceil issue.
    1. Example:
      1. Math.floor(3.4) + 1 = 4 (I need 4), so use Math.ceil(3.4) = 4.
      2. But Math.floor(4) + 1 = 5 (I need 4), so use Math.ceil(4) = 4.
  3. For for-loop issues, focus on the lower-bound and upper-bound cases; it helps you find the bugs.
  4. Fetch as little data from the database as possible -- only the required data -- otherwise it will slow the server down. Fetching everything and then filtering is a very bad idea.
  5. In case of multiple conditions, use switch-case instead of if-else. A switch-case can be faster than an if-else chain once the number of conditions grows beyond about 5-10.
  6. For Excel or CSV file imports, we should verify that all rows have been imported.
  7. Avoid a for loop just to find an index; use a hash map as the optimized approach. Example: (VVI)

      var arrIndex = alreadyExistProviderId('expedition', dataArray[i].id, invoiceNumber, jsonObj);

      function alreadyExistProviderId(flowType, id, invoiceNumber, jsonObj) {
          var returnIndex = -1;
          for (var i = 0; i < jsonObj[flowType].details.length; i++) {
              if (jsonObj[flowType].details[i].id == id
                      && jsonObj[flowType].details[i].invoice_number == invoiceNumber) {
                  returnIndex = i;
                  break;
              }
          }
          return returnIndex;
      }
  8. Always keep notes of the tasks you have finished, so that in the release period you can check whether each finished task has been tested. I faced this issue in the mirror-entry module: QA only tested the task on the last day because he forgot. So it is crucial to make sure your tasks have been tested before release or delivery.
  9. Before starting a project, we need to settle the code structure and database structure.
  10. Always insert raw data in the database so you don't need to worry about calculations later.
    1. ASD data entry: we calculated the threshold during data entry, which was not a wise decision, because during data update and delete we need to calculate the threshold again. It can create many bugs and wrong calculations.
  11. Always keep data in the database with the full floating-point value; no need to round before insert, just round when showing the report.
    1. Insert 3.999999999 into the DB so we don't need to worry whether the client wants 3.99 or 3.9999.
    2. If we save 3.99 and the client later asks for 3.9999, we are stuck; so rounding on insert is not a wise decision.
  12. Input forms for the same data should not live in different places, because:
    1. data management becomes difficult
    2. So Posting client UI when we create a mission
    3. mission list
    4. try to manage one form for both update and insert
  13. Always try to avoid unnecessary variables in a program; they can make it worse.
  14. When sending data over a URL, always use the database user ID. It will save you from many ugly situations. Example from So Posting: the mission and mission file upload issue.
  15. When the project is calculation-heavy, use a SQL database. When the project uses data but is not calculation-heavy, like posts and comments, use NoSQL.
  16. Don't do it like the ASD threshold tracking and the etubeworld dashboard calculation, because that is not a wise database design.
    1. Always keep the data raw.
    2. No need to precalculate for faster data retrieval.
    3. For faster retrieval, first check the last transaction time against the dashboard table's update time and compare them, then update the dashboard data. If it is already up to date, there is no need to update the dashboard table before showing the dashboard, as in etubeworld.
    4. We can write a procedure that updates the dashboard data after checking the time status, and call that procedure.
  17. Sometimes in the middle of the code we improve it but keep the previous logic to avoid inconsistency; actually we should remove the bad-logic code and clean it up. Keeping the old logic is only justified if the system is already running in production.
  18. A base config is very important, because the server or other settings can change at any time. Make everything as configurable as possible.
  19. Try to keep code as simple as possible, e.g. the statement and declaration report generation for current-month reporting (the current-month report-generation problem).
  20. Be careful with special characters when inserting into the database.
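The loop-based index search in point 7 can be replaced with a one-time map build, turning each lookup from O(n) into O(1). A sketch of that idea -- the data shape mirrors the jsonObj structure from the example, but the helper names are mine:

```javascript
// Build a lookup map once: composite key "id|invoice_number" -> array index.
function buildIndexMap(details) {
    var map = {};
    for (var i = 0; i < details.length; i++) {
        map[details[i].id + '|' + details[i].invoice_number] = i;
    }
    return map;
}

// O(1) replacement for the alreadyExistProviderId() loop in point 7.
function findIndex(map, id, invoiceNumber) {
    var key = id + '|' + invoiceNumber;
    return key in map ? map[key] : -1;
}

// Usage with the same details shape as the original example:
var details = [
    { id: 10, invoice_number: 'INV-1' },
    { id: 11, invoice_number: 'INV-2' }
];
var map = buildIndexMap(details);
findIndex(map, 11, 'INV-2'); // 1
findIndex(map, 99, 'INV-9'); // -1
```

The composite string key also tolerates the loose (==) id comparison of the original, since both number and string ids stringify the same way.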

********************************************************************


     
     

Thursday, May 24, 2018

AWS Data Migration: Homogeneous, Heterogeneous

The service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or Microsoft SQL Server to MySQL.

It also allows you to stream data to Amazon Redshift, Amazon DynamoDB, and Amazon S3 from any of the supported sources, which are Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, SAP ASE, SQL Server, IBM DB2 LUW, and MongoDB.

Features:
  •     Simple to use
  •     Minimal Downtime
  •     Supports Most Widely Used Databases
  •     Low Cost
  •     Fast and Easy to Set-up
  •     Reliable
The AWS Schema Conversion Tool makes heterogeneous database migrations predictable by automatically converting the source database schema and a majority of the database code objects, including views, stored procedures, and functions, to a format compatible with the target database.

Heterogeneous Database Migrations
     In heterogeneous database migrations the source and target database engines are different, as in Oracle to Amazon Aurora, Oracle to PostgreSQL, or Microsoft SQL Server to MySQL migrations.
     Heterogeneous migrations are a two-step process:
  • First use the AWS Schema Conversion Tool to convert the source schema and code to match the target database, and then use the AWS Database Migration Service to migrate data from the source database to the target database.
Database Consolidation: We can use AWS Database Migration Service to consolidate multiple source databases into a single target database.

Continuous Data Replication: You can use AWS Database Migration Service to perform continuous data replication.






References:
  1. https://aws.amazon.com/dms/

Sunday, April 29, 2018

Data Modeling of Graph Database

Data Modeling:

The entities and relationships that we've surfaced in analyzing the user story quickly translate into a simple data model, as shown in Figure 4-1 (Data model for the book reviews user story). Because this data model directly encodes the question presented by the user story, it lends itself to being queried in a way that similarly reflects the structure of the question we want to ask of the data. Since Alice likes Dune, find books that others who like Dune have enjoyed:

MATCH (:Reader {name:'Alice'})-[:LIKES]->(:Book {title:'Dune'})
<-[:LIKES]-(:Reader)-[:LIKES]->(books:Book)
RETURN books.title



Nodes for Things, Relationships for Structure:

Though not applicable in every situation, these general guidelines will help us choose when to use nodes, and when to use relationships:
  • Use nodes to represent entities—that is, the things in our domain that are of interest to us, and which can be labeled and grouped.
  • Use relationships both to express the connections between entities and to establish semantic context for each entity, thereby structuring the domain.
  • Use relationship direction to further clarify relationship semantics. Many relationships are asymmetrical, which is why relationships in a property graph are always directed. For bidirectional relationships, we should make our queries ignore direction, rather than using two relationships.
  • Use node properties to represent entity attributes, plus any necessary entity metadata, such as timestamps, version numbers, etc.
  • Use relationship properties to express the strength, weight, or quality of a relationship, plus any necessary relationship metadata, such as timestamps, version numbers, etc.

Fine-Grained versus Generic Relationships:

It’s the difference between using DELIVERY_ADDRESS and HOME_ADDRESS versus
ADDRESS {type:'delivery'} and ADDRESS {type:'home'} .

Addresses are a good example. Following the closed-set principle, we might choose to create HOME_ADDRESS, WORK_ADDRESS, and DELIVERY_ADDRESS relationships. This allows us to follow specific kinds of address relationships (DELIVERY_ADDRESS, for example) while ignoring all the rest. But what do we do if we want to find all addresses for a user? There are a couple of options here. First, we can encode knowledge of all the different relationship types in our queries: e.g., MATCH (user)-[:HOME_ADDRESS|WORK_ADDRESS|DELIVERY_ADDRESS]->(address). This, however, quickly becomes unwieldy when there are lots of different kinds of relationships. Alternatively, we can add a more generic ADDRESS relationship to our model, in addition to the fine-grained relationships. Every node representing an address is then connected to a user using two relationships: a fine-grained relationship (e.g., DELIVERY_ADDRESS) and the more generic ADDRESS {type:'delivery'} relationship.

Iterative and Incremental Development: 

Graph databases provide for the smooth evolution of our data model. Migrations and denormalization are rarely an issue. New facts and new compositions become new nodes and relationships, while optimizing for performance-critical access patterns typically involves introducing a direct relationship between two nodes that would otherwise be connected only by way of intermediaries.


We will quickly see how different relationships can sit side-by-side with one another, catering to different needs without distorting the model in favor of any one particular need. Addresses help illustrate the point here. Imagine, for example, that we are developing a retail application. While developing a fulfillment story, we add the ability to dispatch a parcel to a customer's delivery address, which we find using the following query:

MATCH (user:User {id:{userId}})
MATCH (user)-[:DELIVERY_ADDRESS]->(address:Address)
RETURN address
 
Later on, when adding some billing functionality, we introduce a BILLING_ADDRESS relationship. Later still, we add the ability for customers to manage all their addresses. This last feature requires us to find all addresses—whether delivery, billing, or some other address. To facilitate this, we introduce a general ADDRESS relationship: 

MATCH (user:User {id:{userId}})
MATCH (user)-[:ADDRESS]->(address:Address)
RETURN address
 
By this time, our data model looks something like the one shown in Figure 4-8. DELIVERY_ADDRESS specializes the data on behalf of the application’s fulfillment needs; BILLING_ADDRESS specializes the data on behalf of the application’s billing needs; and
ADDRESS specializes the data on behalf of the application’s customer management needs. 

 

Just because we can add new relationships to meet new application goals doesn't mean we always have to do this. We'll invariably identify opportunities for refactoring the model as we go. There'll be plenty of times, for example, where an existing relationship will suffice for a new query, or where renaming an existing relationship will allow it to be used for two different needs. When these opportunities arise, we should take them.





Wednesday, April 25, 2018

Introduction of Graph Database


Graph Database:

Formally, a graph is just a collection of vertices and edges—or, in less intimidating language, a set of nodes and the relationships that connect them.

A labeled property graph has the following characteristics:
  •  It contains nodes and relationships.
  •  Nodes contain properties (key-value pairs).
  •  Nodes can be labeled with one or more labels.
  •  Relationships are named and directed, and always have a start and end node.
  •  Relationships can also contain properties.
A graph database management system (henceforth, a graph database) is an online database management system with Create, Read, Update, and Delete (CRUD) methods that expose a graph data model. Graph databases are generally built for use with transactional (OLTP) systems.
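
The characteristics listed above can be seen in a small Cypher sketch (the names and values are invented for illustration):

```cypher
// Two labeled nodes, each holding key-value properties...
CREATE (jim:Person {name: 'Jim'})
CREATE (ian:Person {name: 'Ian'})
// ...connected by a named, directed relationship that itself
// carries a property.
CREATE (jim)-[:KNOWS {since: 2011}]->(ian)
```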

There are two properties of graph databases we should consider when investigating graph database technologies:
  • The underlying storage: Some graph databases use native graph storage that is optimized and designed for storing and managing graphs. Not all graph database technologies use native graph storage, however. Some serialize the graph data into a relational database, an object-oriented database, or some other general-purpose data store.
  • The processing engine: Some definitions require that a graph database use index-free adjacency, meaning that connected nodes physically “point” to each other in the database.
    • A variety of different types of graph compute engines exist. Most notably there are in-memory/single machine graph compute engines like Cassovary and distributed graph compute engines like Pegasus or Giraph. Most distributed graph compute engines are based on the Pregel white paper, authored by Google, which describes the graph compute engine Google uses to rank pages.

The Power of Graph Databases:
  1.  Performance: One compelling reason, then, for choosing a graph database is the sheer performance increase when dealing with connected data versus relational databases and NOSQL stores. In contrast to relational databases, where join-intensive query performance deteriorates as the dataset gets bigger, with a graph database performance tends to remain relatively constant, even as the dataset grows.
  2. Flexibility: As developers and data architects, we want to connect data as the domain dictates, thereby allowing structure and schema to emerge in tandem with our growing understanding of the problem space, rather than being imposed upfront, when we know least about the real shape and intricacies of the data. Graphs are naturally additive, meaning we can add new kinds of relationships, new nodes, new labels, and new subgraphs to an existing structure without disturbing existing queries and application functionality.
  3.  Agility: We want to be able to evolve our data model in step with the rest of our application, using a technology aligned with today’s incremental and iterative software delivery practices. Modern graph databases equip us to perform frictionless development and graceful systems maintenance. In particular, the schema-free nature of the graph data model, coupled with the testable nature of a graph database’s application programming interface (API) and query language, empower us to evolve an application in a controlled manner.
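
The additive nature described above can be sketched in Cypher (a hypothetical example: the data and relationship names are assumptions, not from the book):

```cypher
// The existing graph already links Person nodes with KNOWS.
// A new WORKS_WITH relationship type is simply added alongside;
// queries that traverse KNOWS are unaffected.
MATCH (a:Person {name: 'Jim'}), (b:Person {name: 'Ian'})
CREATE (a)-[:WORKS_WITH {since: 2018}]->(b)
```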

One of the most popular structures for representing geospatial coordinates is called an R-Tree. An R-Tree is a graph-like index that describes bounded boxes around geographies. Using such a structure we can describe overlapping hierarchies of locations. For example, we can represent the fact that London is in the UK, and that the postal code SW11 1BD is in Battersea, which is a district in London, which is in southeastern England, which, in turn, is in Great Britain. And because UK postal codes are fine-grained, we can use that boundary to target people with somewhat similar tastes.
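
That containment hierarchy might be modeled in a graph as a chain of relationships. The following is a sketch only; the labels and the CONTAINED_IN relationship name are assumptions, not a standard schema:

```cypher
// Each location node points at the larger location that contains it.
CREATE (:PostalCode {code: 'SW11 1BD'})-[:CONTAINED_IN]->
       (:District {name: 'Battersea'})-[:CONTAINED_IN]->
       (:City {name: 'London'})-[:CONTAINED_IN]->
       (:Region {name: 'South East England'})-[:CONTAINED_IN]->
       (:Country {name: 'Great Britain'})
```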

Such pattern-matching queries are extremely difficult to write in SQL, and laborious to write against aggregate stores, and in both cases they tend to perform very poorly. Graph databases, on the other hand, are optimized for precisely these types of traversals and
pattern-matching queries, providing in many cases millisecond responses.

The Labeled Property Graph Model

A labeled property graph is made up of nodes, relationships, properties, and labels.
  •  Nodes contain properties. Think of nodes as documents that store properties in the form of arbitrary key-value pairs. In Neo4j, the keys are strings and the values are the Java string and primitive data types, plus arrays of these types.
  •  Nodes can be tagged with one or more labels. Labels group nodes together, and indicate the roles they play within the dataset. 
  •  Relationships connect nodes and structure the graph. A relationship always has a direction, a single name, and a start node and an end node—there are no dangling relationships. Together, a relationship’s direction and name add semantic clarity to the structuring of nodes.
  •  Like nodes, relationships can also have properties. The ability to add properties to relationships is particularly useful for providing additional metadata for graph algorithms, adding additional semantics to relationships (including quality and weight), and for constraining queries at runtime.
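
Relationship properties such as weights can then constrain queries at runtime, for example (an illustrative sketch; the strength property is invented):

```cypher
// Only follow KNOWS relationships rated as strong ties.
MATCH (jim:Person {name: 'Jim'})-[k:KNOWS]->(friend)
WHERE k.strength > 0.5
RETURN friend.name
```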

Query Languages for Graph Databases
  1. Cypher (most popular)
  2. SPARQL
  3. Gremlin


Cypher Philosophy:
Cypher is designed to be easily read and understood by developers, database professionals, and business stakeholders. Its ease of use derives from the fact that it is in
accord with the way we intuitively describe graphs using diagrams.



[Figure: three mutual friends (Emil, Jim, and Ian) connected by KNOWS relationships]
This pattern describes three mutual friends. Here’s the equivalent ASCII art representation in Cypher:
           (emil)<-[:KNOWS]-(jim)-[:KNOWS]->(ian)-[:KNOWS]->(emil)

The previous Cypher pattern describes a simple graph structure, but it doesn’t yet refer to any particular data in the database. To bind the pattern to specific nodes and relationships in an existing dataset, we must specify some property values and node labels that help locate the relevant elements in the dataset. For example:

(emil:Person {name:'Emil'})
 <-[:KNOWS]-(jim:Person {name:'Jim'})
 -[:KNOWS]->(ian:Person {name:'Ian'})
 -[:KNOWS]->(emil)

Like most query languages, Cypher is composed of clauses. The simplest queries consist of a MATCH clause followed by a RETURN clause (we’ll describe the other clauses you can use in a Cypher query later in this chapter). Here’s an example of a Cypher query that uses these two clauses to find the mutual friends of a user named Jim:

MATCH (a:Person {name:'Jim'})-[:KNOWS]->(b)-[:KNOWS]->(c),
(a)-[:KNOWS]->(c)
RETURN b, c


Cypher Clauses:
  1. MATCH: The MATCH clause is at the heart of most Cypher queries. We draw nodes with parentheses, and relationships using pairs of dashes with greater-than or less-than signs ( --> and <-- ). The < and > signs indicate relationship direction. Between the dashes, set off by square brackets and prefixed by a colon, we put the relationship name. Node labels are similarly prefixed by a colon. Node (and relationship) property key-value pairs are then specified within curly braces (much like a JavaScript object).
  2. RETURN: This clause specifies which nodes, relationships, and properties in the matched data should be returned to the client.
  3. WHERE: Provides criteria for filtering pattern-matching results.
  4. CREATE and CREATE UNIQUE: Create nodes and relationships.
  5. MERGE: Ensures that the supplied pattern exists in the graph, either by reusing existing nodes and relationships that match the supplied predicates, or by creating new nodes and relationships.
  6. DELETE: Removes nodes, relationships, and properties.
  7. SET: Sets property values.
  8. FOREACH: Performs an updating action for each element in a list.
  9. UNION: Merges results from two or more queries.
  10. WITH: Chains subsequent query parts and forwards results from one to the next, similar to piping commands in Unix.
  11. START: Specifies one or more explicit starting points (nodes or relationships) in the graph. (START is deprecated in favor of specifying anchor points in a MATCH clause.)
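
Several of these clauses can be combined in a single query. The following sketch (the data is invented for illustration) matches a pattern, filters it with WHERE, pipes an aggregate forward with WITH, and returns the result:

```cypher
// Count each person's friends, then keep only people
// with more than one friend.
MATCH (p:Person)-[:KNOWS]->(friend)
WHERE p.name = 'Jim'
WITH p, count(friend) AS numberOfFriends
WHERE numberOfFriends > 1
RETURN p.name, numberOfFriends
```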






Reference Book:
  1. O’Reilly Graph Databases book



