Databases are the lifeblood of every modern business. Data enables businesses to make informed, valuable decisions, and insights, patterns, and outcomes all depend on high-quality data. Therefore, when it comes time to move from an older version of a system to a newer one, data migration planning is essential.

The primary purpose of a data migration plan is to improve the performance of the entire process and offer a structured approach to the data. Data migration policies help eliminate errors and redundancies that might occur along the way.

There are many complexities involved in the data migration process. It is not a simple copy-and-paste task; it is far more complicated, so we need sound data migration strategies and best practices for the entire process.

Data migration can take anywhere from a couple of days to several months or even a year, depending on the amount of data, the capabilities of the legacy system, and its compatibility with the new system. While data migration tools and software make the work easier, a data migration checklist is still needed before beginning the procedure.

A few reasons why a data migration strategy matters:

  1. Prevent failure
    Data migration planning helps avoid failure by outlining, from the beginning, the problems that might occur. Data migration should never be approached casually, and cloud data migration projects require particularly close attention to prevent errors and issues.
  2. Define the larger scope
    Following data migration best practices helps define the larger scope of the migration. Whether the driver is a transition from legacy systems or a tool upgrade, a data migration plan establishes what the process aims to achieve.
  3. Meet deadlines
    Meeting deadlines becomes possible through strategic data migration. Data is crucial at different stages, and it needs to be available at the right moment.

Data Migration Planning Checklist


There are many important elements to a data migration strategy, and each is critical: leaving even a single factor out may undermine the strategy's effectiveness. The data migration planning checklist can comprise the following:

  • Data audit
    Before migrating, conduct a complete data audit. Knowing the data is more essential than anything else, because the audit reveals its characteristics.
  • System cleanup
    Clean up the system with data migration software and tools to fix any issues before they surface during migration. Third-party tools can be particularly effective here.
  • Data migration methodologies
    Outline the techniques, procedures, and data migration steps before beginning. Methodologies matter because they determine the success of the process.
  • Maintenance & support
    After migration, the data needs regular maintenance and checkups. Data may degrade over time, so it should be assessed periodically for errors.
  • Data integrity
    Governance and compliance are important parts of a data migration strategy. Regularly tracking and monitoring data quality helps guard against vulnerabilities.

Data Migration Strategies and Best Practices


The best data migration strategies and best practices include:

  1. Back up the data
    One of the top data migration best practices is to back up the data; losing even a single piece of data is not affordable at any level. Backup resources protect the data from mishaps during the process and prevent failures from turning into permanent data loss.
  2. Design the migration
    There are two ways to design the data migration steps – big bang and trickle. Big bang involves completing the data migration in a limited timeframe, during which the servers would be down. Trickle involves completing the data migration process in stages. Designing the migration enables you to determine which is the right method for your requirements.
  3. Test the data migration plan
    The importance of testing the chosen strategy can never be stressed enough. Conduct live tests with real data to gauge the effectiveness of the process; this may involve some risk, as the data is crucial. To ensure the process completes successfully, test every aspect of the data migration plan.
  4. Set up an audit system
    Another top data migration best practice is to set up an audit system for the migration process. Every stage needs to be carefully audited for errors, and the audit is important for ensuring the accuracy of the migration. Without an audit system, there is no way to monitor what is happening to the data at each phase.
  5. Simplify with data migration tools
    Consider data migration software that can simplify the process, focusing on the connectivity, security, scalability, and speed of the software. Data migration is far more challenging when the right tools are not available.
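As a concrete illustration, the backup, migrate, and audit steps above can be sketched in a few lines. This is a hypothetical, minimal example using SQLite with an invented `customers` table, not a production migration tool:

```python
import shutil
import sqlite3

def migrate_with_audit(src_path, dst_path):
    """Sketch of a migration run: back up, copy, then audit row counts."""
    # 1. Backup: keep a copy of the source before touching anything.
    shutil.copyfile(src_path, src_path + ".bak")

    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)

    # 2. Migrate: copy rows into the new schema (here, a straight copy).
    dst.execute("CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT)")
    rows = src.execute("SELECT id, name FROM customers").fetchall()
    dst.executemany("INSERT INTO customers (id, name) VALUES (?, ?)", rows)
    dst.commit()

    # 3. Audit: verify the row counts match before declaring success.
    src_count = src.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
    dst_count = dst.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
    src.close()
    dst.close()
    assert src_count == dst_count, "audit failed: row counts differ"
    return dst_count
```

A real migration would add per-row validation and error logging at each step, but the backup-migrate-audit shape stays the same.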

It is now all too common for hackers to break into a merchant’s computer system and steal credit card information, which they can use to charge huge amounts to your account. But imagine if, instead of a person’s name, card number, expiration date, and other details, the hackers got only a meaningless jumble of numbers and letters. That’s credit card tokenization in action, and it’s a key way payment systems keep card data safe.
Tokenization is a data security technique in which a sensitive data element is replaced (“tokenized”) with a non-sensitive substitute, called a token, rendering the data useless to an attacker. In other words, tokenization replaces sensitive card data with a jumble of letters and numbers that have no value to a hacker. Tokenization can safeguard any sensitive data, including PII, medical records, banking details, and credit card payments.

Credit card tokenization is the process of replacing sensitive customer details with an algorithmically generated value that is impossible to trace back to the original data. The result, a credit card token, cannot be misused, because the tokenization process ensures the data cannot be traced back to its source. For example, when a customer makes a purchase with a credit or debit card, the tokenization process takes the card number and transforms it into a mathematically irreversible token. If the card needs to be billed again in the future, such as for a recurring payment or subscription, the payment system uses the token associated with the card rather than the card number itself. Credit or debit card tokenization therefore increases trust in organizations and significantly reduces the risk of exposing sensitive cardholder data.

The tokenization process can take many forms, but it’s useful to consider the following possible scenarios:

1. E-commerce Payment Tokenization

  1. A customer makes a purchase and uses their credit card to check out (e.g., 1234 4321 1234 5678).
  2. The card number is changed to a random sequence of characters (e.g., EUSH127ABD5562).
  3. The relationship between the actual card number and the token is stored in a separate vault.
  4. If the transaction is recurring for a monthly or another subscription or a refund is required, the merchant can simply use the token rather than needing to store the sensitive card data itself.
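The vault-based flow above can be sketched in a few lines of Python. The class name and card number here are invented for illustration; in practice tokenization is performed by a PCI-compliant payment provider, not by merchant application code:

```python
import secrets

class TokenVault:
    """Toy token vault: maps card numbers to random tokens (illustration only)."""

    def __init__(self):
        # token -> card number; in a real system this lives in a separate,
        # hardened vault, never alongside merchant data.
        self._vault = {}

    def tokenize(self, card_number: str) -> str:
        # Generate a random token with no mathematical link to the card number.
        token = secrets.token_hex(8).upper()
        self._vault[token] = card_number
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault holder can map a token back to the original card.
        return self._vault[token]
```

The merchant stores only the token for recurring billing or refunds; a stolen token reveals nothing about the card it stands for.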

2. Mobile Payment Tokenization

When users of Apple Pay or Android Pay add a credit card to their mobile device, the card number is tokenized and stored on the phone. When a purchase is made, the token is used instead of the payment card itself, adding an extra layer of protection to the transaction.

3. App Payment Tokenization

Nowadays, using applications to purchase goods is becoming more common. If a user’s phone contains a token, these apps are unable to retrieve or access any credit card details; all bank details are locked down, and hackers or fraudsters would be unable to commit an offense with the data available to them. Checking out to finalize a purchase is simple too, as many apps are integrated to link directly with your stored shipping and billing information.

The decrease in data theft and fraud as a result of tokenization means businesses are less likely to incur reputational or financial damage from a data breach. Customers will also feel reassured and confident shopping with merchants who use tokenization, as it shows a strong emphasis on protecting sensitive customer information. Tokenization has additional benefits, particularly when combined with PCI-validated Point-to-Point Encryption.

Tokenization vs. Encryption

When data is encrypted, it is coded into a hidden language, similar to tokenization. However, encryption uses a mathematical formula that can be reverse-engineered, meaning encrypted sequences can be deciphered, risking exposure of sensitive information such as credit card data.
Conversely, tokenization turns a meaningful piece of data into a string of random characters that cannot be reversed, so if it is breached, no meaningful value is exposed. This is a huge benefit in the payment card industry, supporting the highest security standard possible. The only thing a hacker would obtain is a list of token numbers, which would be of no use to them. This makes stolen credit card data unusable, adding additional layers of security.
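The contrast can be illustrated with a deliberately toy example: a reversible XOR "cipher" standing in for encryption, versus a random token that can only be resolved through a vault lookup. Both are illustrations only, not real security primitives:

```python
import secrets

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy "encryption": anyone holding (or recovering) the key gets the data back.
    return bytes(b ^ k for b, k in zip(data, key))

vault = {}

def tokenize(data: bytes) -> str:
    # Tokenization: the output is random; recovery requires the vault, not math.
    token = secrets.token_hex(len(data))
    vault[token] = data
    return token

card = b"4321 9876"
key = secrets.token_bytes(len(card))

ciphertext = xor_encrypt(card, key)
assert xor_encrypt(ciphertext, key) == card  # encryption is reversible with the key

token = tokenize(card)
assert vault[token] == card                  # a token reverses only via the vault
```

The asymmetry is the point: breaking the cipher's key recovers the card, while no amount of analysis of the token alone yields anything.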

Benefits of Tokenization:

Tokenization can provide several important benefits for securing sensitive customer data:

  • Enhanced customer assurance—tokenization offers an additional layer of security for eCommerce websites, increasing consumer trust.
  • Increased security and protection from breaches—by using tokenization, businesses do not have to capture sensitive information in their input terminals, keep it in internal databases, or transmit the data through their information systems. This safeguards businesses from security breaches.
  • Data tokenization improves medical records security—organizations can use tokenization solutions for scenarios covered under HIPAA. By substituting electronically protected health information (ePHI) and non-public personal information (NPPI) with a tokenized value, healthcare organizations can better comply with HIPAA regulations.
  • Tokenization makes credit card payments more secure—the payment card industry needs to comply with extensive standards and regulations. Tokenization solutions provide a way to protect cardholder data, such as magnetic swipe data, primary account number, and cardholder information. Companies can comply with industry standards more easily, and better protect client information.

Tokenization for Security and Compliance:

Tokenization works best alongside multiple additional layers of protection, including:

  • Database firewall—prevents SQL injection and similar threats, while assessing for known vulnerabilities.
  • User rights management—tracks the data movements and access of privileged users to identify excessive and unused privileges.
  • Data loss prevention (DLP)—monitors and tracks data in motion, at rest, in cloud storage, or on endpoint devices.
  • User behavior analytics—creates a baseline of data access behavior and uses machine learning to isolate and alert on abnormal and potentially dangerous activity.
  • Data discovery and classification—discloses the volume, location, and context of data on-premises and in the cloud.
  • Database activity monitoring—monitors relational databases, data warehouses, big data, and mainframes to produce real-time alerts on violations of policy.
  • Alert prioritization—using AI and machine learning technology to examine the stream of security events and prioritize the most important events.

If you don’t get the data right, nothing else matters. However, the business focus on applications often overshadows the priority of a well-organized database design.
Several factors can lead to poor database design: lack of experience, a shortage of the necessary skills, tight timelines, and insufficient resources can all contribute. Addressing a few simple data modeling and design fundamentals puts you on the right path. Here, I will explain a few common database design “sins” that can easily be avoided, and ways to correct them in future projects.

1) Poor or missing documentation for databases in production: Documentation for databases usually falls into three categories: incomplete, inaccurate, or none at all. This causes developers, DBAs, architects, and business analysts to scramble to get on the same page. They are left up to their own imagination to interpret the meaning and usage of the data. The best approach is to place the data models into a central repository and spawn automated reports so that with minimal effort, everyone benefits. Producing a central store of models is only half the battle, though. Once that is done, executing validation and quality metrics will enhance the quality of the models over time. It will help in data management and can extend what metadata is captured in the models.

2) Inadequate or no normalization:
Sometimes a database structure must be de-normalized to achieve optimal performance, but sacrificing flexibility will paint you into a corner. Despite a long-held belief among developers, one table that stores everything is not always optimal. Another common mistake is repeating values stored in a table, which can greatly decrease flexibility and increase the difficulty of updating the data. Understanding even the basics of normalization adds flexibility to a design while reducing redundant data.
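As a minimal sketch of the point, using SQLite with an invented orders/customers schema: repeating customer details on every order row forces multi-row updates, while a normalized design makes the same change a single-row update.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Unnormalized: customer details repeated on every order row, so a change of
# city means updating many rows (and risking inconsistency between them).
conn.execute("""CREATE TABLE orders_flat (
    order_id INTEGER PRIMARY KEY,
    customer_name TEXT, customer_city TEXT, item TEXT)""")

# Normalized: customer details stored once, referenced by a foreign key.
conn.execute("""CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY, name TEXT, city TEXT)""")
conn.execute("""CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(customer_id),
    item TEXT)""")

conn.execute("INSERT INTO customers VALUES (1, 'Alice', 'Pune')")
conn.executemany("INSERT INTO orders (customer_id, item) VALUES (?, ?)",
                 [(1, 'book'), (1, 'pen')])

# A city change is now a single-row update, not a scan of every order.
conn.execute("UPDATE customers SET city = 'Mumbai' WHERE customer_id = 1")
```

In the flat design, the same `UPDATE` would have to touch every order row for that customer, and missing one leaves contradictory data behind.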

3) Inappropriate data modeling:
There are numerous examples of customers performing the modeling upfront, but once the design is in production, all modeling ceases. To maintain flexibility and ensure consistency when the database changes, those modifications need to find their way back to the model.

4) Improper storage of reference data:
There are two main problems with reference data. It is either stored in many places or, even worse, embedded in the application code. Reference values provide valuable documentation which should be communicated in an appropriate location. The best chance is often via the model. The key is to have it defined in one place and used in other places.

5) Ignoring foreign key or check constraints: When reverse engineering databases, end-users complain all the time about the lack of referential integrity (RI) or validation checks defined in the database. For older database systems, it was thought that foreign keys and check constraints slowed performance, so RI and checks should be done in the application. If it is possible to validate the data in the database, do it there: error handling will be drastically simplified, and data quality will increase as a result.

6) Avoiding use of domains and naming standards:
Domains and naming standards are probably two of the most important practices that can be incorporated into modeling. Domains allow the creation of reusable attributes, so that the same attribute is not created in different places with different properties. Naming standards make it possible to identify those attributes clearly and consistently.

7) Inappropriate primary key: The simplest principle to remember when picking a primary key is SUM: Static, Unique, Minimal. It is not necessary to delve into the whole natural vs. surrogate key debate; however, it is important to know that although surrogate keys may uniquely identify the record, they do not always uniquely identify the data. There is a time and a place for both, and you can always create an alternate key on the natural key if a surrogate is used as the primary key.
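A small SQLite sketch (with an invented employees table) shows the pattern: a surrogate key serves as the primary key, while a UNIQUE alternate key on the natural identifier still rejects duplicate data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE employees (
    employee_id INTEGER PRIMARY KEY,      -- surrogate: static, unique, minimal
    national_id TEXT NOT NULL UNIQUE,     -- alternate key on the natural key
    name TEXT)""")

conn.execute("INSERT INTO employees (national_id, name) VALUES ('AB-123', 'Ravi')")
try:
    # A second row for the same person: the surrogate key alone would accept
    # this duplicate, but the UNIQUE alternate key rejects it.
    conn.execute("INSERT INTO employees (national_id, name) VALUES ('AB-123', 'Ravi')")
except sqlite3.IntegrityError:
    print("duplicate natural key rejected")
```

Without the alternate key, both rows would be inserted with different `employee_id` values, uniquely identifying the records but not the data.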

8) Using a composite key: A composite primary key is a primary key that contains two or more columns. The primary key field serves a single purpose of uniquely identifying the row within the system, as a result, it’s used in other tables as foreign keys. Using a composite primary key means there is a need to add two or three columns in these other tables to link back to this table, which is not as easy or efficient as using a single column.

9) Poor indexing: Indexes are database objects that allow certain queries to run more efficiently. They are not a silver bullet for performance; they don’t solve every issue. Three mistakes are commonly made with indexes:

a) No indexes at all: Without any indexes, as the table grows with more data, queries will likely get slower and slower.

b) Too many indexes: Having too many indexes can also be a problem. Indexes help with reading data from a table, but they slow down DML operations (inserts, updates, and deletes).

c) Indexes on every column: It might be tempting to add an index to every field of a table, but doing so can slow down the database.
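The trade-off is easy to observe with SQLite's `EXPLAIN QUERY PLAN`, which reports whether a query scans the whole table or uses an index. The table and index names here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                 [("cust%d" % (i % 100), float(i)) for i in range(1000)])

def plan_for(sql):
    # EXPLAIN QUERY PLAN describes how SQLite will execute the statement.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders WHERE customer = 'cust7'"
before = plan_for(query)   # full table scan: every row is examined

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
after = plan_for(query)    # now a search using idx_orders_customer
```

Running the same lookup against a large table before and after `CREATE INDEX` (exact plan wording varies by SQLite version) makes the "no indexes at all" mistake concrete; the cost of mistakes (b) and (c) shows up on the write side instead.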

10) Incorrect data types: Often stemming from incomplete knowledge of business flows, storing data in the wrong format can cause problems both when storing the data and when retrieving it later.

An immense volume of data flows into and out of today’s businesses, but it is increasingly difficult to convert this data into actionable insights. Data science, on the other hand, offers incredible potential for all types of businesses to design models that identify trends and use them as the foundation for transformative software, from locating IoT devices to predictive analytics. These models are used to improve customer experience, processing efficiency, and user engagement, and to tackle situations where data can crack difficult problems. The market for data science services is growing at the speed of light; it plays a vital role in transforming businesses digitally at a time when many companies want to unlock the strength of their business data but lack the required proficiency and support.

Digital transformation is the all-embracing transformation of the many activities an organization controls, undertaken to leverage the opportunities produced by digital technologies and data. It touches the ubiquitous era of digitalization regardless of the size and worth of the industry:

  • It reflects digital trends in operations and policies that fundamentally change how businesses operate and serve customers.
  • It depends on organizational data to achieve targets more efficiently and deliver value to customers; how it does so is covered in the next section.

The native components most likely to be transformed are business models, operations, infrastructure, culture, and the quantitative and qualitative ways of searching for new sources of customer value. No wonder digital transformation covers all domains of business: product innovation, operations, finance, retail and marketing strategies, customer service, and more. Digitalization not only speeds up business processes and performance but also creates business opportunities. It also helps organizations outpace digital disruption and secures their position in a fast-growing business environment. Consider the situation where an individual wants to identify:

  • Which sections need to be transformed,
  • How to drop the risk factors,
  • How to withdraw unwanted pitfalls from resources.

Most industries have chosen data-driven approaches to digitally transform their businesses; in fact, various big data technologies are available to support the appropriate data-driven approach. In short, companies are using data science and associated technologies to make their environment completely digital, and BI for gathering, computing, and interrogating business data that can then be turned into actionable insights. Recent surveys show that more and more organizations are embracing data science as a service to access a large pool of data experts and enhance their decision-making. These experts can craft digital strategies and plans aimed at increasing revenue, reducing costs, or improving efficiency.

Below are several of the ways data science, delivered as a service, adds value to a business.

Authorizing decision-making via a data-driven approach – Like data science, digital transformation is a convoluted process: customer data combined with appropriate business operations can be leveraged to make informed decisions while limiting unwanted risks. With data science capabilities, we can determine how to transform the business digitally and which areas of the business need to be transformed.

Classifying warnings, opportunities, and scope via data insights – The volume of available information and insights grows rapidly with the increasing volume of data, which in turn creates opportunities, and hence scope to grow, for the business as well as the individual. Data science services help organizations cope with the shortage of data experts and give a detailed picture of their business environment. Data science enables next-generation outcomes: predicting what is going to happen and how to guard against risks, if any. It gives organizations real-time visibility into their customers and supports decisions that optimize internal processes for greater activity, expanded flexibility, and reduced cost.

Adding more value with machine learning – As a major part of the data science ecosystem, machine learning can stimulate digital transformation effectively in bioinformatics and other industries. It helps break down massive data to identify trends and exceptions. One impactful approach is artificial intelligence, which uses machine learning algorithms to deliver insights, design timeline models, and anticipate where disruptions may occur.



AI has a whole host of practical uses, not only in the fintech industry but in the wider finance world, and even the wider world beyond that. The general gist of AI is that it solves problems, allowing companies to save both time and money. According to predictions from several research firms, AI technology will allow financial institutions to reduce their operational costs by roughly 22-25% by 2030. Adopting AI enables the industry to create a better environment for the customer, providing better customer service across a variety of business activities.
In many instances, the practical use of AI involves data: enabling companies to analyze that data in an efficient, strategic way. Organizations, particularly financial institutions, often have streams of data on their consumers but rarely do much with it, given the time it would take to go through and analyze it to find anything meaningful. This is where artificial intelligence comes in: AI and machine learning are very effective at analyzing large amounts of data in real time, then drawing conclusions or recommending actions from that data.
One example of applying AI to data is banks deciding whether someone is creditworthy. Banks and other financial institutions want to offer credit to their customers, but they want to price it accordingly: they don’t want to overcharge trustworthy customers or undercharge customers who may be more of a risk. Traditionally, to determine someone’s creditworthiness you would look at their credit scores, the credit bureau data kept by agencies like Experian. By utilizing AI, however, these institutions can look at their own customer data and draw conclusions from there. From these large portfolios of consumer data, AI can infer many kinds of relationships; details like your job, where you live, or where you work are the more obvious sources.
Another way AI’s data analysis can be used is for fraud detection and prevention. AI and machine learning solutions can react to the data they are presented with in real time, finding patterns and relationships and even recognizing fraudulent activity. As you can imagine, this is hugely beneficial to the financial world: an unbelievable number of digital transactions take place every hour, making heightened cybersecurity and successful fraud detection a necessity. AI takes the brunt of the work away from fraud analysts, allowing them to focus on higher-level cases while the AI ticks along in the background identifying the smaller issues. One way AI detects fraud is by spotting anomalies: going back to our banking scenario, perhaps a person has tried to apply for 10 identical loans in 5 minutes; the AI would detect this as an anomaly and flag it as suspicious. The machine has a baseline sense of what is “normal,” and when something deviates from that, it can identify and review it.
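The baseline-and-deviation idea can be sketched with a simple z-score check. This is a toy stand-in for a real machine learning fraud model, using invented numbers for the 10-loans-in-5-minutes scenario:

```python
from statistics import mean, stdev

def is_anomalous(history, new_value, threshold=3.0):
    """Flag a value that deviates from the baseline by more than `threshold`
    standard deviations (a toy stand-in for an ML fraud model)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# Baseline: loan applications per 5-minute window are normally 0-2.
history = [0, 1, 0, 2, 1, 0, 1, 1, 0, 2]
print(is_anomalous(history, 10))  # a burst of 10 applications is flagged: True
print(is_anomalous(history, 1))   # normal activity passes: False
```

Production systems learn far richer baselines (per customer, per merchant, per time of day), but the shape is the same: model "normal," then flag deviations for review.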
Other use cases of AI include automated customer support. We are all used to seeing chat boxes pop up at the bottom of our screens when browsing the internet, and these are of course AI bots primed and ready to help. Companies can simply load up their most commonly asked questions and tell the bot what answers to give, also instructing it to refer the customer elsewhere for more complex issues. Being able to answer frequently asked questions about the company or its product or service gives the customer a better experience, as they get the answer to their query straight away, while saving the company time and money: no one has to sit and type responses, and workers can direct their attention elsewhere.

Ransomware is malicious software that allows a hacker to restrict access to an individual’s or company’s critical information in some way and then demand payment to lift the restriction. The most common form of restriction today is encryption of important data on the computer or network, which essentially lets the attacker hold user data or a system hostage. The ransom message may look like “!!! IMPORTANT INFORMATION!!! All of your files are encrypted with RSA-2048 and AES-128 ciphers,” or you might see a readme.txt stating, “Your files have been replaced by these encrypted containers and aren’t accessible; you will lose your files on [a Date value] unless you pay $2000 in Bitcoin.” Ransomware is installed covertly on a system and executes a cryptovirology attack (cryptovirology is the field that studies how cryptography can be used to design powerful malicious software) that locks or encrypts valuable files on systems and networks. Without a comprehensive network segmentation or micro-segmentation policy, malicious actors can also move laterally within the organization’s network, infect endpoints and servers, and demand a ransom for access to the valuable data.


The Current State of Ransomware

The infographic below presents eye-opening facts that demonstrate the danger behind this cyber threat.

Ransomware stats and numbers for 2021

Types of Ransomware

Ransomware attacks can be deployed in different forms. Some variants may be more harmful than others, but they all have one thing in common: a RANSOM.

1) Crypto malware

2) Lockers

3) Scareware

4) Doxware

5) RaaS (Ransomware-as-a-Service)

6) Mac ransomware (e.g., KeRanger)

7) Ransomware on mobile devices

Preventing Ransomware

There are many steps organizations can take to prevent ransomware, with varying degrees of effectiveness. Below are a few actions that can reduce the risk of a ransomware attack:

 

Action

Description

1

Staff Awareness

Raising awareness about ransomware is a baseline security measure. But it could only take one employee lowering their guard for an organization to be compromised. As training sessions have little influence over staff for every potential attack, it makes added security more imperative.

2

Spam Filter

Cybercriminals send millions of malicious emails to at-random organizations and users, but an effective spam filter that continually adapts alongside a cloud-based threat intelligence center can prevent more than 99% of these from ever reaching employees’ desktops.

3

Configure Desktops Extensions

Employees should be trained not to double-click on executable files with a .exe extension. However, Windows hides file extensions by default, allowing a malicious executable such as “evil.doc.exe” to appear to be a Word document called “evil.doc”.  Ensuring that extensions are always displayed can go a long way to countering that kind of threat.

4

Block Executables

Filtering files with a .exe extension from emails can prevent some malicious files from being delivered to employees, but bear in mind that this isn’t foolproof. Malicious emails can instruct employees to rename files, and ransomware is also increasingly being delivered as JavaScript files (see below).

5

Block Malicious JavaScript Files

Ransomware being delivered in .zip files containing malicious JavaScript files are common. These are disguised as text files with names like “readme.txt.js”  – and often  just visible as “readme.txt”, with a script icon for a text file. You can prevent this vulnerability for staff by disabling Windows Script Host.

6

Restrict Use of Elevated Privilege

Ransomware can only encrypt files that are accessible to a particular user on their system – unless it includes code that can elevate a user’s privileges as part of the attack, which is where patching and zero trust come into play.

7

Promptly Patch Software

It’s a basic security precaution to ensure that all software is updated with the latest security patches, but it’s worth reiterating because breaches continue due to prolonging updating. Just in 2020, the SolarWinds hack could’ve been prevented for organizations that promptly patch software.

8

Zero Trust

Moving toward zero trust offers visibility and control over your network, including stopping ransomware.  The next three actions: prioritize assets and evaluate traffic, microsegmentation, and adaptive monitoring are central steps of the zero trust architecture and greatly reduce your risks of an attack.

9

Prioritize Assets and Evaluate Traffic

With the use of inventory tools and IOC lists, an organization can identify its most valuable assets or segments. This full picture offers staff a look into how an attacker could infiltrate your network and gives needed visibility into traffic flows. This gives your team clear guidelines as to what segments need added protection or restrictions.

10

Microsegmentation

Microsegmentation is the ultimate solution to stopping lateral movement. By implementing strict policies at the application level, segmentation gateways and NGFWs can prevent ransomware from reaching what’s most important.

11

Adaptive Monitoring and Tagging

Once your micro-perimeters surround your most sensitive segments, there’s a need for ongoing monitoring and adaptive technology.  This includes active tagging of workloads, threat hunting, and virus assessments, and consistent evaluation of traffic for mission-critical applications, data, or services.

12

Utilize a CASB

A cloud access security broker (CASB) can help manage policy enforcement for your organization’s cloud infrastructure. CASBs provide added visibility, compliance, data security, and threat protection in securing your data.

13

Rapid Response Testing

In the event of a successful breach, your team must be ready to restore systems and data recovery. This includes pre-assigning roles and ensuring a plan is in place.

14

Sandbox Testing

A common way for security analysts to test new or unrecognized files is in a sandbox. Sandboxes provide a safe environment, disconnected from the greater network, for detonating and analyzing the file.

15

Update Anti-Ransomware Software

As noted, consistent updating of network software is critical. This is especially true for your existing intrusion detection and prevention system (IDPS), antivirus, and anti-malware tools.

16

Offline Backups

While virtual backups are great, if you’re not also storing data backups offline, you’re at risk of losing that data. This means regular backups, multiple copies saved, and monitoring to ensure backups stay true to the original. Restoring data after an attack is often your best recovery path.

17

Update Email Gateway

All email for your network typically travels through a secure email gateway (SEG). By actively updating this server, you can screen email attachments, websites, and files for malware. The resulting visibility into the attacks trending against your organization can also help staff know what to expect.

18

Block Ads

All devices and browsers should have extensions that automatically block pop-up ads. Given how widely malicious ads are used, they pose a long-lasting threat if not blocked.

19

Bring-Your-Own-Device (BYOD) Restrictions

If you have a remote workforce or a loose policy around which devices may access the network, it might be time to tighten it. Unregulated use of new or unique devices poses an unnecessary risk to your network. Enterprise mobility management (EMM) is one solution.

20

Forensic Analysis

After any detection of ransomware, investigate its entry point and its time in the environment, and confirm that it has been fully removed from all network devices. From there, the task of ensuring it never returns begins.

Network Security Monitoring Tools


The most common network security monitoring tools include:

Argus

https://www.qosient.com/argus/

p0f

http://lcamtuf.coredump.cx/p0f3/#

Nagios

https://www.nagios.org/

Splunk

https://www.splunk.com/

OSSEC

https://www.ossec.net/

Encryption Tools

Tor

https://www.torproject.org/

KeePass

https://keepass.info/

TrueCrypt

http://truecrypt.sourceforge.net/

Web Vulnerability Scanning Tools


Burp Suite

https://portswigger.net/burp

Nikto

https://cirt.net/Nikto2

Paros Proxy

https://sourceforge.net/projects/paros/

Nmap

https://nmap.org/

Nessus

https://www.tenable.com/products/nessus/nessus-professional

Nexpose

https://www.rapid7.com/products/nexpose/

Penetration Testing


Metasploit

https://www.metasploit.com/

Kali Linux

https://www.kali.org/

Password Auditing and Packet Analysis Tools


John the Ripper

https://www.openwall.com/john/

Cain and Abel

http://www.oxid.it/cain.html

Tcpdump

http://www.tcpdump.org/

Wireshark

https://www.wireshark.org/

Network Defense Wireless Tools


Aircrack

https://www.aircrack-ng.org/

NetStumbler

http://www.netstumbler.com/downloads/

KisMAC

https://kismac-ng.org/

Network Intrusion & Detection


Snort

https://www.snort.org/

Forcepoint

https://www.forcepoint.com/product/ngfw-next-generation-firewall

GFI LanGuard

https://languard.gfi.com/

Acunetix

https://www.acunetix.com/

Ransomware attack solutions

One of the most important ways to stop ransomware is strong endpoint security: software that, when installed on endpoint devices such as phones and computers, blocks malware from infecting your systems. Just be sure that ransomware protection is included, as many traditional anti-virus products are not equipped to defend against modern ransomware attacks.

As ransomware is commonly delivered through email, email security is key to preventing it. Secure email gateway technologies filter email communications with URL defenses and attachment sandboxing to identify threats and block them from being delivered to users. This stops ransomware from arriving on endpoint devices and prevents users from inadvertently installing malicious programs onto their machines.

DNS web filtering solutions stop users from visiting dangerous websites and downloading malicious files, blocking ransomware that is spread through viruses downloaded from the Internet, including Trojan horse software. DNS filters also block malicious third-party adverts.

Isolation technologies remove threats from users entirely by running browsing activity on secure servers and displaying a safe render to users. Done well, isolation does not affect the user experience, delivering high security efficacy alongside seamless browsing.

Below are a few key practices to implement to prevent and limit the impact of ransomware:

1) Perform regular system backups

2) Segment your network

3) Conduct regular network security assessments

4) Conduct employee security training

5) Get your password security under control

6) Consider ransomware insurance

Ransomware: Infographic


Hello!

I am excited to announce a free webinar on “SQL Server optimization and Performance Tuning”

Please save the date: June 20th, 2020, at 6:00 PM IST / 7:30 AM CST / 8:30 AM EST.

Register: https://forms.gle/41iGy4UbLEHeMgDi9

Meeting link: meet.google.com/tpu-stbh-ryh

Happy Learning!

The temporary object caching feature was introduced in SQL Server 2005. It caches temporary objects (temp tables, table variables, and TVFs) across repeated calls of modules such as stored procedures, triggers, and UDFs.

In short, when a stored procedure’s execution ends, SQL Server truncates the temp table (with a few exceptions) and renames it, keeping only one IAM page and one data page. Subsequent calls reuse this structure instead of allocating new pages from scratch when the object is created again.

If a temp object is smaller than 8 MB, the truncation happens immediately after module execution ends; for larger temp objects, SQL Server performs a "deferred drop" and immediately returns control to the application.

The caching mechanism works for temp tables created by CREATE TABLE or SELECT INTO statements. Caching is not possible when there is explicit DDL on the temp table after it is created (such as ALTER TABLE #table ADD CONSTRAINT or CREATE STATISTICS on table columns) or when there is a named constraint on the temp table. Temporary tables are also not cached if they are part of dynamic SQL or an ad-hoc batch.

Statistics created using an explicit CREATE STATISTICS statement are not linked to a cached temporary object; auto-created statistics are. UPDATE STATISTICS does not prevent temp table caching.

We can track a temp table by using SELECT OBJECT_ID('tempdb.dbo.#temp'). This shows that the temp table’s object_id never changes: an internal process renames the temp table to a hexadecimal name at the end of the stored procedure. This happens even if we explicitly dropped the table.

If a stored procedure is executed concurrently, multiple separate cached objects may be created in tempdb. There is a cached temp object per execution context.
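As a quick illustration (a sketch; the procedure and table names here are arbitrary), the OBJECT_ID() check above can be wrapped in a small repro that shows the same object_id coming back on repeated calls:

```sql
-- Hypothetical demo: run in a scratch database on SQL Server 2005+.
CREATE PROCEDURE dbo.TempCacheDemo
AS
BEGIN
    CREATE TABLE #t (id INT);
    INSERT INTO #t VALUES (1);
    -- Report the object_id of this execution's cached temp table.
    SELECT OBJECT_ID('tempdb.dbo.#t') AS temp_object_id;
END;
GO

EXEC dbo.TempCacheDemo;  -- first call: pages allocated for #t
EXEC dbo.TempCacheDemo;  -- repeat calls return the same object_id,
                         -- evidence that the cached structure was reused
```

Adding a named constraint or an ALTER TABLE inside the procedure and re-running it is an easy way to observe caching being disabled.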


For a database administrator, managing log file growth is a consequential task, and most DBAs have faced this issue on a very frequent basis. Depending on coding patterns and business logic, there can be any number of causes of log file growth. A DBA may want to be notified when a defined file-growth threshold is breached: a system-generated alert should be triggered and the related team members notified via email.

To do this, we can create an alert in SSMS with the following steps:

Step 1) Right-click "Alerts" under the SQL Server Agent section and select "New Alert".

Step 2) On the General page, define the parameters as follows:

a) Set a name of your choice. I have used LogFileSize5GB because I want an alert once the log file grows beyond 5 GB.
b) Select the type "SQL Server performance condition alert".
c) In the "Performance condition alert definition" section, set:
Object = Databases
Counter = Log File(s) Size (KB) (5242880 KB = 5 GB)
Instance = the database whose log file growth needs to be monitored; here I selected my test DB, VirendraTest

Step 3) On the Response page, define who gets notified.
Note: We need to create operator(s) specifying to whom we want to send the emails.

Step 4) On the Options page, define any additional parameters.

Step 5) Click OK. The alert is now configured, and whenever the log file size exceeds 5 GB, the respective team members will be notified.
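The same alert can also be scripted instead of clicking through SSMS. As a sketch (the operator name DBA_Team is an assumption; create your own with sp_add_operator first), the msdb stored procedures sp_add_alert and sp_add_notification do the equivalent of the steps above:

```sql
USE msdb;
GO
-- Performance condition format: Object|Counter|Instance|Comparator|Value
-- 5242880 KB = 5 GB, matching the SSMS example above.
EXEC dbo.sp_add_alert
    @name                  = N'LogFileSize5GB',
    @performance_condition = N'Databases|Log File(s) Size (KB)|VirendraTest|>|5242880';

-- Route the alert to an existing operator by email.
EXEC dbo.sp_add_notification
    @alert_name          = N'LogFileSize5GB',
    @operator_name       = N'DBA_Team',   -- hypothetical operator
    @notification_method = 1;             -- 1 = email
```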

Today my dev team was looking for a way to access a development server under an alternate name, because the application’s connection string used the server name and the team did not want to change the application configuration file. SQL Server Configuration Manager has a feature named "Alias": simply an alternative name for the server. An alias can also make a server easier to remember; instead of a complicated name like DESKTOP-0P6BOHT (for example, my laptop’s name), we can simplify it to something like Virendra. The same approach applies to any server move or migration: once an alias has been created and all relevant system objects reference it, renaming or moving the server becomes much less tedious, as only the alias needs to be updated to point at the new server name, which can save huge amounts of time.

To configure an alias, I am using my PC, whose name is "DESKTOP-0P6BOHT".

Step 1) Open SQL Server Configuration Manager and select "Aliases" under SQL Native Client Configuration.

Step 2) A new dialog box, "Alias – New", will open.


Set the alias name as you like (for example, Virendra), the server’s SQL port number (the default, 1433, in this case), and the server name as per your system (here, my PC name).

Click Apply/OK.

Great, now we should be able to connect to the SQL Server instance using the newly created alias. To check this, launch SQL Server Management Studio, enter the alias name as the server, and try to connect.


Good news: you are connected to your server. This approach works well if you are testing on the same machine, but if you want to access the server remotely, you also need to add the alias name to your organization’s DNS server.
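One way to confirm what the alias actually points at (a sketch, reusing the example alias above) is to connect using the alias and ask the instance for its real identity:

```sql
-- Connect in SSMS with "Virendra" as the server name, then run:
SELECT @@SERVERNAME                  AS server_name,   -- still reports DESKTOP-0P6BOHT
       SERVERPROPERTY('MachineName') AS machine_name;  -- the underlying Windows host
```

Because an alias is purely a client-side redirection, the server’s own name is unchanged; this is also why remote clients need the alias (or a DNS entry) configured on their side.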