Understanding Data Modeling: Characteristics, Types, and Techniques
If you don't get the data right, nothing else matters. Yet the business focus on applications often overshadows the need for a well-organized database design.
Several factors can lead to a poor database design: lack of experience, a shortage of the necessary skills, tight timelines, and insufficient resources can all contribute. Attending to a few simple data modeling and design fundamentals leads to the right path. Here, I try to explain a few common database design "sins" that can be easily avoided, and ways to correct them in future projects.

1) Poor or missing documentation for databases in production: Documentation for databases usually falls into three categories: incomplete, inaccurate, or none at all. This leaves developers, DBAs, architects, and business analysts scrambling to get on the same page, left to their own imagination to interpret the meaning and usage of the data. The best approach is to place the data models in a central repository and generate automated reports, so that everyone benefits with minimal effort. Producing a central store of models is only half the battle, though. Once that is done, running validation and quality metrics will improve the quality of the models over time, help with data management, and extend what metadata is captured in the models.
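One concrete way to keep documentation with the schema itself, where automated reports can harvest it, is SQL Server's extended properties. A minimal sketch, using a hypothetical dbo.Customer table and made-up descriptions:

    -- Attach a description to a table and to one of its columns.
    EXEC sys.sp_addextendedproperty
         @name = N'MS_Description',
         @value = N'Master list of customers; one row per legal entity.',
         @level0type = N'SCHEMA', @level0name = N'dbo',
         @level1type = N'TABLE',  @level1name = N'Customer';

    EXEC sys.sp_addextendedproperty
         @name = N'MS_Description',
         @value = N'ISO 3166-1 alpha-2 country code of the billing address.',
         @level0type = N'SCHEMA', @level0name = N'dbo',
         @level1type = N'TABLE',  @level1name = N'Customer',
         @level2type = N'COLUMN', @level2name = N'CountryCode';

    -- Harvest the descriptions for an automated data dictionary report.
    SELECT objname, value
    FROM   fn_listextendedproperty(N'MS_Description',
               N'SCHEMA', N'dbo', N'TABLE', N'Customer', N'COLUMN', NULL);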

2) Inadequate or no normalization:
Sometimes you need to denormalize a database structure to achieve optimal performance, but sacrificing flexibility will paint you into a corner. Despite a long-held belief among developers, one table to store everything is rarely optimal. Another common mistake is repeating values stored in a table, which greatly decreases flexibility and makes updating the data harder. Understanding even the basics of normalization adds flexibility to a design while reducing redundant data.
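To sketch the repeated-values point: one phone number per column caps how many numbers a customer can have and forces updates to hunt across Phone1/Phone2/Phone3, while a child table holds one value per row (all names illustrative, assuming a dbo.Customer table exists):

    -- Anti-pattern: repeating group baked into the customer table.
    -- CREATE TABLE dbo.Customer (CustomerID int PRIMARY KEY,
    --                            Phone1 varchar(20), Phone2 varchar(20), Phone3 varchar(20));

    -- Normalized alternative: one row per phone number, no artificial limit.
    CREATE TABLE dbo.CustomerPhone
    (
        CustomerID int         NOT NULL,
        PhoneType  varchar(10) NOT NULL,  -- e.g. 'Home', 'Work', 'Mobile'
        PhoneNo    varchar(20) NOT NULL,
        CONSTRAINT PK_CustomerPhone PRIMARY KEY (CustomerID, PhoneType),
        CONSTRAINT FK_CustomerPhone_Customer
            FOREIGN KEY (CustomerID) REFERENCES dbo.Customer (CustomerID)
    );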

3) Inappropriate data modeling:
There are numerous examples of customers performing modeling upfront, but once the design is in production, all modeling ceases. To maintain flexibility and ensure consistency when the database changes, those modifications need to find their way back into the model.

4) Improper storage of reference data:
There are two main problems with reference data: it is either stored in many places or, even worse, embedded in the application code. Reference values provide valuable documentation which should live in an appropriate location, often the model. The key is to define it in one place and use it everywhere else.
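A sketch of the define-once idea, using an illustrative order-status lookup (names hypothetical, assuming a dbo.[Order] table with a StatusCode column): the status codes live in exactly one table, and every consumer references it.

    CREATE TABLE dbo.OrderStatus
    (
        StatusCode  char(3)     NOT NULL CONSTRAINT PK_OrderStatus PRIMARY KEY,
        Description varchar(50) NOT NULL
    );

    INSERT INTO dbo.OrderStatus (StatusCode, Description)
    VALUES ('NEW', 'Newly created'), ('SHP', 'Shipped'), ('CAN', 'Cancelled');

    -- Any table that stores a status points at the single source of truth.
    ALTER TABLE dbo.[Order]
        ADD CONSTRAINT FK_Order_OrderStatus
        FOREIGN KEY (StatusCode) REFERENCES dbo.OrderStatus (StatusCode);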

5) Ignoring foreign key or check constraints: When reverse engineering databases, end-users complain all the time about the lack of referential integrity (RI) or validation checks defined in the database. For older database systems, it was thought that foreign keys and check constraints slowed performance, so RI and checks were done in the application. If data can be validated in the database, it should be done there. Error handling will be drastically simplified, and data quality will increase as a result.
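For example, a check constraint and a foreign key push the validation into the database itself (illustrative names, reusing the hypothetical dbo.Customer and dbo.[Order] tables from the earlier sketches):

    -- Reject impossible quantities at the door.
    ALTER TABLE dbo.[Order]
        ADD CONSTRAINT CK_Order_Quantity CHECK (Quantity > 0);

    -- Reject orders for customers that do not exist.
    ALTER TABLE dbo.[Order]
        ADD CONSTRAINT FK_Order_Customer
        FOREIGN KEY (CustomerID) REFERENCES dbo.Customer (CustomerID);

An INSERT that violates either rule now fails immediately with a clear error, instead of silently storing bad data for the application to untangle later.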

6) Avoiding use of domains and naming standards:
Domains and naming standards are probably the two most important things that can be incorporated into modeling practices. Domains allow the creation of reusable attributes, so that the same attribute is not created in different places with different properties. Naming standards let you identify those attributes clearly and consistently.
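In SQL Server, alias types can serve as a lightweight stand-in for modeling-tool domains: the attribute's type is defined once and reused everywhere. A sketch with a hypothetical e-mail "domain":

    -- Define the reusable attribute type once.
    CREATE TYPE dbo.EmailAddress FROM varchar(254) NOT NULL;

    -- Every table storing an e-mail address now gets identical properties.
    CREATE TABLE dbo.Supplier
    (
        SupplierID int NOT NULL PRIMARY KEY,
        Email      dbo.EmailAddress
    );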

7) Inappropriate primary key: The simplest principle to remember when picking a primary key is SUM: Static, Unique, Minimal. There is no need to settle the whole natural vs. surrogate key debate here; what matters is that although a surrogate key may uniquely identify the record, it does not always uniquely identify the data. There is a time and a place for both, and you can always create an alternate key on the natural key if a surrogate is used as the primary key.
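A sketch of the surrogate-plus-alternate-key pattern (illustrative names): the surrogate identifies the record, and a unique constraint on the natural key keeps the data itself unique.

    CREATE TABLE dbo.Product
    (
        ProductID  int IDENTITY(1,1) NOT NULL CONSTRAINT PK_Product PRIMARY KEY,
        ProductSKU varchar(20)       NOT NULL,          -- natural key
        CONSTRAINT UQ_Product_SKU UNIQUE (ProductSKU)   -- alternate key keeps data unique
    );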

8) Using a composite key: A composite primary key is a primary key that contains two or more columns. The primary key serves the single purpose of uniquely identifying the row within the system, and as a result it is used in other tables as a foreign key. Using a composite primary key means other tables must add two or three columns just to link back to this table, which is neither as easy nor as efficient as using a single column.
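To see that cost, a sketch of a composite-key parent and a child table that must repeat both key columns in its foreign key (illustrative names):

    CREATE TABLE dbo.OrderLine
    (
        OrderID int NOT NULL,
        LineNo  int NOT NULL,
        CONSTRAINT PK_OrderLine PRIMARY KEY (OrderID, LineNo)
    );

    -- Every referencing table must carry, and join on, both columns.
    CREATE TABLE dbo.ShipmentLine
    (
        ShipmentID int NOT NULL,
        OrderID    int NOT NULL,
        LineNo     int NOT NULL,
        CONSTRAINT FK_ShipmentLine_OrderLine
            FOREIGN KEY (OrderID, LineNo) REFERENCES dbo.OrderLine (OrderID, LineNo)
    );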

9) Poor indexing: Indexes are database objects that allow certain queries to run more efficiently. They are not a silver bullet for performance, and they don't solve every issue. Three mistakes are commonly made with indexes (a small index sketch follows the list):

a) No indexes at all: Without indexes, as the table grows, queries will likely get slower and slower.

b) Too many indexes: Having too many indexes can also be a problem. Indexes help with reading data from a table, but slow down DML operations (INSERT/UPDATE/DELETE).

c) Indexes on every column: It might be tempting to index every field of the table, but doing so can slow down the database.
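As referenced above, a sketch of a deliberate, selective index, rather than none, too many, or one per column (reusing the hypothetical dbo.[Order] table):

    -- Supports the frequent "orders for a customer" query without indexing every column.
    CREATE NONCLUSTERED INDEX IX_Order_CustomerID
        ON dbo.[Order] (CustomerID)
        INCLUDE (OrderDate, StatusCode);  -- covering columns avoid extra lookups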

10) Incorrect data types: Often stemming from incomplete knowledge of the business flows, storing data in the wrong format causes issues both when storing the data and when retrieving it later.
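A classic example is dates stored as text, sketched below (hypothetical column; the ALTER assumes all existing values convert cleanly):

    -- Anti-pattern: WHERE OrderDateText > '12/31/2020' compares strings, not dates,
    -- so results depend on the text format and no real date logic applies.

    -- Fix the column type once, then let the engine do real date comparisons.
    ALTER TABLE dbo.[Order] ALTER COLUMN OrderDate date NOT NULL;

    SELECT OrderID FROM dbo.[Order]
    WHERE  OrderDate >= '2021-01-01';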

An immense volume of data flows into and out of today's businesses, and it becomes ever more complex to turn this data into actionable insights. Data science, meanwhile, offers an incredible opportunity for all types of businesses to build models that surface trends and serve as the foundation for transformative software, from locating IoT devices to predictive analytics. These models are used to improve customer experience, processing efficiency, and user engagement, and to spot conditions where data can crack difficult problems. The market for data science services is growing at tremendous speed, and it plays a vital role in helping businesses transform digitally, as many companies look to unlock the strength of their business data without having the required proficiency and support in-house.

Digital transformation is the all-embracing transformation of the activities an organization controls, undertaken to leverage the opportunities produced by digital technologies and data. It touches the ubiquitous era of digitalization regardless of the size and worth of the industry:

  • It reflects digital trends in operations and policies that fundamentally change how businesses run and serve customers.
  • It depends on organizational data to achieve targets more efficiently and deliver value to customers; how, we will see in the next section.

The components most likely to transform are business models, operations, infrastructure, culture, and both quantitative and qualitative ways of searching for new sources of customer value. No wonder digital transformation covers every domain of business: product innovation, operations, finance, retail and marketing strategies, customer service, and more. Digitalization not only speeds up business processes and performance but also creates business opportunities; it helps a business outpace digital disruption and secures its position in a fast-growing environment. Consider the situation where an individual wants to recognize:

  • which sections need to be transformed,
  • how to reduce the risk factors,
  • how to remove unwanted pitfalls from resources.

Most industries have chosen data-driven approaches to digitally transform their businesses; in fact, various big data technologies are available to support exactly that. In short, companies are using data science and associated technologies to make the environment fully digital, and BI to gather, compute, and interrogate their business data so it can be turned into actionable insights. Recent surveys show that more and more organizations are embracing data science as a service to reach a large pool of data experts and enhance their decision-making. These experts can shape digital strategies and plans, whether to increase revenue, reduce costs, or improve efficiency.

Below are several ways data science, delivered as a service, adds value to a business.

Authorizing decision-making via a data-driven approach – Like data science, digital transformation is a complex process: customer data combined with the right business operations can be leveraged to make informed decisions while limiting unwanted risks. With data science capabilities, we can find out how to transform the business digitally and which areas of the business need to transform.

Classifying warnings, opportunities, and scope via data insights – The volume of available information and insight grows with the volume of data, which in turn creates opportunities, and hence room to grow, for the business as well as the individual. Data science services help organizations cope with the shortage of data experts and give a detailed picture of their business environment. Data science enables next-generation outcomes: predicting what is going to happen and how to protect against risks, if any. It gives organizations real-time visibility into their customers and supports decisions that optimize internal processes for greater activity, expanded flexibility, and reduced cost.

Adding more value with machine learning – As a major part of the data science ecosystem, machine learning can stimulate digital transformation effectively in bioinformatics and other industries. It helps break down massive data sets to identify trends and exceptions. One impactful approach is artificial intelligence, which uses machine learning algorithms to deliver insights, build predictive models, and anticipate where disruptions may occur.



AI has a whole host of practical uses, not only in the fintech industry but in the wider finance world, and even the wider world beyond that. The general gist of AI is that it solves problems and allows companies to save both time and money. According to several research predictions, AI technology will allow financial institutions to reduce their operational costs by 22–25% by 2030. Adopting AI lets the industry create a better environment for the customer, providing better customer service across a variety of business activities.
In many instances, the practical use of AI has to do with data: enabling companies to analyze that data in an efficient, strategic way. Organizations, particularly financial institutions, often have streams of data on their consumers but rarely do much with it, given the time it would take to go through and analyze it to find anything meaningful. This is where artificial intelligence comes in: AI and machine learning are very effective at analyzing large amounts of data in real time, then drawing conclusions or recommending actions from it.
One example of applying AI to data is banks deciding whether someone is creditworthy. Banks and other financial institutions want to offer credit to their customers, but they want to price it accordingly; they don't want to overcharge trustworthy customers or undercharge customers that may be more of a risk. Traditionally, to determine someone's creditworthiness you would look at their credit scores and the credit bureau data kept by agencies like Experian. By utilizing AI, these institutions can instead look at the customer data they already hold and draw conclusions from there. From these large portfolios of consumer data, AI can infer many kinds of relationships; details like your job, where you live, or where you work are the more obvious sources.
Another way AI's data analysis can be used is fraud detection and prevention. AI and machine learning solutions can react to the data they are presented in real time, finding patterns and relationships and even recognizing fraudulent activity. As we can imagine, this is hugely beneficial to the financial world, where an unbelievable number of digital transactions take place every hour, making heightened cybersecurity and reliable fraud detection a necessity. AI takes the brunt of the work away from fraud analysts, allowing them to focus on higher-level cases while the AI ticks along in the background identifying the smaller issues. One example of how AI detects fraud is anomaly detection: going back to our banking scenario, perhaps a person has tried to apply for 10 identical loans in 5 minutes; the AI would detect this as an anomaly and flag it as suspicious. The machine has a baseline sense of what is "normal", and when something deviates from that, it can identify and review it.
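To make the banking scenario concrete, here is a deliberately simple, rules-based sketch of that anomaly check in SQL; a production system would learn its baseline with machine learning rather than hard-code it, and the table and column names here are hypothetical:

    -- Flag customers with 10 or more loan applications inside any 5-minute bucket.
    SELECT  CustomerID,
            DATEADD(minute, DATEDIFF(minute, 0, AppliedAt) / 5 * 5, 0) AS FiveMinuteWindow,
            COUNT(*) AS Applications
    FROM    dbo.LoanApplication
    GROUP BY CustomerID,
            DATEADD(minute, DATEDIFF(minute, 0, AppliedAt) / 5 * 5, 0)
    HAVING  COUNT(*) >= 10;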
Other use cases of AI include automated customer support. We are all used to seeing chat boxes pop up at the bottom of our screens when browsing the internet, and these are of course AI bots primed and ready to help out. Companies can simply load up their most commonly asked questions and tell the bot what answers to give, also instructing it to refer the customer elsewhere for more complex issues. Being able to answer frequently asked questions about the company or its product/service gives a better experience for the customer, who gets an answer straight away, while saving the company time and money: no one needs to sit and type responses, and workers can direct their attention elsewhere.

Ransomware is malicious software that allows a hacker to restrict access to an individual's or company's critical information in some way and then demand some form of payment to lift the restriction. The most common form of restriction today is encryption of important data on the computer or network, which essentially lets the attacker hold user data or a system hostage. The ransom message may look like "!!! IMPORTANT INFORMATION!!! All of your files are encrypted with RSA-2048 and AES-128 ciphers.", or we might see a readme.txt stating, "Your files have been replaced by these encrypted containers and aren't accessible; you will lose your files on [a Date value] unless you pay $2000 in Bitcoin." The malware is installed covertly on a system and executes a cryptovirology attack (cryptovirology is the field that studies how to use cryptography to design powerful malicious software) that locks or encrypts valuable files on the systems or networks. Without a comprehensive network segmentation or micro-segmentation policy, malicious actors can also move laterally within the organization's network, infect endpoints and servers, and demand a ransom for access to the valuable data.


The Current State of Ransomware

The infographic below presents eye-opening facts that demonstrate the danger behind this cyber threat.

Ransomware stats and numbers for 2021

Types of Ransomware

Ransomware attacks can be deployed in different forms. Some variants may be more harmful than others, but they all have one thing in common: a RANSOM.

1) Crypto malware

2) Lockers

3) Scareware

4) Doxware

5) RaaS

6) Mac ransomware – KeRanger

7) Ransomware on mobile devices

Preventing Ransomware

There are many steps organizations can take to prevent ransomware, with varying degrees of effectiveness. Below are a few actions you can take to reduce the risk of a ransomware attack:

 

1) Staff Awareness: Raising awareness about ransomware is a baseline security measure, but it only takes one employee lowering their guard for an organization to be compromised. Because training sessions cannot cover every potential attack, added layers of security are all the more imperative.

2) Spam Filter: Cybercriminals send millions of malicious emails to organizations and users at random, but an effective spam filter that continually adapts alongside a cloud-based threat intelligence center can prevent more than 99% of these from ever reaching employees' desktops.

3) Configure Desktop Extensions: Employees should be trained not to double-click executable files with a .exe extension. However, Windows hides file extensions by default, allowing a malicious executable such as "evil.doc.exe" to appear to be a Word document called "evil.doc". Ensuring that extensions are always displayed goes a long way toward countering that kind of threat.

4) Block Executables: Filtering files with a .exe extension out of email can prevent some malicious files from being delivered to employees, but bear in mind that this isn't foolproof: malicious emails can instruct employees to rename files, and ransomware is also increasingly delivered as JavaScript files (see below).

5) Block Malicious JavaScript Files: Ransomware is commonly delivered in .zip files containing malicious JavaScript files. These are disguised as text files with names like "readme.txt.js", often visible only as "readme.txt" with a script icon rather than a text-file icon. You can close this vulnerability for staff by disabling Windows Script Host.

6) Restrict Use of Elevated Privileges: Ransomware can only encrypt files that are accessible to the compromised user on their system, unless it includes code that elevates the user's privileges as part of the attack, which is where patching and zero trust come into play.

7) Promptly Patch Software: Keeping all software updated with the latest security patches is a basic precaution, but it bears repeating because breaches continue to occur through delayed updates. In 2020, for example, prompt patching could have spared organizations much of the fallout around the SolarWinds hack.

8) Zero Trust: Moving toward zero trust offers visibility and control over your network, including stopping ransomware. The next three actions, prioritizing assets and evaluating traffic, microsegmentation, and adaptive monitoring, are central steps of a zero trust architecture and greatly reduce the risk of an attack.

9) Prioritize Assets and Evaluate Traffic: With inventory tools and IOC lists, an organization can identify its most valuable assets and segments. This full picture shows staff how an attacker could infiltrate the network, gives needed visibility into traffic flows, and yields clear guidelines as to which segments need added protection or restrictions.

10) Microsegmentation: Microsegmentation is among the strongest defenses against lateral movement. By implementing strict policies at the application level, segmentation gateways and NGFWs can prevent ransomware from reaching what's most important.

11) Adaptive Monitoring and Tagging: Once micro-perimeters surround your most sensitive segments, ongoing monitoring and adaptive technology are needed. This includes active tagging of workloads, threat hunting, virus assessments, and consistent evaluation of traffic for mission-critical applications, data, and services.

12) Utilize a CASB: A cloud access security broker (CASB) can help manage policy enforcement for your organization's cloud infrastructure. CASBs provide added visibility, compliance, data security, and threat protection in securing your data.

13) Rapid Response Testing: In the event of a successful breach, your team must be ready to restore systems and recover data. This includes pre-assigning roles and ensuring a plan is in place.

14) Sandbox Testing: A common method for security analysts to test new or unrecognized files is a sandbox: a safe environment, disconnected from the greater network, in which to examine the file.

15) Update Anti-Ransomware Software: As noted, consistent updating of network software is critical. This is especially true for your existing intrusion detection and prevention system (IDPS), antivirus, and anti-malware.

16) Offline Backups: Virtual backups are great, but if you're not also storing backups offline, you risk losing that data. This means regular backups, multiple copies saved, and monitoring to ensure backups hold true to the original. Restoring data after an attack is often your best approach (a backup sketch follows this list).

17) Update Email Gateway: All email for your network typically travels through a secure web gateway (SWG). By actively updating this server, you can monitor email attachments, websites, and files for malware. The resulting visibility into the attacks trending against your organization helps staff know what to expect.

18) Block Ads: All devices and browsers should have extensions that automatically block pop-up ads. Given how extensively the internet is used, malicious ads pose a persistent threat if not blocked.

19) Bring-Your-Own-Device (BYOD) Restrictions: If you have remote staff, or just a loose policy on which devices may access the network, it might be time to tighten it. Unregulated use of new or unique devices poses an unnecessary risk to your network. Enterprise mobility management (EMM) is one solution.

20) Forensic Analysis: After any detection of ransomware, investigate its entry point and time in the environment, and confirm that it has been fully removed from all network devices. From there, the task of ensuring it never returns begins.
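On the SQL Server side, the offline-backup advice above can be as simple as a periodically verified, copy-only backup whose target is then moved off the network. A minimal sketch (database name and path are illustrative):

    -- Copy-only backup to a location that is afterwards taken offline/offsite.
    BACKUP DATABASE VirendraTest
    TO DISK = N'E:\OfflineVault\VirendraTest_full.bak'
    WITH COPY_ONLY, CHECKSUM, COMPRESSION, INIT;

    -- Verify the backup is restorable before trusting it.
    RESTORE VERIFYONLY
    FROM DISK = N'E:\OfflineVault\VirendraTest_full.bak'
    WITH CHECKSUM;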

Network Security Monitoring Tools


The most common network security monitoring tools are as follows:

  • Argus – https://www.qosient.com/argus/
  • p0f – http://lcamtuf.coredump.cx/p0f3/#
  • Nagios – https://www.nagios.org/
  • Splunk – https://www.splunk.com/
  • OSSEC – https://www.ossec.net/

Encryption Tools

  • Tor – https://www.torproject.org/
  • KeePass – https://keepass.info/
  • TrueCrypt – http://truecrypt.sourceforge.net/

Web Vulnerability Scanning Tools


  • Burp Suite – https://portswigger.net/burp
  • Nikto – https://cirt.net/Nikto2
  • Paros Proxy – https://sourceforge.net/projects/paros/
  • Nmap – https://nmap.org/
  • Nessus – https://www.tenable.com/products/nessus/nessus-professional
  • Nexpose – https://www.rapid7.com/products/nexpose/

Penetration Testing


  • Metasploit – https://www.metasploit.com/
  • Kali Linux – https://www.kali.org/

Password Auditing and Packet Sniffing Tools


  • John the Ripper – https://www.openwall.com/john/
  • Cain and Abel – http://www.oxid.it/cain.html
  • Tcpdump – http://www.tcpdump.org/
  • Wireshark – https://www.wireshark.org/

Wireless Network Defense Tools


  • Aircrack-ng – https://www.aircrack-ng.org/
  • NetStumbler – http://www.netstumbler.com/downloads/
  • KisMAC – https://kismac-ng.org/

Network Intrusion & Detection


  • Snort – https://www.snort.org/
  • Forcepoint – https://www.forcepoint.com/product/ngfw-next-generation-firewall
  • GFI LanGuard – https://languard.gfi.com/
  • Acunetix – https://www.acunetix.com/

Ransomware Attack Solutions

One of the most important ways to stop ransomware is strong endpoint security: a program installed on endpoint devices such as phones and computers that blocks malware from infecting your systems. Just be sure ransomware protection is included, as many traditional anti-virus products are not equipped to defend against modern ransomware attacks. Because ransomware is commonly delivered through email, email security is also key. Secure email gateway technologies filter email communications with URL defenses and attachment sandboxing to identify threats and block them from being delivered to users; this stops ransomware from arriving on endpoint devices and keeps users from inadvertently installing malicious programs on their machines. DNS web filtering solutions stop users from visiting dangerous websites and downloading malicious files, blocking ransomware spread through downloads from the Internet, including Trojan horse software; DNS filters also block malicious third-party adverts. Isolation technologies remove threats from users entirely by isolating browsing activity in secure servers and displaying a safe render to users; moreover, isolation does not affect the user experience, delivering high security efficacy and seamless browsing. Below are a few key practices to implement to prevent and limit the impact of ransomware:

1) Perform regular system backups

2) Segment your network

3) Conduct regular network security assessments

4) Conduct employee security training

5) Get your password security under control

6) Take out ransomware insurance

Ransomware: Infographic

Hello !

I am excited to announce a free webinar on "SQL Server Optimization and Performance Tuning".

Please save the date: June 20th, 2020, at 6:00 PM IST / 7:30 AM CST / 8:30 AM EST.

Register: https://forms.gle/41iGy4UbLEHeMgDi9

Meeting link: meet.google.com/tpu-stbh-ryh

Happy Learning!

Temporary object caching was introduced in SQL Server 2005. This functionality caches temporary objects (temp tables, table variables, and TVFs) across repeated calls of modules such as stored procedures, triggers, and UDFs.

In short, when a stored procedure finishes executing, SQL Server truncates the temp table (with a few exceptions) and renames it, keeping only one IAM page and one data page. Subsequent calls reuse this structure instead of allocating new pages from scratch each time the object is created.

If the temp objects are smaller than 8 MB, the truncation happens immediately after module execution ends. For larger temp objects, SQL Server performs a "deferred drop" and immediately returns control to the application.

The caching mechanism works for temp tables that are created using CREATE TABLE or SELECT INTO statements. Caching is not possible when there is explicit DDL on a temp table after it is created (such as ALTER #table ADD CONSTRAINT or CREATE STATISTICS on table columns) or when there is a named constraint on the temp table. Temporary tables are also not cached if they are part of dynamic SQL or an ad-hoc batch.
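As a sketch of the difference (names illustrative), the first procedure's temp table is eligible for caching, while the second's is not because of the post-create DDL and named constraint:

    CREATE PROCEDURE dbo.USP_CacheFriendly
    AS
    BEGIN
        -- Inline, system-named constraint: caching still possible.
        CREATE TABLE #t (id int PRIMARY KEY);
        -- ... work ...
    END
    GO

    CREATE PROCEDURE dbo.USP_NotCached
    AS
    BEGIN
        CREATE TABLE #t (id int);
        -- Explicit DDL after creation plus a named constraint: no caching.
        ALTER TABLE #t ADD CONSTRAINT PK_t PRIMARY KEY (id);
        -- ... work ...
    END
    GO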

Statistics created using an explicit CREATE STATISTICS statement are not linked to a cached temporary object; auto-created statistics are. UPDATE STATISTICS does not prevent temp table caching.

We can track the temp table by using SELECT OBJECT_ID('tempdb.dbo.#temp'); this shows that the temp table's object_id never changes – an internal process renames the temp table to a hexadecimal form at the end of the stored procedure. This happens even if we explicitly drop the table.
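A quick way to observe this, as a sketch:

    CREATE PROCEDURE dbo.USP_TempDemo
    AS
    BEGIN
        CREATE TABLE #temp (id int);
        SELECT OBJECT_ID('tempdb.dbo.#temp') AS TempObjectId;
    END
    GO

    EXEC dbo.USP_TempDemo;  -- note the id returned
    EXEC dbo.USP_TempDemo;  -- same id: the cached structure was reused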

If a stored procedure is executed concurrently, multiple separate cached objects may be created in tempdb. There is a cached temp object per execution context.


For a database administrator, managing log file growth is a consequential task, and I suspect every DBA has faced this issue frequently. Depending on coding patterns and business logic, there may be any number of causes of log file growth. Now suppose a DBA wants to be notified when a defined file-growth threshold is breached: a system-generated alert should fire, and the related team members should be notified via email.

To do this, we can create an alert in SSMS for the desired notification, using the steps below.

Step 1) Right-click on "Alerts" under the SQL Server Agent section and select "New Alert".

Step 2) On the General page, define the parameters as below:

a) Define a name as per your convenience. I used LogFileSize5GB, because I want an alert when the log file grows beyond 5 GB.
b) Select Type as "SQL Server performance condition alert".
c) Set the parameters in the performance condition alert definition section as:
Object = Databases
Counter = Log File(s) Size (KB) – 5242880 KB = 5 GB
Instance = the database whose log file growth needs to be monitored; in this case I selected my test DB, VirendraTest.

Step 3) On the Response page, define the parameters as below.
Note: we need to create operator(s) specifying to whom the emails should be sent.

Step 4) On the Options page, define the parameters as below.

Step 5) Click OK. The alert is now configured; whenever the log file size grows beyond 5 GB, the respective team members will be notified.
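The same alert can also be created by script rather than by clicking through SSMS. A sketch, assuming a default instance (a named instance would use the MSSQL$InstanceName:Databases performance object) and an existing operator named DBA_Team:

    EXEC msdb.dbo.sp_add_alert
         @name = N'LogFileSize5GB',
         @performance_condition = N'SQLServer:Databases|Log File(s) Size (KB)|VirendraTest|>|5242880';

    EXEC msdb.dbo.sp_add_notification
         @alert_name          = N'LogFileSize5GB',
         @operator_name       = N'DBA_Team',  -- hypothetical operator
         @notification_method = 1;            -- 1 = e-mail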

Today, my dev team was looking for a way to access a development server by an alternate name instead of the server's original name: the application's connection string used the server name, and the team did not want to change the application configuration file. SQL Server Configuration Manager has a feature named "Alias" – simply an alternative name given to the server. An alias can also be easier for users to remember than a complicated machine name like DESKTOP-0P6BOHT (my laptop's name, for example), which we can shorten to something like Virendra. The same approach helps with any server move or migration: once an alias has been created and all relevant system objects reference it, renaming or moving the server becomes far less tedious, because only the alias needs to be updated to point at the new server name – a huge time saver.

To configure an alias, I am using my PC, whose name is "DESKTOP-0P6BOHT".

Step 1) Open SQL Server Configuration Manager and select Aliases under SQL Native Client Configuration.

Step 2) A new dialog box, "Alias – New", will open.


Set the alias name as per your convenience (for example, Virendra), the SQL Server port number (here the default, 1433), and the server name as per your system (here, my PC's name).

Click Apply/OK.

Great – now we should be able to connect to the SQL Server instance using the newly created alias. To check, launch SQL Server Management Studio, enter the alias name as the server, and connect.
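As an additional quick check, assuming the sqlcmd utility is installed, we can connect through the alias from a command prompt:

    sqlcmd -S Virendra -E -Q "SELECT @@SERVERNAME;"

Note that @@SERVERNAME still returns the underlying server name (here DESKTOP-0P6BOHT); the alias only changes the name the client connects with.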


Good news – you are connected to your server. This approach works as-is when testing on the same machine, but to access the server remotely you need to add the alias name to your organization's DNS server.


I suspect every SQL Server DBA or developer has faced the scenario of having to copy only the database objects, without data. To do this, DBAs and developers commonly script out the source database using the Generate Scripts wizard and run that script on the target instance. Few people seem to be aware of a feature named "DAC package", available from SQL Server 2008 R2 onwards. Using a DAC package, we can back up all objects without data and restore them to any other instance. Here are the step-by-step details.

Step 1) Connect to the SQL Server instance using SSMS. In Object Explorer, right-click the database → Tasks → Extract Data-tier Application.
The Extract Data-tier Application wizard opens; click Next, and the Set Properties page appears.


Step 2) Set the DAC package file location and click Next. Review Validation and Summary, then click Next.

Step 3) Click Finish; the DAC package will be created at the specified location.

Now you can restore this DAC package wherever you want. The steps are as follows:
Step 4) Using SSMS, connect to the SQL instance where you want to restore the DAC package. Right-click Databases → Deploy Data-tier Application…; the Deploy wizard opens. Click Next.

Step 5) Browse to the DAC package and click Next. Change the database name as required, click Next, review the summary, and click Finish.

That's all – a new database restored, without data.
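For scripted environments, the SqlPackage command-line utility provides an equivalent of both wizards; a sketch with illustrative paths and server names:

    SqlPackage /Action:Extract /SourceServerName:DESKTOP-0P6BOHT /SourceDatabaseName:VirendraTest /TargetFile:C:\DAC\VirendraTest.dacpac

    SqlPackage /Action:Publish /SourceFile:C:\DAC\VirendraTest.dacpac /TargetServerName:OtherServer /TargetDatabaseName:VirendraTest_Copy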

Enjoy!

 

I have personally observed, across organisations – or even within one organisation when multiple development teams work on projects – that teams can follow ambiguous naming conventions. It is one of the best development practices, when creating objects such as stored procedures, functions, views, or tables, to use a proper prefix that identifies the object. Take stored procedures as an example: many developers start stored procedure names with "SP_", which marks the name as a system stored procedure, so SQL Server will first look for it among the system databases, and that can become a performance bottleneck. To avoid this, and to manage naming of user-created objects in general, we can enforce an object naming convention policy at the server or database level, depending on requirements, using Policy-Based Management (PBM).

Here, I take the example of enforcing a naming convention policy for stored procedures at the server level. Let's consider that we want to enforce a policy whereby all new stored procedures must be created with the prefix "USP_". The detailed steps follow.

Step 1) Open SSMS, connect to the SQL Server instance, expand Management → Policy Management, right-click Conditions, and select New Condition: "Stored Procedure Naming Conventions".

Step 2) Define the values as: Name = Stored Procedure Naming Conventions, Facet = Stored Procedure; in the Expression section, select Field = @Name, Operator = LIKE, and Value = 'USP[_]%', then click OK.
The details are as below.

The PBM condition has now been created; verify it from the Conditions tab, where it will look as below.

Step 3) Under Policy Management, right-click Policies and select New Policy.

Step 4) Set Name = Stored Procedure Naming, Check Condition = the condition created in the previous step (Stored Procedure Naming Conventions), keep Every in Against Targets, set Evaluation Mode = On Change: Prevent and Server Restriction = None (default), and click OK.

We have now created the policy for stored procedure names.

Step 5) Enable the policy.

We now have a policy restricting the stored procedure name prefix: every new stored procedure name must start with USP_, and other prefixes are rejected. In the example below I use the procedure name CustomerList, and PBM blocks it.

If I change the procedure name to USP_CustomerList, it is created successfully, as the sketch below shows.
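A sketch of the policy in action from a query window:

    -- With the policy enabled (On Change: Prevent), this CREATE is blocked and rolled back:
    CREATE PROCEDURE dbo.CustomerList AS SELECT 1;
    GO

    -- This name conforms to the convention, so it succeeds:
    CREATE PROCEDURE dbo.USP_CustomerList AS SELECT 1;
    GO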

Thanks, and keep reading.