refactor: restructure .cursor directory for improved organization and clarity (#40196)

# refactor: restructure .cursor directory for improved organization and clarity

## Description

This PR refactors the `.cursor` directory to enhance organization,
clarity, and maintainability.

### Problem

The existing `.cursor` directory lacked clear organization, making it
difficult to find specific files, understand their purpose, and add new
components consistently.

### Solution

A comprehensive restructuring:

#### New Directory Structure

```
.cursor/
├── settings.json                  # Main configuration file
├── docs/                          # Documentation
│   ├── guides/                    # In-depth guides
│   ├── references/                # Quick references
│   └── practices/                 # Best practices
├── rules/                         # Rule definitions
│   ├── commit/                    # Commit-related rules
│   ├── quality/                   # Code quality rules
│   ├── testing/                   # Testing rules
│   └── verification/              # Verification rules
└── hooks/                         # Git hooks and scripts
```

#### Key Changes

1. **Logical Categorization**: Organized files into clear categories
based on purpose
2. **Improved Documentation**: Added comprehensive README files for each
directory
3. **Standardized Naming**: Implemented consistent kebab-case naming
convention
4. **Reference Updates**: Updated all internal references to point to
new file locations

### Benefits

- **Easier Navigation**: Clear categorization makes finding files
intuitive
- **Improved Understanding**: Comprehensive documentation explains
purpose and usage
- **Simplified Maintenance**: Logical structure makes updates and
additions easier
- **Better Onboarding**: New team members can quickly understand the
system

This refactoring sets a solid foundation for all Cursor AI-related
configurations and rules, making it easier for the team to leverage
Cursor's capabilities.
This commit is contained in:

- Author: vivek-appsmith, 2025-04-11 12:04:33 +05:30, committed by GitHub
- Parent: ae66d74b87
- Commit: d176e40726
- Signature: no known key found for this signature in database (GPG Key ID: B5690EEEBB952194)
- 29 changed files with 7072 additions and 2 deletions


@@ -1,6 +1,39 @@
# Appsmith Cursor Configuration
This directory contains configuration for Cursor AI tools, rules, and guidelines for the Appsmith project.
## Directory Structure
```
.cursor/
├── settings.json                  # Main configuration file
├── docs/                          # Documentation
│   ├── guides/                    # In-depth guides
│   ├── references/                # Quick references
│   └── practices/                 # Best practices
├── rules/                         # Rule definitions
│   ├── commit/                    # Commit-related rules
│   ├── quality/                   # Code quality rules
│   ├── testing/                   # Testing rules
│   └── verification/              # Verification rules
└── hooks/                         # Git hooks and scripts
```
## Key Features
- **Commit Message Rules**: Guidelines for structured, informative commit messages
- **Code Quality Checks**: Automated validation of code quality standards
- **Testing Requirements**: Rules for test coverage and quality
- **Performance Guidelines**: Best practices for maintaining high performance
- **Documentation**: Comprehensive guides and references for the codebase
## Usage
- Use the rules in this directory to ensure consistent quality across the project
- Reference the documentation for best practices and technical details
- Hooks automate common tasks and enforce quality standards
For more information, see the specific README files in each subdirectory.
## Commit Message Rules

.cursor/docs/README.md (new file, 27 additions)

@@ -0,0 +1,27 @@
# Appsmith Documentation
This directory contains comprehensive documentation for the Appsmith project, organized by type and purpose.
## Structure
- **guides/**: Detailed, in-depth guides on specific topics
  - `performance.md`: Performance optimization best practices
  - `testing.md`: Comprehensive guide to testing in Appsmith
  - `verification.md`: Workflow for verifying changes
- **references/**: Quick reference documents for daily use
  - `codebase-map.md`: Overview of the codebase structure
  - `testing-reference.md`: Quick testing reference
  - `technical-details.md`: Technical specifications and architecture
- **practices/**: Best practices for development
  - `react-hooks.md`: Best practices for using React hooks
## Using This Documentation
- **New developers**: Start with the codebase map and technical details
- **Feature development**: Reference the testing guide and feature verification workflows
- **Bug fixing**: Consult the verification workflow and bug fix verification guidelines
- **Performance optimization**: Follow the performance optimization guide
The documentation is designed to be comprehensive but approachable. If you need more specific information, check the individual files in each directory.


@@ -0,0 +1,353 @@
# Appsmith Performance Optimization Guide
This guide outlines approaches for identifying, analyzing, and resolving performance issues in the Appsmith codebase.
## Identifying Performance Issues
### Frontend Performance Metrics
1. **Page Load Time**
- Initial page load
- Time to first paint
- Time to interactive
2. **Rendering Performance**
- Component render times
- React render cycles
- Frame rate (FPS)
3. **Network Performance**
- API request latency
- Payload sizes
- Number of requests
4. **Memory Usage**
- Heap snapshots
- Memory leaks
- DOM node count
### Backend Performance Metrics
1. **Response Times**
- API endpoint latency
- Database query performance
- Worker thread utilization
2. **Resource Utilization**
- CPU usage
- Memory consumption
- I/O operations
3. **Database Performance**
- Query execution time
- Index utilization
- Connection pool efficiency
4. **Concurrency**
- Request throughput
- Thread pool utilization
- Blocking operations
## Performance Analysis Tools
### Frontend Tools
1. **Browser DevTools**
- Performance tab
- Network tab
- Memory tab
2. **React DevTools**
- Component profiler
- Highlight updates
3. **Lighthouse**
- Performance audits
- Optimization suggestions
4. **Custom Timing**
```javascript
// Performance measurement
performance.mark('start');
// ...code to measure...
performance.mark('end');
performance.measure('operation', 'start', 'end');
console.log(performance.getEntriesByName('operation')[0].duration);
```
### Backend Tools
1. **Profilers**
- JProfiler
- VisualVM
- YourKit
2. **Logging and Metrics**
- Log execution times
- Prometheus metrics
- Grafana dashboards
3. **Load Testing**
- JMeter
- K6
- Artillery
## Common Performance Issues and Solutions
### Frontend Performance Issues
1. **Unnecessary Re-renders**
Issue:
```jsx
function Component() {
  // This creates a new object on every render
  const options = { value: 'example' };
  return <ChildComponent options={options} />;
}
```
Solution:
```jsx
function Component() {
  // Memoize object
  const options = useMemo(() => ({ value: 'example' }), []);
  return <ChildComponent options={options} />;
}
```
2. **Unoptimized List Rendering**
Issue:
```jsx
function ItemList({ items }) {
  return (
    <div>
      {items.map(item => (
        <Item data={item} />
      ))}
    </div>
  );
}
```
Solution:
```jsx
function ItemList({ items }) {
  return (
    <div>
      {items.map(item => (
        <Item key={item.id} data={item} />
      ))}
    </div>
  );
}

// Memoize the Item component
const Item = React.memo(function Item({ data }) {
  return <div>{data.name}</div>;
});
```
3. **Large Bundle Size**
Issue:
- Importing entire libraries
- Not code-splitting
Solution:
```javascript
// Before
import { map, filter, reduce } from 'lodash';
// After
import map from 'lodash/map';
import filter from 'lodash/filter';
import reduce from 'lodash/reduce';
// Code splitting with React.lazy
const HeavyComponent = React.lazy(() => import('./HeavyComponent'));
```
4. **Memory Leaks**
Issue:
```jsx
function Component() {
  useEffect(() => {
    const interval = setInterval(() => {
      // Do something
    }, 1000);
    // No cleanup
  }, []);
  return <div>Component</div>;
}
```
Solution:
```jsx
function Component() {
  useEffect(() => {
    const interval = setInterval(() => {
      // Do something
    }, 1000);
    // Cleanup
    return () => clearInterval(interval);
  }, []);
  return <div>Component</div>;
}
```
### Backend Performance Issues
1. **N+1 Query Problem**
Issue:
```java
List<Workspace> workspaces = workspaceRepository.findAll().collectList().block();
for (Workspace workspace : workspaces) {
    List<Application> apps = applicationRepository.findByWorkspaceId(workspace.getId()).collectList().block();
    workspace.setApplications(apps);
}
```
Solution:
```java
// Use join query or batch loading
List<Workspace> workspaces = workspaceRepository.findAllWithApplications().collectList().block();
```
2. **Missing Database Indexes**
Issue:
```java
// Query without proper index
Mono<User> findByEmail(String email);
```
Solution:
```java
// Add index to database
@Document(collection = "users")
public class User {

    @Indexed(unique = true)
    private String email;
    // ...
}
```
3. **Blocking Operations in Reactive Streams**
Issue:
```java
return Mono.fromCallable(() -> {
    // Blocking file I/O operation
    return Files.readAllBytes(Paths.get("path/to/file"));
});
```
Solution:
```java
return Mono.fromCallable(() -> {
    // Blocking file I/O operation
    return Files.readAllBytes(Paths.get("path/to/file"));
}).subscribeOn(Schedulers.boundedElastic());
```
4. **Inefficient Data Processing**
Issue:
```java
// Processing large amounts of data in memory
return repository.findAll()
        .collectList()
        .map(items -> {
            // Process all items at once
            return items.stream().map(this::transform).collect(Collectors.toList());
        });
```
Solution:
```java
// Stream processing with backpressure
return repository.findAll()
        .map(this::transform)
        .collectList();
```
## Performance Optimization Workflow
### Step 1: Establish Baselines
1. Identify key metrics to track
2. Measure current performance
3. Set performance goals
### Step 2: Identify Bottlenecks
1. Use profiling tools
2. Analyze critical user paths
3. Focus on high-impact areas
### Step 3: Optimize
1. Make one change at a time
2. Measure impact of each change
3. Document optimizations
### Step 4: Verify
1. Compare to baseline metrics
2. Run performance tests
3. Check for regressions
### Step 5: Monitor
1. Set up continuous performance monitoring
2. Track trends over time
3. Set up alerts for degradations
## Performance Testing Best Practices
1. **Test with realistic data volumes**
2. **Simulate actual user behavior**
3. **Test on hardware similar to production**
4. **Include performance tests in CI/CD pipeline**
5. **Test in isolation and under load**
6. **Focus on critical user journeys**
7. **Set clear performance budgets**
8. **Compare results to previous baselines**
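Item 7's performance budgets can be made concrete with a small checker that compares measured metrics against declared limits. A minimal sketch, where the metric names and thresholds are illustrative assumptions rather than Appsmith's actual budgets:

```javascript
// Sketch: compare measured metrics against a declared performance budget.
// Metric names and thresholds are illustrative, not Appsmith's real budgets.
const budget = {
  timeToInteractiveMs: 3000,
  bundleSizeKb: 500,
  apiLatencyP95Ms: 400,
};

function checkBudget(measurements, budget) {
  const violations = [];
  for (const [metric, limit] of Object.entries(budget)) {
    const measured = measurements[metric];
    // Metrics not measured in this run are skipped rather than failed
    if (measured !== undefined && measured > limit) {
      violations.push({ metric, measured, limit });
    }
  }
  return violations; // an empty array means the budget passes
}
```

Such a check can run in CI after a Lighthouse or load-test run and fail the build on any violation.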
## Performance Optimization Checklist
### Frontend Checklist
- [ ] Use React.memo for expensive components
- [ ] Implement proper keys for list items
- [ ] Memoize callbacks with useCallback
- [ ] Memoize computed values with useMemo
- [ ] Code-split large bundles
- [ ] Lazy load components and routes
- [ ] Optimize images and assets
- [ ] Minimize CSS and JS bundle sizes
- [ ] Use virtualization for large lists
- [ ] Implement proper cleanup in useEffect
- [ ] Avoid prop drilling with Context API
- [ ] Optimize Redux selectors
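The last checklist item, optimizing Redux selectors, usually means memoizing derived data so it is not recomputed on unrelated state changes. A hand-rolled sketch in the spirit of reselect's `createSelector` (the selector names here are illustrative):

```javascript
// Minimal memoized selector, in the spirit of reselect's createSelector.
// Recomputes derived data only when its input slice changes by reference.
function createMemoSelector(inputSelector, compute) {
  let lastInput;
  let lastResult;
  return function selector(state) {
    const input = inputSelector(state);
    if (input !== lastInput) {
      lastInput = input;
      lastResult = compute(input);
    }
    return lastResult;
  };
}

// Usage: derive visible todos without re-filtering on unrelated state changes.
const selectVisibleTodos = createMemoSelector(
  (state) => state.todos,
  (todos) => todos.filter((t) => !t.done)
);
```

Because the memoized result keeps a stable reference, components using the selector with `useSelector` also avoid needless re-renders.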
### Backend Checklist
- [ ] Add appropriate database indexes
- [ ] Use pagination for large result sets
- [ ] Optimize database queries
- [ ] Avoid N+1 query problem
- [ ] Use reactive programming correctly
- [ ] Handle blocking operations properly
- [ ] Implement caching where appropriate
- [ ] Optimize serialization/deserialization
- [ ] Use connection pooling
- [ ] Configure thread pools appropriately
- [ ] Monitor and optimize GC behavior


@@ -0,0 +1,513 @@
# Appsmith Testing Guide
This guide outlines best practices for writing tests for the Appsmith codebase.
## Frontend Testing
### Unit Tests with Jest
Appsmith uses Jest for frontend unit tests. Unit tests should be written for individual components, utility functions, and Redux slices.
#### Test File Structure
Create test files with the `.test.ts` or `.test.tsx` extension in the same directory as the source file:
```
src/
components/
Button/
Button.tsx
Button.test.tsx
utils/
helpers.ts
helpers.test.ts
```
#### Writing React Component Tests
```typescript
import React from "react";
import { render, screen, fireEvent } from "@testing-library/react";
import Button from "./Button";
describe("Button component", () => {
  it("renders correctly with default props", () => {
    render(<Button>Click me</Button>);
    expect(screen.getByText("Click me")).toBeInTheDocument();
  });

  it("calls onClick handler when clicked", () => {
    const handleClick = jest.fn();
    render(<Button onClick={handleClick}>Click me</Button>);
    fireEvent.click(screen.getByText("Click me"));
    expect(handleClick).toHaveBeenCalledTimes(1);
  });
});
```
#### Redux Testing
```typescript
import { configureStore } from "@reduxjs/toolkit";
import reducer, {
setUserInfo,
fetchUserInfo
} from "./userSlice";
describe("User reducer", () => {
it("should handle initial state", () => {
expect(reducer(undefined, { type: "unknown" })).toEqual({
userInfo: null,
isLoading: false,
error: null
});
});
it("should handle setUserInfo", () => {
const userInfo = { name: "Test User", email: "test@example.com" };
expect(
reducer(
{ userInfo: null, isLoading: false, error: null },
setUserInfo(userInfo)
)
).toEqual({
userInfo,
isLoading: false,
error: null
});
});
});
```
### Testing Redux/React Safety Patterns
Safely accessing deeply nested properties in Redux state is critical for application reliability. Here are patterns for testing these safety mechanisms:
#### Testing Redux Selectors with Incomplete State
```typescript
import { configureStore } from '@reduxjs/toolkit';
import reducer, { selectNestedData } from './dataSlice';
import { renderHook } from '@testing-library/react-hooks';
import { Provider } from 'react-redux';
import { useSelector } from 'react-redux';
describe("selectNestedData", () => {
it("returns default value when state is incomplete", () => {
// Set up store with incomplete state
const store = configureStore({
reducer: {
data: reducer,
},
preloadedState: {
data: {
// Missing expected nested properties
},
},
});
// Wrap the hook with the Redux provider
const wrapper = ({ children }) => (
<Provider store={store}>{children}</Provider>
);
// Render the hook with the selector
const { result } = renderHook(() => useSelector(selectNestedData), { wrapper });
// Verify the selector returns the fallback/default value
expect(result.current).toEqual(/* expected default value */);
});
it("returns actual data when state is complete", () => {
// Set up store with complete state
const expectedData = { value: "test" };
const store = configureStore({
reducer: {
data: reducer,
},
preloadedState: {
data: {
entities: {
items: {
123: {
details: expectedData,
},
},
},
},
},
});
const wrapper = ({ children }) => (
<Provider store={store}>{children}</Provider>
);
const { result } = renderHook(() => useSelector(state =>
selectNestedData(state, '123')
), { wrapper });
// Verify the selector returns the actual data
expect(result.current).toEqual(expectedData);
});
});
```
#### Testing Components with Error Boundaries
```typescript
import React from 'react';
import { render, screen } from '@testing-library/react';
import { ErrorBoundary } from 'react-error-boundary';
import ComponentWithDeepAccess from './ComponentWithDeepAccess';
describe('ComponentWithDeepAccess with error boundary', () => {
it('renders fallback UI when data is invalid', () => {
// Define invalid data that would cause property access errors
const invalidData = {
// Missing required nested structure
};
const FallbackComponent = () => <div>Error occurred</div>;
render(
<ErrorBoundary FallbackComponent={FallbackComponent}>
<ComponentWithDeepAccess data={invalidData} />
</ErrorBoundary>
);
// Verify the fallback component is rendered
expect(screen.getByText('Error occurred')).toBeInTheDocument();
});
it('renders normally with valid data', () => {
// Define valid data with complete structure
const validData = {
user: {
profile: {
name: 'Test User'
}
}
};
const FallbackComponent = () => <div>Error occurred</div>;
render(
<ErrorBoundary FallbackComponent={FallbackComponent}>
<ComponentWithDeepAccess data={validData} />
</ErrorBoundary>
);
// Verify the component renders normally
expect(screen.getByText('Test User')).toBeInTheDocument();
});
});
```
#### Testing Safe Property Access Utilities
```typescript
import { safeGet } from './propertyAccessUtils';
describe('safeGet utility', () => {
  it('returns the value when the path exists', () => {
    const obj = {
      a: {
        b: {
          c: 'value'
        }
      }
    };
    expect(safeGet(obj, 'a.b.c')).toBe('value');
  });

  it('returns default value when path does not exist', () => {
    const obj = {
      a: {}
    };
    expect(safeGet(obj, 'a.b.c', 'default')).toBe('default');
  });

  it('handles array indices in path', () => {
    const obj = {
      users: [
        { id: 1, name: 'User 1' },
        { id: 2, name: 'User 2' }
      ]
    };
    expect(safeGet(obj, 'users.1.name')).toBe('User 2');
  });

  it('handles null and undefined input', () => {
    expect(safeGet(null, 'a.b.c', 'default')).toBe('default');
    expect(safeGet(undefined, 'a.b.c', 'default')).toBe('default');
  });
});
```
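For reference, a minimal `safeGet` that would satisfy the tests above could look like the following. This is only a sketch: the actual `propertyAccessUtils` implementation (or lodash's `get`, which supports bracket-notation paths as well) may differ in details.

```javascript
// Sketch of a safeGet utility matching the tests above. The project's real
// implementation, or lodash/get, may handle additional path syntaxes.
function safeGet(obj, path, defaultValue) {
  const keys = path.split(".");
  let current = obj;
  for (const key of keys) {
    // Bail out with the default as soon as the chain breaks
    if (current === null || current === undefined) return defaultValue;
    current = current[key];
  }
  return current === undefined ? defaultValue : current;
}
```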
### Integration Tests with Cypress
Cypress is used for integration and end-to-end testing. These tests should verify the functionality of the application from a user's perspective.
#### Test File Structure
```
cypress/
integration/
Editor/
Canvas.spec.ts
PropertyPane.spec.ts
Workspace/
Applications.spec.ts
```
#### Writing Cypress Tests
```typescript
describe("Application Canvas", () => {
  before(() => {
    cy.visit("/applications/my-app/pages/page-1/edit");
  });

  it("should allow adding a widget to the canvas", () => {
    cy.get("[data-cy=entity-explorer]").should("be.visible");
    cy.get("[data-cy=widget-button]").drag("[data-cy=canvas-drop-zone]");
    cy.get("[data-cy=widget-card-button]").should("exist");
  });

  it("should open property pane when widget is selected", () => {
    cy.get("[data-cy=widget-card-button]").click();
    cy.get("[data-cy=property-pane]").should("be.visible");
    cy.get("[data-cy=property-pane-title]").should("contain", "Button");
  });
});
```
## Backend Testing
### Unit Tests with JUnit
Backend unit tests should validate individual components and services.
#### Test File Structure
```
src/test/java/com/appsmith/server/
services/
ApplicationServiceTest.java
UserServiceTest.java
controllers/
ApplicationControllerTest.java
```
#### Writing Java Unit Tests
```java
@RunWith(SpringRunner.class)
@SpringBootTest
public class ApplicationServiceTest {

    @Autowired
    private ApplicationService applicationService;

    @MockBean
    private WorkspaceService workspaceService;

    @Test
    public void testCreateApplication() {
        // Arrange
        Application application = new Application();
        application.setName("Test Application");
        Workspace workspace = new Workspace();
        workspace.setId("workspace-id");
        Mono<Workspace> workspaceMono = Mono.just(workspace);
        when(workspaceService.findById(any())).thenReturn(workspaceMono);

        // Act
        Mono<Application> result = applicationService.createApplication(application, "workspace-id");

        // Assert
        StepVerifier.create(result)
                .assertNext(app -> {
                    assertThat(app.getId()).isNotNull();
                    assertThat(app.getName()).isEqualTo("Test Application");
                    assertThat(app.getWorkspaceId()).isEqualTo("workspace-id");
                })
                .verifyComplete();
    }
}
```
### Integration Tests
Backend integration tests should verify interactions between different components of the system.
```java
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class ApplicationControllerIntegrationTest {

    @Autowired
    private WebTestClient webTestClient;

    @Autowired
    private ApplicationRepository applicationRepository;

    @Before
    public void setUp() {
        applicationRepository.deleteAll().block();
    }

    @Test
    public void testGetAllApplications() {
        // Test implementation
    }
}
```
## Best Practices
### General Test Guidelines
1. **Test Isolation**: Each test should be independent of others.
2. **Test Coverage**: Aim for 80%+ coverage for critical code paths.
3. **Avoid Implementation Details**: Test behavior, not implementation.
4. **Concise Tests**: Keep tests focused on one behavior or functionality.
5. **Descriptive Names**: Use clear test names that describe what is being tested.
### Redux/React Safety Best Practices
1. **Always Check Property Existence**: Test edge cases where properties might not exist.
2. **Use Defensive Programming**: Design components and selectors to handle incomplete data gracefully.
3. **Test Error Boundaries**: Verify that error boundaries correctly catch and handle errors from property access.
4. **Test Default Values**: Ensure selectors return appropriate defaults when data is missing.
5. **Test Different State Permutations**: Create tests with various combinations of missing or incomplete state to ensure robustness.
### Performance Considerations
1. **Mock Heavy Dependencies**: Use mocks for API calls, databases, etc.
2. **Optimize Test Speed**: Keep tests fast to encourage frequent testing.
3. **Use Focused Tests**: Test only what needs to be tested.
## Troubleshooting Tests
### Common Issues
1. **Flaky Tests**: Tests that sometimes pass and sometimes fail.
- Solution: Make tests more deterministic, avoid race conditions.
2. **Memory Leaks**: Tests that consume increasing memory.
- Solution: Clean up resources, avoid global state.
3. **Slow Tests**: Tests that take too long to run.
- Solution: Mock heavy dependencies, parallelize when possible.
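One frequent cause of flaky tests is unseeded randomness in test data. A seeded pseudo-random generator makes such data reproducible across runs; the sketch below uses mulberry32, a generic public-domain algorithm, not an Appsmith utility:

```javascript
// Deterministic pseudo-random generator (mulberry32). Fixing the seed makes
// randomly generated test data reproducible, removing one source of flakiness.
function seededRandom(seed) {
  return function () {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    // Returns a float in [0, 1), like Math.random()
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}
```

A test can then log its seed on failure, so the exact failing input can be replayed.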
### React-Specific Issues
1. **Component State Issues**: Components not updating as expected.
- Solution: Use `act()` for state updates, wait for async operations.
2. **Redux State Access Errors**: Errors when accessing nested properties.
- Solution: Use optional chaining, lodash/get, or default values in selectors.
3. **Rendering Errors**: Components not rendering as expected.
- Solution: Verify props, check for conditionals that might prevent rendering.
## Advanced Testing Techniques
### Property-Based Testing
Test with a wide range of automatically generated inputs to find edge cases.
### Snapshot Testing
Useful for detecting unintended changes in UI components.
### Visual Regression Testing
Compare screenshots of components to detect visual changes.
### Load and Performance Testing
Test system behavior under high load or stress conditions.
### A/B Testing
Compare different implementations to determine which performs better.
## Test Data Best Practices
### Creating Test Fixtures
- Create reusable fixtures for common test data
- Use descriptive names for test fixtures
- Keep test data minimal but sufficient
### Mocking External Services
- Mock external API calls and dependencies
- Use realistic mock responses
- Consider edge cases and error conditions
## Testing Standards
### Frontend Testing Standards
1. Aim for 80%+ test coverage for utility functions
2. Test all Redux slices thoroughly
3. Focus on critical user journeys in integration tests
4. Test responsive behavior for key components
5. Include accessibility tests for UI components
### Backend Testing Standards
1. Test all public service methods
2. Test both successful and error cases
3. Test database interactions with real repositories
4. Test API endpoints with WebTestClient
5. Mock external services to isolate tests
## Running Tests
### Frontend Tests
```bash
# Run all Jest tests
cd app/client
yarn run test:unit
# Run a specific test file
yarn jest src/path/to/test.ts
# Run Cypress tests
npx cypress run
```
### Backend Tests
```bash
# Run all backend tests
cd app/server
./mvnw test
# Run a specific test class
./mvnw test -Dtest=ApplicationServiceTest
```
## Best Practices for Test-Driven Development
1. Write failing tests first
2. Start with simple test cases
3. Refactor after tests pass
4. Use descriptive test names
5. Keep tests independent
6. Avoid test interdependence
7. Test edge cases and error conditions
8. Keep tests fast
9. Avoid testing implementation details
10. Review and update tests when requirements change
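Principle 1, writing failing tests first, in miniature: the assertions act as the specification, and the implementation is then written to satisfy them. The `slugify` function here is purely illustrative, not an Appsmith function:

```javascript
// TDD in miniature: the assertions below were written first as the spec;
// slugify is the minimal implementation that makes them pass.
// (slugify is an illustrative example, not part of the Appsmith codebase.)
function slugify(title) {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics
    .replace(/^-+|-+$/g, "");    // strip leading/trailing dashes
}
```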


@@ -0,0 +1,180 @@
# Appsmith Verification Workflow
This document outlines the process that Cursor should follow when verifying changes to the Appsmith codebase.
## Bug Fix Verification
When fixing a bug, follow these steps:
1. **Reproduce the issue**
- Understand the reported bug and its root cause
- Create a minimal reproduction of the issue
- Identify the components affected
2. **Write test(s) for the bug**
- Create failing tests that demonstrate the bug's existence
- For frontend bugs: write Jest unit tests and/or Cypress integration tests
- For backend bugs: write JUnit tests
3. **Implement the fix**
- Make minimal, targeted changes to address the root cause
- Ensure the tests now pass
- Check for any unintended side effects
4. **Verify the fix**
- Confirm the original issue is resolved
- Run the full test suite to ensure no regressions
- Check both development and production builds
5. **Quality checks**
- Run type checking: `yarn run check-types`
- Run linting: `yarn run lint`
- Check for cyclic dependencies in client code
- For backend: run Spotless checks
- Verify Redux/React safety guidelines:
- Use optional chaining (`?.`) or lodash/get for deep property access
- Check for null/undefined before accessing nested properties
- Avoid direct deep property chains (e.g., `obj.prop1.prop2.prop3`)
- Handle potential nulls in Redux state access
6. **Performance verification**
- Ensure the fix doesn't negatively impact performance
- Check for memory leaks or increased resource usage
- Verify response times aren't degraded
7. **CI/CD verification**
- Ensure all GitHub workflow checks would pass with the changes
- Verify both client and server builds
## Feature Implementation Verification
When implementing a new feature, follow these steps:
1. **Understand requirements**
- Clearly define the feature's acceptance criteria
- Identify all components that need to be modified
2. **Design test approach**
- Plan unit, integration, and end-to-end tests before implementation
- Create test scenarios that cover the feature's functionality
- Consider edge cases and error handling
3. **Implement test cases**
- Write tests for new functionality
- Include positive and negative test cases
- Cover edge cases and error conditions
4. **Implement the feature**
- Develop the feature to pass the tests
- Follow code style and patterns established in the project
- Document the new functionality as needed
5. **Verify against acceptance criteria**
- Confirm the feature meets all acceptance criteria
- Perform manual testing for user experience
- Get stakeholder sign-off if applicable
6. **Quality checks**
- Same checks as for bug fixes
- Additional check for documentation updates if needed
- Verify UI/UX consistency
7. **Performance testing**
- Check performance implications of the new feature
- Ensure the feature is optimized for efficiency
- Test under different load conditions if applicable
8. **CI/CD verification**
- Same checks as for bug fixes
- Additional check for new assets or dependencies
## Incrementally Learning From Changes
For each code change, Cursor should:
1. Analyze patterns in successful implementations
2. Record common pitfalls and how they were resolved
3. Update its understanding of the codebase structure
4. Note the relationships between components
5. Learn from test cases how different modules should interact
6. Understand the project's coding standards and conventions
7. Track performance considerations for different features
8. Maintain a knowledge graph of the codebase to provide better context
## Pre-commit Verification Checklist
Before considering a change complete, verify:
- [ ] All tests pass locally
- [ ] No linting issues reported
- [ ] Type checking passes
- [ ] No performance degradation
- [ ] Code follows project conventions
- [ ] Documentation is updated if needed
- [ ] No sensitive data is included
- [ ] The change satisfies the original requirements
- [ ] GitHub workflows would pass if the changes were committed
- [ ] React/Redux code follows safety best practices
- [ ] Uses optional chaining or lodash/get for nested properties
- [ ] Handles potential null values in state access
- [ ] No direct deep object chaining without safety checks
- [ ] Redux selectors properly handle state structure changes
## React/Redux Safety Guidelines
When working with React and Redux code, follow these guidelines:
### Safe Property Access
- **Avoid direct deep property access**:
```jsx
// Unsafe
const value = state.entities.users[userId].profile.preferences;
// Safe - using optional chaining
const value = state.entities?.users?.[userId]?.profile?.preferences;
// Safe - using lodash/get with default value
const value = get(state, `entities.users.${userId}.profile.preferences`, defaultValue);
```
### Redux State Access
- **Use selectors for all state access**:
```jsx
// Define selector
const getUserPreferences = (state, userId) =>
get(state, ['entities', 'users', userId, 'profile', 'preferences'], {});
// Use selector
const preferences = useSelector(state => getUserPreferences(state, userId));
```
### Error Boundary Usage
- **Wrap components that access complex data structures**:
```jsx
<ErrorBoundary fallback={<FallbackComponent />}>
<ComponentWithComplexDataAccess />
</ErrorBoundary>
```
### Data Validation
- **Validate data structure before usage**:
```jsx
const isValidUserData = (userData) =>
  userData &&
  typeof userData === 'object' &&
  userData.profile !== undefined;

// Use validation before accessing
if (isValidUserData(userData)) {
  // Now safe to use userData.profile
}
```
Following these guidelines will help prevent common issues like:
- Runtime errors from accessing properties of undefined
- Unexpected application crashes due to null property access
- Hard-to-debug errors in deeply nested state structures


@@ -0,0 +1,202 @@
# Lessons from Fixing Circular Dependencies in React Hooks
## Background
While working on the Appsmith codebase, we encountered a critical issue in the `useSyncParamsToPath` React hook that caused infinite re-renders and circular dependency problems. This hook was responsible for synchronizing URL paths and query parameters bidirectionally in the API panel. When a user changed the URL, parameters would get extracted and populated in the form, and when parameters were changed, the URL would get updated.
## The Problem
The initial implementation of the hook had several issues:
1. **Improper property access**: The hook was directly accessing nested properties like `values.actionConfiguration.path` without properly handling the case where these nested paths might not exist.
2. **Missing flexible configuration**: The hook couldn't be reused with different property paths since it had hardcoded property paths.
3. **Circular updates**: When the hook updated the path, it would trigger a re-render which would then trigger the hook again, causing an infinite loop.
4. **Missing safeguards**: The hook didn't have proper tracking of previous values or early exits to prevent unnecessary updates.
## The Solution
We implemented several patterns to fix these issues:
### 1. Safe nested property access using lodash/get
Instead of directly accessing nested properties:
```jsx
// Before
const path = values.actionConfiguration?.path;
const queryParameters = values.actionConfiguration?.queryParameters;
```
We used lodash's `get` function with default values:
```jsx
// After
import get from 'lodash/get';
const path = get(values, `${configProperty}.path`, "");
const queryParameters = get(values, `${configProperty}.params`, []);
```
This approach provides several benefits:
- Safely handles undefined or null intermediate values
- Provides sensible default values
- Makes the property path configurable using the `configProperty` parameter
### 2. Tracking previous values with useRef
We implemented a pattern to track previous values and prevent unnecessary updates:
```tsx
// Refs to track the last values to prevent infinite loops
const lastPathRef = useRef("");
const lastParamsRef = useRef<Property[]>([]);

useEffect(
  function syncParamsEffect() {
    // Early return if nothing has changed
    if (
      path === lastPathRef.current &&
      isEqual(queryParameters, lastParamsRef.current)
    ) {
      return;
    }

    // Update refs to current values
    lastPathRef.current = path;
    lastParamsRef.current = [...queryParameters];

    // Rest of the effect logic
  },
  [formValues, dispatch, formName, configProperty],
);
```
### 3. Directional updates
To prevent circular updates, we implemented a pattern where the hook would only process one update direction per effect execution:
```jsx
// Only one sync direction per effect execution to prevent loops

// Path changed - update params from path if needed
if (pathChanged) {
  // Logic to update params from path
  // Exit early after updating
  return;
}

// Params changed - update path from params if needed
if (paramsChanged) {
  // Logic to update path from params
}
```
### 4. Deep comparisons for complex objects
For comparing arrays of parameters, we implemented custom comparison logic that compares the actual values rather than just checking references:
```tsx
// Helper function to check if two arrays of params are functionally equivalent
const areParamsEquivalent = (
  params1: Property[],
  params2: Property[],
): boolean => {
  if (params1.length !== params2.length) return false;

  // Create maps of key-value pairs for easier comparison
  const paramsMap1 = params1.reduce(
    (map, param) => {
      if (param.key) map[param.key] = param.value;
      return map;
    },
    {} as Record<string, any>,
  );
  const paramsMap2 = params2.reduce(
    (map, param) => {
      if (param.key) map[param.key] = param.value;
      return map;
    },
    {} as Record<string, any>,
  );

  return isEqual(paramsMap1, paramsMap2);
};
```
## Key Takeaways for React Hook Development
1. **Always use safe property access**:
- For deep nested properties, use lodash's `get` with default values
- Alternatively, use optional chaining (`?.`) but remember it doesn't provide default values
2. **Track previous values to prevent infinite loops**:
- Use `useRef` to store previous values between renders
- Compare new values against previous values before making updates
3. **Implement early exits**:
- If nothing has changed, return early from your hook
- Use deep equality checks for objects and arrays (e.g., `isEqual` from lodash)
4. **Make effects unidirectional in a single execution**:
- In bidirectional sync, handle only one direction per effect execution
- Exit early after making updates in one direction
5. **Make hooks flexible and reusable**:
- Use parameters for configuration (e.g., `configProperty`)
- Don't hardcode property paths or selectors
6. **Test bidirectional hooks thoroughly**:
- Write tests for both directions of data flow
- Test edge cases (undefined values, empty arrays, etc.)
- Verify the hook prevents infinite loops with nearly identical input
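As takeaway 1 notes, optional chaining alone yields `undefined` rather than a default; combining it with the nullish coalescing operator (`??`) recovers the default-value behavior of `get`. A small sketch (the `values` shape here is illustrative):

```typescript
interface Values {
  actionConfiguration?: { path?: string; params?: string[] };
}

const values: Values = { actionConfiguration: {} };

// Optional chaining alone: undefined when the property is missing
const rawPath = values.actionConfiguration?.path; // undefined

// Adding ?? gives the same "safe access with a default" as lodash get
const path = values.actionConfiguration?.path ?? "";
const params = values.actionConfiguration?.params ?? [];
```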
## Implementation Example
The `useSyncParamsToPath` hook provides a real-world example of these patterns in action:
```tsx
// Hook to sync query parameters with URL path in both directions
export const useSyncParamsToPath = (formName: string, configProperty: string) => {
  const dispatch = useDispatch();
  const formValues = useSelector((state) => getFormData(state, formName));

  // Refs to track the last values to prevent infinite loops
  const lastPathRef = useRef("");
  const lastParamsRef = useRef<Property[]>([]);

  useEffect(
    function syncParamsEffect() {
      if (!formValues || !formValues.values) return;

      const values = formValues.values;
      const actionId = values.id;

      if (!actionId) return;

      // Correctly access nested properties using lodash's get
      const path = get(values, `${configProperty}.path`, "");
      const queryParameters = get(values, `${configProperty}.params`, []);

      // Early return if nothing has changed
      if (
        path === lastPathRef.current &&
        isEqual(queryParameters, lastParamsRef.current)
      ) {
        return;
      }

      // Check if params have changed but path hasn't - indicating params tab update
      const paramsChanged = !isEqual(queryParameters, lastParamsRef.current);
      const pathChanged = path !== lastPathRef.current;

      // Update refs to current values
      lastPathRef.current = path;
      lastParamsRef.current = [...queryParameters];

      // Only one sync direction per effect execution to prevent loops
      // Path changed - update params from path if needed
      if (pathChanged) {
        // Logic to update params from path
        // Exit early to prevent circular updates
        return;
      }

      // Params changed - update path from params if needed
      if (paramsChanged) {
        // Logic to update path from params
      }
    },
    [formValues, dispatch, formName, configProperty],
  );
};
```
By implementing these patterns, we fixed the circular dependency and infinite loop issues while making the hook more reusable and robust.


@@ -0,0 +1,226 @@
# Appsmith Codebase Map
This document provides a comprehensive overview of the Appsmith codebase structure to help Cursor AI better understand the organization and relationships between different components.
## Project Overview
Appsmith is a low-code platform that allows developers to build internal tools and dashboards by connecting to databases, APIs, and other data sources. The application consists of:
1. A React-based frontend (client)
2. A Java Spring Boot backend (server)
3. Various plugins for connecting to external data sources
4. A self-contained deployment architecture
## Directory Structure
The codebase is organized into the following main directories:
- `app/` - Contains the main application code
- `client/` - Frontend application (React)
- `server/` - Backend application (Java Spring Boot)
- `util/` - Shared utilities
- `monitoring/` - Monitoring and metrics
## Frontend Architecture (app/client)
The frontend is built with React, Redux, and TypeScript. Key directories include:
### Core Structure (app/client/src)
- `actions/` - Redux actions
- `reducers/` - Redux reducers
- `sagas/` - Redux sagas for side effects and async operations
- `selectors/` - Redux selectors
- `store.ts` - Redux store configuration
### UI Components
- `components/` - Reusable UI components
- `pages/` - Page-level components
- `widgets/` - Draggable widgets for the page builder
- `theme/` - Styling and theme definitions
- `icons/` - SVG icons and icon components
### Data and APIs
- `api/` - API client and service functions
- `constants/` - Application constants and configuration
- `utils/` - Utility functions
- `entities/` - Data models and entity definitions
### Edition-specific Code
- `ee/` - Enterprise Edition specific code
- `ce/` - Community Edition specific code
### Testing
- `test/` - Test utilities and mocks
- `cypress/` - End-to-end testing with Cypress
## Backend Architecture (app/server)
The backend is built with Java Spring Boot and MongoDB. Key packages include:
### Core Structure (app/server/appsmith-server/src/main/java/com/appsmith/server)
- `ServerApplication.java` - Main application entry point
### API Layer
- `controllers/` - REST API controllers
- `dtos/` - Data Transfer Objects
- `exceptions/` - Custom exception classes
### Business Logic
- `services/` - Business logic and service implementations
- `helpers/` - Helper classes and utilities
- `domains/` - Domain models
### Data Access
- `repositories/` - Data access repositories
- `configurations/` - Database and application configuration
### Features
- `applications/` - Application management
- `pages/` - Page management
- `actions/` - Action management (API, DB queries)
- `plugins/` - Plugin system for external integrations
- `datasources/` - Data source management
- `authentication/` - Authentication and authorization
- `organization/` - Organization management
### Extensions
- `appsmith-plugins/` - Plugin implementations
- `appsmith-git/` - Git integration features
- `appsmith-interfaces/` - Core interfaces
- `appsmith-ai/` - AI features implementation
## Key Concepts
### Frontend Concepts
1. **Widgets**: Draggable UI components that users can place on their pages
2. **Actions**: API calls, DB queries, or JS code that widgets can trigger
3. **Datasources**: Connections to external data sources like databases or APIs
4. **Pages**: Containers for widgets representing different views in an application
5. **Theme**: Visual styling applied to the entire application
### Backend Concepts
1. **Applications**: Container for pages and other resources
2. **Organizations**: Groups of users and applications
3. **Plugins**: Connectors to external services
4. **Actions**: Executable code blocks (API calls, DB queries)
5. **Datasources**: Connection configurations for external data systems
## Code Patterns
### Frontend Patterns
1. **Redux for State Management**:
- Actions define state changes
- Reducers implement state updates
- Sagas handle side effects
- Selectors extract state
2. **Component Structure**:
- Functional components with hooks
- Container/Presentation separation
- Styled-components for styling
- Typescript interfaces for type safety
3. **API Communication**:
- Axios-based API clients
- Redux sagas for async operations
- Error handling middleware
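The "selectors extract state" pattern above can be sketched without React. State shapes and names here are illustrative, and the hand-rolled memoizer stands in for what libraries like reselect provide:

```typescript
// Illustrative shape of the Redux state tree (not the real Appsmith types)
interface AppState {
  datasources: { list: { id: string; name: string }[] };
}

// Plain selector: extracts a slice of state
const getDatasources = (state: AppState) => state.datasources.list;

// Hand-rolled single-argument memoizer: recomputes only when the input
// reference changes, so derived arrays keep a stable identity.
function memoize1<A, R>(fn: (a: A) => R): (a: A) => R {
  let lastArg: A | undefined;
  let lastResult: R | undefined;
  return (a: A) => {
    if (lastArg !== a || lastResult === undefined) {
      lastArg = a;
      lastResult = fn(a);
    }
    return lastResult;
  };
}

const getDatasourceNames = memoize1((state: AppState) =>
  getDatasources(state).map((d) => d.name),
);

const state: AppState = {
  datasources: { list: [{ id: "1", name: "usersDB" }] },
};

// Same state object => same cached array reference, so useSelector's
// reference equality check can skip a re-render.
const first = getDatasourceNames(state);
const second = getDatasourceNames(state);
```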
### Backend Patterns
1. **Spring Boot Architecture**:
- Controller -> Service -> Repository pattern
- DTO pattern for API requests/responses
- Reactive programming with Reactor
2. **Security**:
- JWT-based authentication
- RBAC (Role-Based Access Control)
- Permission checks with Spring Security
3. **Database**:
- MongoDB as primary datastore
- Reactive repositories
## Common Workflows
### Frontend Development Workflow
1. Define Redux actions in `actions/`
2. Implement reducers in `reducers/`
3. Create sagas for async operations in `sagas/`
4. Build UI components in `components/` or `pages/`
5. Connect components to Redux using selectors
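Steps 1, 2, and 5 of this workflow can be sketched end to end (action → reducer → selector). All names are illustrative; the saga step is omitted since it only wraps this same flow in an async fetch:

```typescript
// 1. Action (actions/)
const SET_TITLE = "SET_TITLE";
const setTitle = (title: string) => ({ type: SET_TITLE, payload: title });

// 2. Reducer (reducers/)
interface PageState {
  title: string;
}
const initialState: PageState = { title: "" };

function pageReducer(
  state: PageState = initialState,
  action: { type: string; payload?: string },
): PageState {
  switch (action.type) {
    case SET_TITLE:
      return { ...state, title: action.payload ?? "" };
    default:
      return state;
  }
}

// 5. Selector (selectors/) used by the connected component
const getTitle = (state: { page: PageState }) => state.page.title;

// Dispatching the action through the reducer yields state a component reads
const next = { page: pageReducer(initialState, setTitle("Dashboard")) };
```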
### Backend Development Workflow
1. Define DTOs in `dtos/`
2. Create domain models in `domains/`
3. Implement repositories in `repositories/`
4. Add business logic in `services/`
5. Expose APIs in `controllers/`
## Testing Approach
### Frontend Testing
- Unit tests with Jest and React Testing Library
- End-to-end tests with Cypress
- Visual regression tests
### Backend Testing
- Unit tests with JUnit
- Integration tests with Spring Boot Test
- API tests with RestAssured
## Performance Considerations
### Frontend Performance
- Memoization of heavy computations
- Code splitting for page loads
- Virtualization for large lists
- Optimized rendering with React.memo
### Backend Performance
- Query optimization in MongoDB
- Caching strategies
- Reactive programming for non-blocking operations
## Security Model
1. **Authentication**: JWT-based auth with refresh tokens
2. **Authorization**: RBAC with granular permissions
3. **Data Isolation**: Multi-tenancy support
## Enterprise vs Community Edition
The codebase is separated into:
- `ee/` - Enterprise features
- `ce/` - Community features
Key differences:
1. Enterprise: SSO, audit logs, role-based access
2. Community: Basic features, self-hosted option
## Important Files
### Frontend
- `client/src/index.tsx` - Application entry point
- `client/src/store.ts` - Redux store configuration
- `client/src/App.tsx` - Main application component
### Backend
- `server/appsmith-server/src/main/java/com/appsmith/server/ServerApplication.java` - Main entry point
- `server/appsmith-server/src/main/resources/application.yml` - Application configuration
## Development Guidelines
1. Follow the established patterns in the existing codebase
2. Use TypeScript interfaces for type safety in frontend
3. Add appropriate tests for all new features
4. Document complex logic with comments
5. Use reactive programming patterns in backend
6. Follow established file naming conventions
This map should help Cursor better understand the Appsmith codebase structure and provide more contextual assistance when working with the code.


@@ -0,0 +1,699 @@
# Appsmith Technical Details
This document provides in-depth technical information about the Appsmith codebase, focusing on implementation details, design patterns, and technologies used. This information will help Cursor AI better understand the code at a deeper level.
## Technology Stack
### Frontend
- **Framework**: React 17+
- **State Management**: Redux with Redux-Saga
- **Language**: TypeScript 4+
- **Styling**: Styled Components with Tailwind CSS
- **Build Tool**: Webpack
- **Testing**: Jest, React Testing Library, Cypress
- **Form Management**: Formik
- **API Client**: Axios
- **UI Components**: Custom component library
### Backend
- **Framework**: Spring Boot 2.x
- **Language**: Java 11+
- **Database**: MongoDB
- **Reactive Programming**: Project Reactor
- **Security**: Spring Security
- **API Documentation**: Swagger/OpenAPI
- **Caching**: Redis
## Key Frontend Implementation Details
### State Management
The application uses Redux with a sophisticated structure:
```typescript
// Example action
export const fetchDatasources = (applicationId: string) => ({
  type: ReduxActionTypes.FETCH_DATASOURCES,
  payload: { applicationId },
});

// Example reducer
const datasourceReducer = (state = initialState, action: ReduxAction<any>) => {
  switch (action.type) {
    case ReduxActionTypes.FETCH_DATASOURCES_SUCCESS:
      return { ...state, list: action.payload };
    // ...
  }
};

// Example saga
function* fetchDatasourcesSaga(action: ReduxAction<{ applicationId: string }>) {
  try {
    const response = yield call(
      DatasourcesApi.fetchDatasources,
      action.payload.applicationId,
    );
    yield put({
      type: ReduxActionTypes.FETCH_DATASOURCES_SUCCESS,
      payload: response.data,
    });
  } catch (error) {
    yield put({
      type: ReduxActionTypes.FETCH_DATASOURCES_ERROR,
      payload: { error },
    });
  }
}
```
### Widget System
Widgets are the building blocks of the application UI. They follow a standard structure:
```typescript
export type WidgetProps = {
  widgetId: string;
  type: string;
  widgetName: string;
  parentId?: string;
  renderMode: RenderMode;
  version: number;
  // ...other properties
};

export default class ButtonWidget extends BaseWidget<ButtonWidgetProps, WidgetState> {
  static getPropertyPaneConfig() {
    return [
      {
        sectionName: "General",
        children: [
          {
            propertyName: "text",
            label: "Label",
            controlType: "INPUT_TEXT",
            // ...
          },
          // ...other properties
        ],
      },
      // ...other sections
    ];
  }

  getPageView() {
    return (
      <ButtonComponent
        // ...props
        onClick={this.handleClick}
      />
    );
  }

  handleClick = () => {
    if (this.props.onClick) {
      super.executeAction({
        triggerPropertyName: "onClick",
        dynamicString: this.props.onClick,
        event: {
          type: EventType.ON_CLICK,
          // ...
        },
      });
    }
  };
}
```
### Property Pane System
The property pane is dynamically generated based on the widget configuration:
```typescript
export const PropertyPaneView = (props: PropertyPaneViewProps) => {
  const { config, panel } = props;

  // Render property sections
  return (
    <PropertyPaneContainer>
      {config.map((section) => (
        <PropertySection key={section.sectionName} title={section.sectionName}>
          {section.children.map((property) => (
            <PropertyControl
              key={property.propertyName}
              propertyName={property.propertyName}
              controlType={property.controlType}
              // ...other props
            />
          ))}
        </PropertySection>
      ))}
    </PropertyPaneContainer>
  );
};
```
### Data Binding
The app uses a JS evaluation engine to bind data to widgets:
```typescript
export function evaluateDynamicValue(
  dynamicValue: string,
  data: Record<string, unknown>,
): any {
  // Set up execution environment
  const scriptToEvaluate = `
    function evaluation() {
      const $ = ${JSON.stringify(data)};
      try {
        return ${dynamicValue};
      } catch (e) {
        return undefined;
      }
    }
    evaluation();
  `;
  try {
    return eval(scriptToEvaluate);
  } catch (e) {
    return undefined;
  }
}
```
### API Integration
The API client is set up with Axios and handles authentication:
```typescript
const axiosInstance = axios.create({
  baseURL: "/api/v1",
  headers: {
    "Content-Type": "application/json",
  },
});

// Request interceptor for adding auth token
axiosInstance.interceptors.request.use((config) => {
  const token = localStorage.getItem("AUTH_TOKEN");
  if (token) {
    // Basic validation - check if token is a valid JWT format
    if (token.split(".").length === 3) {
      config.headers.Authorization = `Bearer ${token}`;
    } else {
      // Handle invalid token - could log user out or refresh token
      store.dispatch(refreshToken());
    }
  }
  return config;
});

// Response interceptor for handling errors
axiosInstance.interceptors.response.use(
  (response) => response,
  (error) => {
    if (error.response && error.response.status === 401) {
      // Handle unauthorized
      store.dispatch(logoutUser());
    }
    return Promise.reject(error);
  },
);
```
## Key Backend Implementation Details
### Repository Pattern
Mongo repositories use reactive programming:
```java
@Repository
public interface DatasourceRepository extends ReactiveMongoRepository<Datasource, String> {
    Mono<Datasource> findByNameAndOrganizationId(String name, String organizationId);

    Flux<Datasource> findAllByOrganizationId(String organizationId);

    Mono<Long> countByNameAndOrganizationId(String name, String organizationId);
}
```
### Service Layer
Services handle business logic:
```java
@Service
@RequiredArgsConstructor
public class DatasourceServiceImpl implements DatasourceService {
    private final DatasourceRepository repository;
    private final PluginService pluginService;

    @Override
    public Mono<Datasource> create(Datasource datasource) {
        return repository.save(datasource)
                .flatMap(saved -> pluginService.getById(datasource.getPluginId())
                        .map(plugin -> {
                            saved.setPlugin(plugin);
                            return saved;
                        }));
    }

    // Other methods...
}
```
### Controller Layer
Controllers expose REST APIs:
```java
@RestController
@RequestMapping("/api/v1/datasources")
@RequiredArgsConstructor
public class DatasourceController {
    private final DatasourceService service;

    @PostMapping
    public Mono<ResponseDTO<Datasource>> create(@RequestBody DatasourceDTO dto) {
        Datasource datasource = new Datasource();
        BeanUtils.copyProperties(dto, datasource);
        return service.create(datasource)
                .map(created -> new ResponseDTO<>(HttpStatus.CREATED.value(), created, null));
    }

    // Other endpoints...
}
```
### Security Configuration
Spring Security setup:
```java
@Configuration
@EnableWebFluxSecurity
@EnableReactiveMethodSecurity
public class SecurityConfig {
    @Bean
    public SecurityWebFilterChain securityWebFilterChain(ServerHttpSecurity http) {
        return http
                .csrf().disable()
                .formLogin().disable()
                .httpBasic().disable()
                .authorizeExchange()
                .pathMatchers("/api/v1/public/**").permitAll()
                .pathMatchers("/api/v1/auth/**").permitAll()
                .anyExchange().authenticated()
                .and()
                .addFilterAt(jwtAuthenticationFilter, SecurityWebFiltersOrder.AUTHENTICATION)
                .build();
    }

    // Other beans...
}
```
### Query Execution
Action execution is handled in a structured way:
```java
@Service
@RequiredArgsConstructor
public class ActionExecutionServiceImpl implements ActionExecutionService {
    private final ActionExecutorFactory executorFactory;

    @Override
    public Mono<ActionExecutionResult> executeAction(ActionDTO action) {
        ActionExecutor executor = executorFactory.getExecutor(action.getPluginType());
        if (executor == null) {
            return Mono.error(new AppsmithException(AppsmithError.UNSUPPORTED_PLUGIN_ACTION));
        }
        return executor.execute(action);
    }
}
```
## Advanced Patterns
### Code Splitting
```typescript
// Lazy loading of components
const ApplicationPage = React.lazy(() => import("pages/Applications"));
const EditorPage = React.lazy(() => import("pages/Editor"));

// Router setup
const routes = [
  {
    path: "/applications",
    component: ApplicationPage,
  },
  {
    path: "/app/editor/:applicationId/:pageId",
    component: EditorPage,
  },
  // ...
];
```
### Plugin System
The plugin system allows extensibility:
```java
public interface PluginExecutor<T, U> {
    Mono<ActionExecutionResult> execute(T connection, U datasourceConfiguration, Object executeActionDTO);

    Mono<T> datasourceCreate(U datasourceConfiguration);

    void datasourceDestroy(T connection);

    Set<String> getHintMessages(U datasourceConfiguration);

    // ...
}
```
### Reactive Caching
```java
@Service
public class CacheableRepositoryHelper {
    private final Map<String, Cache<String, Object>> cacheMap = new ConcurrentHashMap<>();

    public <T> Mono<T> fetchFromCache(String cacheName, String key, Supplier<Mono<T>> fetchFunction) {
        Cache<String, Object> cache = cacheMap.computeIfAbsent(cacheName, k ->
                Caffeine.newBuilder().expireAfterWrite(30, TimeUnit.MINUTES).build());

        Object cachedValue = cache.getIfPresent(key);
        if (cachedValue != null) {
            return Mono.just((T) cachedValue);
        }

        return fetchFunction.get()
                .doOnNext(value -> cache.put(key, value));
    }
}
```
### Action Collection System
Actions are grouped into collections for better organization:
```java
@Document
public class ActionCollection {
    @Id
    private String id;

    private String name;
    private String applicationId;
    private String organizationId;
    private String pageId;
    private List<ActionDTO> actions;
    private List<String> actionIds;
    private String body;
    // ...
}
```
## Common Code Patterns
### Error Handling
Frontend error handling:
```typescript
// Global error boundary
export class AppErrorBoundary extends React.Component<{}, { hasError: boolean }> {
  constructor(props: {}) {
    super(props);
    this.state = { hasError: false };
  }

  static getDerivedStateFromError() {
    return { hasError: true };
  }

  componentDidCatch(error: Error, errorInfo: React.ErrorInfo) {
    logError(error, errorInfo);
  }

  render() {
    if (this.state.hasError) {
      return <ErrorPage />;
    }
    return this.props.children;
  }
}
```
Backend error handling:
```java
@ExceptionHandler(AppsmithException.class)
public Mono<ResponseEntity<ResponseDTO<Object>>> handleAppsmithException(AppsmithException exception) {
    log.error("Application error: {}", exception.getMessage(), exception);
    ResponseDTO<Object> response = new ResponseDTO<>(
            exception.getHttpStatus().value(),
            null,
            new ErrorDTO(exception.getAppErrorCode(), exception.getMessage())
    );
    return Mono.just(ResponseEntity
            .status(exception.getHttpStatus())
            .body(response));
}
```
### Validation
Frontend validation:
```typescript
const validateWidgetName = (widgetName: string) => {
  const nameRegex = /^[a-zA-Z][a-zA-Z0-9_]*$/;
  if (!nameRegex.test(widgetName)) {
    return "Widget name must start with a letter and can contain only letters, numbers, and underscore";
  }
  if (widgetName.length > 30) {
    return "Widget name must be 30 characters or fewer";
  }
  return undefined;
};
```
Backend validation:
```java
@Validated
@Service
public class UserServiceImpl implements UserService {
    @Override
    public Mono<User> create(@Valid UserDTO userDTO) {
        // Implementation
    }
}

public class UserDTO {
    @NotBlank(message = "Email is mandatory")
    @Email(message = "Invalid email format")
    private String email;

    @NotBlank(message = "Password is mandatory")
    @Size(min = 8, message = "Password must be at least 8 characters")
    private String password;

    // Other fields...
}
```
### Internationalization
```typescript
// i18n setup
const i18n = createI18n({
  locale: getBrowserLocale(),
  messages: {
    en: enMessages,
    fr: frMessages,
    // Other languages...
  },
});

// Usage in components
const MyComponent = () => {
  const { t } = useTranslation();
  return (
    <div>
      <h1>{t("welcome.title")}</h1>
      <p>{t("welcome.message")}</p>
    </div>
  );
};
```
## Enterprise-specific Features
### Audit Logging
```java
@Service
@RequiredArgsConstructor
@ConditionalOnProperty(prefix = "appsmith", name = "audit.enabled", havingValue = "true")
public class AuditServiceImpl implements AuditService {
    private final AuditRepository repository;

    @Override
    public Mono<AuditLog> log(String action, String resourceId, String resourceType, User user) {
        AuditLog log = new AuditLog();
        log.setAction(action);
        log.setResourceId(resourceId);
        log.setResourceType(resourceType);
        log.setUserId(user.getId());
        log.setUsername(user.getUsername());
        log.setTimestamp(Instant.now());
        return repository.save(log);
    }
}
```
### Role-Based Access Control
```java
@Service
@RequiredArgsConstructor
public class PermissionServiceImpl implements PermissionService {
    private final UserGroupRepository userGroupRepository;
    private final ResourcePermissionRepository resourcePermissionRepository;

    @Override
    public Mono<Boolean> hasPermission(User user, String resourceId, PermissionType permission) {
        return userGroupRepository.findByUserIdAndOrganizationId(user.getId(), user.getCurrentOrganizationId())
                .flatMap(userGroup -> {
                    if (userGroup.getRole() == UserRole.ORGANIZATION_ADMIN) {
                        return Mono.just(true);
                    }
                    return resourcePermissionRepository
                            .findByResourceIdAndPermission(resourceId, permission)
                            .any(resourcePermission -> resourcePermission.getUserGroupId().equals(userGroup.getId()));
                });
    }
}
```
### SSO Integration
```java
@Configuration
@ConditionalOnProperty(prefix = "appsmith.oauth2", name = "enabled", havingValue = "true")
public class OAuth2Config {
    @Bean
    public ReactiveClientRegistrationRepository clientRegistrationRepository() {
        List<ClientRegistration> registrations = new ArrayList<>();
        registrations.add(googleClientRegistration());
        registrations.add(githubClientRegistration());
        // Other providers...
        return new InMemoryReactiveClientRegistrationRepository(registrations);
    }

    private ClientRegistration googleClientRegistration() {
        return ClientRegistration.withRegistrationId("google")
                .clientId(googleClientId)
                .clientSecret(googleClientSecret)
                .clientAuthenticationMethod(ClientAuthenticationMethod.CLIENT_SECRET_BASIC)
                .authorizationGrantType(AuthorizationGrantType.AUTHORIZATION_CODE)
                .redirectUri("{baseUrl}/api/v1/oauth2/callback/{registrationId}")
                .scope("openid", "email", "profile")
                .authorizationUri("https://accounts.google.com/o/oauth2/v2/auth")
                .tokenUri("https://www.googleapis.com/oauth2/v4/token")
                .userInfoUri("https://www.googleapis.com/oauth2/v3/userinfo")
                .userNameAttributeName(IdTokenClaimNames.SUB)
                .jwkSetUri("https://www.googleapis.com/oauth2/v3/certs")
                .clientName("Google")
                .build();
    }
}
```
## Performance Optimization Techniques
### Frontend Optimizations
1. **Memoization**:
```typescript
const MemoizedComponent = React.memo(MyComponent);

// Using useMemo for computed values
const computedValue = useMemo(() => {
  return expensiveComputation(a, b);
}, [a, b]);

// Using useCallback for functions
const handleClick = useCallback(() => {
  doSomething(a, b);
}, [a, b]);
```
2. **Virtualization for Large Lists**:
```typescript
import { FixedSizeList } from "react-window";

const MyList = ({ items }) => (
  <FixedSizeList
    height={500}
    width={500}
    itemCount={items.length}
    itemSize={50}
  >
    {({ index, style }) => (
      <div style={style}>
        {items[index].name}
      </div>
    )}
  </FixedSizeList>
);
```
### Backend Optimizations
1. **Batch Processing**:
```java
@Service
public class BatchImportServiceImpl implements BatchImportService {
    @Override
    public Mono<ImportResult> importEntities(List<Entity> entities) {
        return Flux.fromIterable(entities)
                .flatMap(this::validateEntity)
                .collectList()
                .flatMap(validatedEntities ->
                        repository.saveAll(validatedEntities)
                                .collectList()
                                .map(savedEntities -> new ImportResult(savedEntities.size(), null))
                );
    }
}
```
2. **Query Optimization**:
```java
// Using MongoDB indexes (@CompoundIndex is a class-level annotation)
@Document
@CompoundIndex(def = "{'organizationId': 1, 'name': 1}", unique = true)
public class Application {
    // ...
    @Indexed
    private String name;

    @Indexed
    private String organizationId;
    // ...
}
```
This document should provide Cursor with a deeper understanding of the technical implementation details of Appsmith, allowing for more accurate and contextual assistance when working with the codebase.


@@ -0,0 +1,219 @@
# Appsmith Testing Quick Reference
## Testing Requirements
### For Bug Fixes
1. **Unit Tests (Required)**
- Reproduce the bug scenario
- Verify the fix works correctly
- Test edge cases and potential regressions
- Place in the same directory as the fixed code with `.test.ts` extension
2. **End-to-End Tests (Required for user-facing changes)**
- Create a Cypress test that simulates the user action that would trigger the bug
- Verify the fix works correctly in the application context
- Place in `app/client/cypress/e2e/Regression/`
3. **Redux/React Safety Tests (For Redux/React code)**
- Test with both valid and null/undefined state structures
- Verify that property access is handled safely
- Test edge cases where nested properties might not exist
### For Feature Development
1. **Unit Tests (Required)**
- Test each component or function individually
- Cover the main functionality, edge cases, and error conditions
- Place alongside the implemented code
2. **Integration Tests (For complex features)**
- Test interactions between components
- Verify that data flows correctly between components
3. **End-to-End Tests (For user-facing features)**
- Simulate user interactions with the feature
- Verify that the feature works correctly in the application context
## Test File Locations
- **Unit Tests:** Same directory as the code being tested (e.g., `Component.test.tsx`)
- **Cypress E2E Tests:** `app/client/cypress/e2e/Regression/[Category]/[Feature]_spec.js`
- **Backend Tests:** `app/server/appsmith-server/src/test/java/com/appsmith/server/...`
## Templates
### Unit Test for Bug Fix (React Component)
```typescript
import React from "react";
import { render, screen, fireEvent } from "@testing-library/react";
import MyComponent from "./MyComponent";

describe("MyComponent bug fix", () => {
  it("should reproduce the bug scenario", () => {
    // Arrange: Setup the conditions that trigger the bug
    render(<MyComponent prop="value" />);

    // Act: Perform the action that triggers the bug
    fireEvent.click(screen.getByText("Button"));

    // Assert: Verify the bug is fixed
    expect(screen.getByText("Expected Result")).toBeInTheDocument();
  });

  it("should maintain existing functionality", () => {
    // Test that related functionality still works
    render(<MyComponent prop="otherValue" />);
    expect(screen.getByText("Other Result")).toBeInTheDocument();
  });
});
```
### E2E Test for Bug Fix
```javascript
describe("Feature Bug Fix", { tags: ["@tag.Bugfix", "@tag.Regression"] }, function () {
  before(() => {
    cy.login();
    cy.createTestWorkspace();
  });

  it("should no longer exhibit the bug", () => {
    // Steps to reproduce the bug
    cy.get("[data-cy=element]").click();
    cy.get("[data-cy=other-element]").type("value");

    // Verify the bug is fixed
    cy.get("[data-cy=result]").should("have.text", "Expected Result");
  });
});
```
### Redux Safety Test
```typescript
import { configureStore } from "@reduxjs/toolkit";
import reducer, { selectUserData } from "./userSlice";
import { renderHook } from "@testing-library/react-hooks";
import { Provider, useSelector } from "react-redux";
import React from "react";

describe("Redux safety tests", () => {
  // Test with missing nested properties
  it("should handle missing nested properties safely", () => {
    // Create store with incomplete state
    const store = configureStore({
      reducer: {
        user: reducer,
      },
      preloadedState: {
        user: {
          // Missing nested user data structure
        },
      },
    });

    // Test selector with missing data
    const wrapper = ({ children }) => (
      <Provider store={store}>{children}</Provider>
    );
    const { result } = renderHook(() => useSelector(selectUserData), { wrapper });

    // Should not throw an error and return default/fallback value
    expect(result.current).toEqual(/* expected default value */);
  });

  it("should handle deep property access safely", () => {
    // Similar setup but with different state permutations
    // Test various incomplete state structures
  });
});
```
### Unit Test for Redux Sagas
```typescript
import { runSaga } from "redux-saga";
import { mySaga } from "./mySaga";

describe("mySaga", () => {
  it("should dispatch expected actions", async () => {
    // Mock dependencies
    const dispatched = [];
    const mockStore = {
      dispatch: (action) => dispatched.push(action),
      getState: () => ({ data: "mock data" }),
    };

    // Run the saga
    await runSaga(mockStore, mySaga, { payload: { id: "123" } }).toPromise();

    // Assert expected actions were dispatched
    expect(dispatched).toEqual([
      { type: "SOME_ACTION", payload: "mock data" },
    ]);
  });
});
```
## Best Practices
1. **Test the User Experience**
- Focus on testing what the user sees and experiences
- Don't test implementation details unless necessary
2. **Use Descriptive Test Names**
- Tests should clearly describe what they're testing
- Use format: `should [expected behavior] when [condition]`
3. **Isolate Tests**
- Each test should be independent
- Don't rely on state from other tests
4. **Test Edge Cases**
- Empty input, invalid input, boundary conditions
- Error states and recovery
- Null/undefined in Redux state trees
- Missing nested properties
5. **Keep Tests Fast**
- Tests should run quickly to encourage frequent testing
- Use mocks for slow dependencies
6. **Test Coverage Guidelines**
- 80%+ coverage for critical paths
- Focus on business logic rather than UI details
7. **Redux/React Safety Testing**
- Test selectors with incomplete state structures
- Verify error boundaries catch property access errors
- Test with various state permutations to ensure robustness
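For point 7, a defensively written selector (a sketch with hypothetical names) returns a fallback instead of throwing when a slice is missing:

```typescript
// Hypothetical state shape: every nested slice may be absent.
interface RootState {
  user?: { profile?: { name?: string } };
}

// Optional chaining plus a nullish-coalescing fallback makes the
// selector safe for any permutation of incomplete state.
const selectUserName = (state: RootState): string =>
  state.user?.profile?.name ?? "Guest";

console.log(selectUserName({ user: { profile: { name: "Ada" } } })); // "Ada"
console.log(selectUserName({})); // "Guest"
```

The same shape works with `lodash/get` and a default argument where optional chaining is unavailable.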
## Running Tests
### Frontend Unit Tests
```bash
cd app/client
yarn test # Run all tests
yarn test:watch # Run in watch mode
yarn test:coverage # Generate coverage report
```
### Cypress E2E Tests
```bash
cd app/client
yarn cypress:open # Open Cypress UI
yarn cypress:run # Run headless
```
### Backend Tests
```bash
cd app/server
./mvnw test
```

.cursor/hooks/README.md (new file)
@@ -0,0 +1,43 @@
# Appsmith Cursor Hooks
This directory contains hooks and scripts that automate tasks and enforce standards in the Appsmith development workflow.
## Available Hooks
### scripts/update-docs.sh
Automatically updates documentation based on code changes, ensuring that the documentation stays in sync with the codebase.
## How Hooks Work
Hooks are triggered by specific events in the development workflow, such as:
- Creating a pull request
- Pushing code to a branch
- Running specific commands
Each hook performs specific actions to maintain code quality, enforce standards, or automate routine tasks.
## Installing Hooks
To install these hooks in your local development environment:
1. Navigate to the root of the project
2. Run the following command:
```bash
cp .cursor/hooks/scripts/* .git/hooks/
chmod +x .git/hooks/*
```
This will copy the hooks to your local Git hooks directory and make them executable.
## Manual Execution
You can also run these hooks manually as needed:
```bash
# Update documentation based on code changes
.cursor/hooks/scripts/update-docs.sh
```
## Customizing Hooks
If you need to customize a hook for your specific development environment, copy it to your local `.git/hooks` directory and modify it as needed. Avoid changing the hooks in this directory directly, as they will be overwritten when you pull changes from the repository.
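One low-risk way to do that (a sketch; the `pre-commit` hook name and the delegation pattern are assumptions, not part of the repo setup) is a thin untracked wrapper in `.git/hooks` that calls the shared script first and then runs your local extras:

```shell
#!/bin/sh
# .git/hooks/pre-commit — local, untracked wrapper (illustrative).
# Delegates to the shared repo script when it exists, so pulling
# updates to .cursor/hooks/ takes effect without re-copying files.
run_shared_hook() {
  if [ -x .cursor/hooks/scripts/update-docs.sh ]; then
    .cursor/hooks/scripts/update-docs.sh || return 1
  fi
  return 0
}

run_shared_hook && echo "pre-commit: local extras can run here"
```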

@@ -0,0 +1,225 @@
---
description:
globs:
alwaysApply: true
---
```yaml
name: Auto Update Documentation
description: Automatically updates Cursor documentation based on code changes
author: Cursor AI
version: 1.0.0
tags:
  - appsmith
  - documentation
  - maintenance
  - automation
activation:
  always: true
events:
  - pull_request
  - push
  - command
triggers:
  - pull_request.created
  - pull_request.updated
  - push.after
  - command.update_docs
```
# Auto Update Documentation Rule
This rule ensures that the Cursor documentation files are kept up to date as the codebase evolves. It analyzes code changes and automatically updates the relevant documentation files.
## Functionality
- Monitors changes to key parts of the codebase
- Updates the codebase map when structure changes
- Updates technical details when implementation changes
- Notifies developers when documentation needs manual review
## Implementation
```javascript
const fs = require('fs');
const path = require('path');
// File paths
const codebaseMapPath = '.cursor/appsmith-codebase-map.md';
const technicalDetailsPath = '.cursor/appsmith-technical-details.md';
const cursorIndexPath = '.cursor/index.mdc';
/**
* Get modified files from a pull request or push
* @param {Object} event - The trigger event
* @returns {Array<string>} List of modified file paths
*/
function getModifiedFiles(event) {
// For actual implementation, use the appropriate API to get modified files
// This is a simplified version
if (event.type === 'pull_request') {
return cursor.git.getChangedFiles(event.pullRequest.base, event.pullRequest.head);
} else if (event.type === 'push') {
return cursor.git.getChangedFiles(event.push.before, event.push.after);
}
return [];
}
/**
* Checks if any structural files have been modified
* @param {Array<string>} files - List of modified files
* @returns {boolean} True if structural files were modified
*/
function hasStructuralChanges(files) {
const structuralPatterns = [
/^app\/[^\/]+\//, // Top-level app directories
/package\.json$/, // Package definitions
/pom\.xml$/, // Maven project files
/^app\/client\/src\/reducers\//, // Redux structure
/^app\/server\/appsmith-server\/src\/main\/java\/com\/appsmith\/server\//, // Main server structure
/^\.cursor\/rules\/.*\.mdc$/, // Cursor rule files
/^\.cursor\/.*\.mdc$/, // Top-level Cursor rule files
];
return files.some(file =>
structuralPatterns.some(pattern => pattern.test(file))
);
}
/**
* Checks if any implementation files have been modified
* @param {Array<string>} files - List of modified files
* @returns {boolean} True if implementation files were modified
*/
function hasImplementationChanges(files) {
const implementationPatterns = [
/\.java$/, // Java files
/\.ts(x)?$/, // TypeScript files
/\.js(x)?$/, // JavaScript files
/^app\/client\/src\/sagas\//, // Redux sagas
/^app\/server\/appsmith-server\/src\/main\/java\/com\/appsmith\/server\/services/, // Server services
/^\.cursor\/rules\/.*\.mdc$/, // Cursor rule files
/^\.cursor\/.*\.mdc$/, // Top-level Cursor rule files
];
return files.some(file =>
implementationPatterns.some(pattern => pattern.test(file))
);
}
/**
* Checks if any rule files have been modified
* @param {Array<string>} files - List of modified files
* @returns {boolean} True if rule files were modified
*/
function hasRuleChanges(files) {
const rulePatterns = [
/^\.cursor\/rules\/.*\.mdc$/, // Rules in the rules directory
/^\.cursor\/.*\.mdc$/ // Top-level rules
];
return files.some(file =>
rulePatterns.some(pattern => pattern.test(file))
);
}
/**
* Updates a documentation file with a notice
* @param {string} filePath - Path to the documentation file
* @param {string} message - Message to add to the file
*/
function updateDocumentationFile(filePath, message) {
try {
let content = fs.readFileSync(filePath, 'utf8');
const timestamp = new Date().toISOString();
const updateNote = `\n\n> **Update Notice (${timestamp})**: ${message}`;
// Add the update notice near the top, after any headers
const headerEndIndex = content.indexOf('\n\n');
if (headerEndIndex !== -1) {
content = content.substring(0, headerEndIndex + 2) + updateNote + content.substring(headerEndIndex + 2);
fs.writeFileSync(filePath, content);
return true;
}
return false;
} catch (error) {
console.error(`Failed to update ${filePath}:`, error);
return false;
}
}
/**
* Main function to handle documentation updates
* @param {Object} event - The trigger event
*/
function handleDocumentationUpdates(event) {
const modifiedFiles = getModifiedFiles(event);
let updates = 0;
if (hasStructuralChanges(modifiedFiles)) {
const updated = updateDocumentationFile(
codebaseMapPath,
'Structural changes detected in the codebase. This document may need to be updated to reflect the new structure.'
);
if (updated) updates++;
}
if (hasImplementationChanges(modifiedFiles)) {
const updated = updateDocumentationFile(
technicalDetailsPath,
'Implementation changes detected in the codebase. This document may need to be updated to reflect the new implementation details.'
);
if (updated) updates++;
}
if (hasRuleChanges(modifiedFiles)) {
const updated = updateDocumentationFile(
cursorIndexPath,
'Cursor rule changes detected. This document may need to be updated to reflect the new rules or rule updates.'
);
if (updated) updates++;
}
return {
success: true,
data: {
filesUpdated: updates,
message: updates > 0 ? 'Documentation update notices added.' : 'No documentation updates needed.'
}
};
}
// Register the event handlers
function activate() {
cursor.on('pull_request.created', handleDocumentationUpdates);
cursor.on('pull_request.updated', handleDocumentationUpdates);
cursor.on('push.after', handleDocumentationUpdates);
cursor.registerCommand('update_docs', () => {
return handleDocumentationUpdates({ type: 'command' });
});
}
module.exports = {
activate,
handleDocumentationUpdates
};
```
## Usage
This rule runs automatically on pull request creation/updates and after pushes. You can also manually trigger it with the command `update_docs`.
When it detects significant changes to the codebase structure or implementation, it will add update notices to the top of the relevant documentation files, indicating that they may need to be reviewed and updated.
## Configuration
No specific configuration is required. The rule automatically monitors key file patterns that indicate structural or implementation changes.
## Example
After a significant refactoring of the frontend directory structure:
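Based on `updateDocumentationFile` above, the rule would insert a notice like the following near the top of `.cursor/appsmith-codebase-map.md` (timestamp illustrative):

```markdown
> **Update Notice (2025-01-15T10:30:00.000Z)**: Structural changes detected in the codebase. This document may need to be updated to reflect the new structure.
```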

@@ -0,0 +1,123 @@
#!/bin/bash
# update-cursor-docs.sh
# Pre-commit hook script to update Cursor documentation based on code changes
set -e
CURSOR_DIR=".cursor"
CODEBASE_MAP="${CURSOR_DIR}/appsmith-codebase-map.md"
TECHNICAL_DETAILS="${CURSOR_DIR}/appsmith-technical-details.md"
RULES_DIR="${CURSOR_DIR}/rules"
echo "🔍 Checking for updates to Cursor documentation..."
# Function to check if file needs to be included in the commit
add_to_commit_if_changed() {
local file=$1
if git diff --name-only --cached | grep -qxF "$file"; then
echo "$file is already staged for commit"
elif git diff --name-only | grep -qxF "$file"; then
echo "📝 Adding modified $file to commit"
git add "$file"
fi
}
# Get list of changed files in this commit
CHANGED_FILES=$(git diff --cached --name-only)
# Check if we need to update the codebase map
update_codebase_map() {
local need_update=false
# Check for directory structure changes (added or renamed files under app/;
# plain `git diff --name-only` never lists directories, so use --name-status)
if git diff --cached --name-status | grep -q -E '^[AR][0-9]*[[:space:]]app/'; then
need_update=true
fi
# Check for major file additions (grep -c counts matching files;
# grep -q suppresses output, so it cannot be piped into wc)
if [ "$(echo "$CHANGED_FILES" | grep -c -E '\.(java|tsx?|jsx?)$')" -gt 0 ]; then
need_update=true
fi
if [ "$need_update" = true ]; then
echo "🔄 Updating Codebase Map documentation..."
# Append a timestamp to the file to mark it as updated
echo -e "\n\n> Last updated: $(date)" >> "$CODEBASE_MAP"
# In a real implementation, you would call an external script or Cursor API
# to analyze the codebase and update the map file
echo " Codebase Map should be manually reviewed to ensure accuracy"
add_to_commit_if_changed "$CODEBASE_MAP"
else
echo "✅ Codebase Map does not need updates"
fi
}
# Check if we need to update the technical details
update_technical_details() {
local need_update=false
# Check for framework changes
if echo "$CHANGED_FILES" | grep -q -E 'package\.json|pom\.xml|build\.gradle'; then
need_update=true
fi
# Check for core component changes
if echo "$CHANGED_FILES" | grep -q -E 'src/(components|widgets|services)/.*\.(tsx?|java)$'; then
need_update=true
fi
if [ "$need_update" = true ]; then
echo "🔄 Updating Technical Details documentation..."
# Append a timestamp to the file to mark it as updated
echo -e "\n\n> Last updated: $(date)" >> "$TECHNICAL_DETAILS"
# In a real implementation, you would call an external script or Cursor API
# to analyze the codebase and update the technical details file
echo " Technical Details should be manually reviewed to ensure accuracy"
add_to_commit_if_changed "$TECHNICAL_DETAILS"
else
echo "✅ Technical Details do not need updates"
fi
}
# Check if we need to update Cursor rules
update_cursor_rules() {
local need_update=false
# Update rules if specific patterns are found in changed files
if echo "$CHANGED_FILES" | grep -q -E 'app/client/src/(widgets|components)|app/server/.*/(controllers|services)/'; then
need_update=true
fi
if [ "$need_update" = true ]; then
echo "🔄 Checking Cursor rules for updates..."
# In a real implementation, you would call an external script or Cursor API
# to analyze rule relevance and update rules
# Add timestamp to index.mdc to mark rules as checked
if [ -f "$RULES_DIR/index.mdc" ]; then
echo -e "\n\n> Rules checked: $(date)" >> "$RULES_DIR/index.mdc"
add_to_commit_if_changed "$RULES_DIR/index.mdc"
fi
echo " Cursor rules should be manually reviewed to ensure they're up to date"
else
echo "✅ Cursor rules do not need updates"
fi
}
# Main execution
update_codebase_map
update_technical_details
update_cursor_rules
echo "✅ Cursor documentation check complete"
exit 0

@@ -0,0 +1,136 @@
{
"learning": {
"enabled": true,
"incremental": true,
"sources": [
{
"type": "code_changes",
"patterns": ["**/*.java", "**/*.ts", "**/*.tsx", "**/*.js", "**/*.jsx"],
"strategy": "diff_based"
},
{
"type": "test_additions",
"patterns": ["**/*.test.ts", "**/*.test.tsx", "**/*Test.java", "**/*Tests.java", "cypress/integration/**/*.spec.ts"],
"strategy": "full_context"
},
{
"type": "workflow_modifications",
"patterns": [".github/workflows/*.yml"],
"strategy": "full_context"
},
{
"type": "documentation_updates",
"patterns": ["**/*.md", "docs/**/*"],
"strategy": "full_context"
},
{
"type": "build_changes",
"patterns": ["**/package.json", "**/pom.xml", "**/build.gradle", "Dockerfile", "docker-compose.yml"],
"strategy": "full_context"
}
],
"indexing": {
"frequency": "on_change",
"depth": "full",
"include_dependencies": true
},
"retention": {
"max_items_per_category": 1000,
"max_total_items": 10000,
"prioritization": "recency_and_relevance"
}
},
"pattern_detection": {
"enabled": true,
"categories": [
{
"name": "coding_patterns",
"description": "Common implementation patterns in the codebase",
"examples_to_track": 50
},
{
"name": "test_patterns",
"description": "Patterns for writing tests in this codebase",
"examples_to_track": 50
},
{
"name": "architectural_patterns",
"description": "Patterns related to the system's architecture",
"examples_to_track": 30
},
{
"name": "performance_optimizations",
"description": "Patterns for performance improvements",
"examples_to_track": 30
},
{
"name": "bug_fixes",
"description": "Common patterns in bug fixes",
"examples_to_track": 50
}
]
},
"knowledge_graph": {
"enabled": true,
"entity_types": [
"component", "service", "repository", "controller", "utility",
"model", "workflow", "test", "configuration"
],
"relationship_types": [
"imports", "extends", "implements", "calls", "uses", "tests",
"defines", "configures"
],
"max_depth": 3,
"update_frequency": "on_change"
},
"context_building": {
"strategies": {
"new_file": [
"analyze_related_files",
"study_similar_patterns",
"check_test_coverage"
],
"bug_fix": [
"understand_root_cause",
"analyze_test_gaps",
"check_similar_issues"
],
"feature_addition": [
"understand_requirements",
"analyze_affected_components",
"plan_test_strategy"
],
"refactoring": [
"understand_current_implementation",
"identify_improvement_opportunities",
"ensure_test_coverage"
]
}
},
"verification_strategies": {
"bug_fix": [
"run_targeted_tests",
"verify_fix_addresses_root_cause",
"check_regression_tests"
],
"feature_addition": [
"verify_meets_requirements",
"run_new_tests",
"check_for_performance_impact"
],
"refactoring": [
"verify_identical_behavior",
"check_test_coverage",
"verify_performance_impact"
]
},
"feedback_loop": {
"sources": [
"code_review_comments",
"test_results",
"build_status",
"performance_metrics"
],
"integration": "continuous"
}
}

.cursor/index.mdc (new file)
@@ -0,0 +1,97 @@
---
description:
globs:
alwaysApply: true
---
# Appsmith Cursor Rules
```yaml
name: Appsmith Cursor Rules
description: A comprehensive set of rules for Cursor AI to enhance development for Appsmith
author: Cursor AI
version: 1.0.0
tags:
  - appsmith
  - development
  - quality
  - verification
activation:
  always: true
event:
  - pull_request
  - file_change
  - command
triggers:
  - pull_request.created
  - pull_request.updated
  - file.created
  - file.modified
  - command: "cursor_help"
```
## Overview
This is the main entry point for Cursor AI rules for the Appsmith codebase. These rules help enforce consistent coding standards, verify bug fixes and features, generate appropriate tests, and optimize performance.
## Directory Structure
```
.cursor/
├── settings.json # Main configuration file
├── docs/ # Documentation
│ ├── guides/ # In-depth guides
│ ├── references/ # Quick references
│ └── practices/ # Best practices
├── rules/ # Rule definitions
│ ├── commit/ # Commit-related rules
│ ├── quality/ # Code quality rules
│ ├── testing/ # Testing rules
│ └── verification/ # Verification rules
└── hooks/ # Git hooks and scripts
```
## Available Rules
### 1. Commit Rules
- [Semantic PR Validator](rules/commit/semantic-pr-validator.mdc): Ensures pull request titles follow the Conventional Commits specification.
- [Semantic PR Guidelines](rules/commit/semantic-pr.md): Guidelines for writing semantic commit messages.
### 2. Quality Rules
- [Performance Optimizer](rules/quality/performance-optimizer.mdc): Identifies performance bottlenecks in code and suggests optimizations.
- [Pre-commit Quality Checks](rules/quality/pre-commit-checks.mdc): Quality checks that run before commits.
### 3. Testing Rules
- [Test Generator](rules/testing/test-generator.mdc): Analyzes code changes and generates appropriate test cases.
### 4. Verification Rules
- [Bug Fix Verifier](rules/verification/bug-fix-verifier.mdc): Guides developers through proper bug fixing steps.
- [Feature Verifier](rules/verification/feature-verifier.mdc): Verifies that new features are properly implemented and tested.
- [Feature Implementation Validator](rules/verification/feature-implementation-validator.mdc): Validates feature implementations.
- [Workflow Validator](rules/verification/workflow-validator.mdc): Validates development workflows.
## Documentation
- [Guides](docs/guides/): In-depth guides on specific topics
- [References](docs/references/): Quick reference documents
- [Practices](docs/practices/): Best practices for development
## Command Examples
- `validate_pr_title` - Check if a PR title follows conventional commits format
- `verify_bug_fix --pullRequest=123` - Verify a bug fix implementation
- `generate_tests --file=src/utils/helpers.js` - Generate tests for a specific file
- `optimize_performance --file=src/components/Table.tsx` - Analyze and optimize performance
- `validate_feature --pullRequest=123` - Validate a feature implementation
- `cursor_help` - Display available commands and provide guidance
## Configuration
The behavior of these rules can be customized in the `.cursor/settings.json` file.
## Activation
To activate all rules, run `cursor_help` in the command palette. This will display available commands and provide guidance on using the rules for your specific task.

@@ -37,6 +37,70 @@
"usage": "cypress/e2e/..."
}
}
},
"reactHooks": {
"bestPractices": {
"required": true,
"rules": {
"safePropertyAccess": {
"required": true,
"description": "Use lodash/get or optional chaining for nested property access",
"examples": ".cursor/docs/react_hooks_circular_dependency_lessons.md#1.-Safe-nested-property-access-using-lodash/get"
},
"preventCircularDependencies": {
"required": true,
"description": "Use useRef to track previous values and implement directional updates",
"examples": ".cursor/docs/react_hooks_circular_dependency_lessons.md#2.-Tracking-previous-values-with-useRef"
},
"earlyReturns": {
"required": true,
"description": "Implement early returns when values haven't changed to prevent unnecessary updates",
"examples": ".cursor/docs/react_hooks_circular_dependency_lessons.md#3.-Directional-updates"
},
"deepComparisons": {
"required": true,
"description": "Use deep equality checks for comparing objects and arrays",
"examples": ".cursor/docs/react_hooks_circular_dependency_lessons.md#4.-Deep-comparisons-for-complex-objects"
}
},
"analyzer": ".cursor/rules/react_hook_best_practices.mdc"
}
},
"testingRequirements": {
"bugFixes": {
"required": true,
"tests": {
"unit": {
"required": true,
"description": "Unit tests must verify the specific fix and ensure no regressions"
},
"e2e": {
"required": "for user-facing changes",
"description": "End-to-end tests must verify the fix works in the application context"
}
},
"examples": {
"unit": ".cursor/rules/test_generator.mdc#Bug-Fix-Test-Example-(Unit-Test)",
"e2e": ".cursor/rules/test_generator.mdc#Bug-Fix-Test-Example-(E2E-Test)"
}
},
"features": {
"required": true,
"tests": {
"unit": {
"required": true,
"description": "Unit tests must cover core functionality and edge cases"
},
"integration": {
"required": "for complex features",
"description": "Integration tests must verify interactions between components"
},
"e2e": {
"required": "for user-facing features",
"description": "End-to-end tests must verify the feature works in the application context"
}
}
}
}
}
}

.cursor/rules/README.md (new file)
@@ -0,0 +1,43 @@
# Appsmith Cursor Rules
This directory contains the rules that Cursor AI uses to validate and improve code quality in the Appsmith project.
## Rule Categories
- **commit/**: Rules for validating commit messages and pull requests
- `semantic-pr.md`: Guidelines for semantic pull request titles
- **quality/**: Rules for ensuring code quality
- `performance.mdc`: Rules for optimizing performance
- `pre-commit-checks.mdc`: Quality checks that run before commits
- **testing/**: Rules for test coverage and quality
- `test-generator.mdc`: Automated test generation based on code changes
- **verification/**: Rules for verifying changes and implementations
- `bug-fix-verifier.mdc`: Validation for bug fix implementations
- `feature-verifier.mdc`: Validation for feature implementations
- `workflow-validator.mdc`: Validation for development workflows
## How Rules Work
Each rule is defined in a Markdown Cursor (`.mdc`) file that includes:
1. **Metadata**: Name, description, and trigger conditions
2. **Logic**: JavaScript code that implements the rule
3. **Documentation**: Usage examples and explanations
Rules are automatically triggered based on events like:
- Creating or updating pull requests
- Modifying files
- Running specific commands in Cursor
## Using Rules
You can manually trigger rules using Cursor commands, such as:
- `validate_pr_title`: Check if a PR title follows conventions
- `verify_bug_fix`: Validate a bug fix implementation
- `generate_tests`: Generate tests for changed code
- `optimize_performance`: Analyze code for performance issues
Refer to each rule's documentation for specific usage information.

@@ -0,0 +1,174 @@
---
description:
globs:
alwaysApply: true
---
# Semantic PR Validator
```yaml
name: Semantic PR Validator
description: Validates that PR titles follow the Conventional Commits specification
author: Cursor AI
version: 1.0.0
tags:
  - git
  - pull-request
  - semantic
  - conventional-commits
activation:
  always: true
event:
  - pull_request
  - pull_request_title_edit
  - command
triggers:
  - pull_request.created
  - pull_request.edited
  - command: "validate_pr_title"
```
## Rule Definition
This rule ensures that pull request titles follow the [Conventional Commits](mdc:https://www.conventionalcommits.org) specification.
## Validation Logic
```javascript
// Function to validate PR titles against Conventional Commits spec
function validatePRTitle(title) {
// Regular expression for conventional commits format
const conventionalCommitRegex = /^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)(\([a-z0-9-_]+\))?(!)?: [a-z0-9].+$/i;
if (!conventionalCommitRegex.test(title)) {
return {
valid: false,
errors: [
"PR title doesn't follow the Conventional Commits format: type(scope): description",
"Example valid titles:",
"- feat(widget): add new table component",
"- fix: resolve login issue",
"- docs(readme): update installation instructions"
]
};
}
// Check for empty scope in parentheses
if (title.includes('()')) {
return {
valid: false,
errors: [
"Empty scope provided. Either include a scope value or remove the parentheses."
]
};
}
// Extract parts
const match = title.match(/^([a-z]+)(?:\(([a-z0-9-_]+)\))?(!)?:/i);
if (!match || !match[1]) {
return {
valid: false,
errors: [
"Failed to parse PR title format. Please follow the pattern: type(scope): description"
]
};
}
const type = match[1].toLowerCase();
// Validate type
const validTypes = ["feat", "fix", "docs", "style", "refactor",
"perf", "test", "build", "ci", "chore", "revert"];
if (!validTypes.includes(type)) {
return {
valid: false,
errors: [
`Invalid type "${type}". Valid types are: ${validTypes.join(', ')}`
]
};
}
return { valid: true };
}
// Triggered when a PR is created or the title is changed
function onPRTitleChange(prTitle) {
const validation = validatePRTitle(prTitle);
if (!validation.valid) {
return {
status: "failure",
message: "PR title doesn't follow Conventional Commits format",
details: validation.errors.join('\n'),
suggestions: [
{
label: "Fix PR title format",
description: "Update title to follow type(scope): description format"
}
]
};
}
return {
status: "success",
message: "PR title follows Conventional Commits format"
};
}
// Run on activation
function activate(context) {
// Register event handlers
context.on('pull_request.created', (event) => {
const prTitle = event.pull_request.title;
return onPRTitleChange(prTitle);
});
context.on('pull_request.edited', (event) => {
const prTitle = event.pull_request.title;
return onPRTitleChange(prTitle);
});
context.registerCommand('validate_pr_title', (args) => {
const prTitle = args.title || context.currentPR?.title;
if (!prTitle) {
return {
status: "error",
message: "No PR title provided"
};
}
return onPRTitleChange(prTitle);
});
}
// Export the functions
module.exports = {
activate,
onPRTitleChange,
validatePRTitle
};
```
## When It Runs
This rule automatically runs in the following scenarios:
- When a new pull request is created
- When a pull request title is edited
- When a developer asks for validation via Cursor command: `validate_pr_title`
## Usage Example
To validate a PR title before submitting:
1. Create a branch and make your changes
2. Prepare to create a PR
3. Use the command: `validate_pr_title` in Cursor
4. Cursor will check your title and suggest corrections if needed
## Examples of Valid PR Titles
- `feat(widgets): add new table widget capabilities`
- `fix(auth): resolve login redirect issue`
- `docs: update README with setup instructions`
- `refactor(api): simplify error handling logic`
- `chore: update dependencies to latest versions`
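These titles can be sanity-checked against the validator's core pattern, restated here as a standalone sketch:

```javascript
// Core pattern from validatePRTitle above: type, optional scope,
// optional "!", then ": " and a description.
const conventionalCommitRegex =
  /^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)(\([a-z0-9-_]+\))?(!)?: [a-z0-9].+$/i;

console.log(conventionalCommitRegex.test("feat(widgets): add new table widget capabilities")); // true
console.log(conventionalCommitRegex.test("fix(auth): resolve login redirect issue")); // true
console.log(conventionalCommitRegex.test("Added new feature")); // false
```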

@@ -0,0 +1,82 @@
# Semantic PR Guidelines for Appsmith
This guide outlines how to ensure your pull requests follow the Conventional Commits specification, which is enforced in this project using the [semantic-prs](https://github.com/Ezard/semantic-prs) GitHub app.
## Current Configuration
The project uses the following semantic PR configuration in `.github/semantic.yml`:
```yaml
# Always validate the PR title, and ignore the commits
titleOnly: true
```
This means that only the PR title needs to follow the Conventional Commits spec, and commit messages are not validated.
## Pull Request Title Format
PR titles should follow this format:
```
type(scope): description
```
### Types
Common types according to Conventional Commits:
- `feat`: A new feature
- `fix`: A bug fix
- `docs`: Documentation changes
- `style`: Changes that don't affect the code's meaning (formatting, etc.)
- `refactor`: Code changes that neither fix a bug nor add a feature
- `perf`: Performance improvements
- `test`: Adding or fixing tests
- `build`: Changes to build process, dependencies, etc.
- `ci`: Changes to CI configuration files and scripts
- `chore`: Other changes that don't modify source or test files
- `revert`: Reverts a previous commit
### Scope
The scope is optional and represents the section of the codebase affected by the change (e.g., `client`, `server`, `widgets`, `plugins`).
### Description
A brief description of the changes. Should:
- Use imperative, present tense (e.g., "add" not "added" or "adds")
- Not capitalize the first letter
- Not end with a period
## Examples of Valid PR Titles
- `feat(widgets): add new table widget capabilities`
- `fix(auth): resolve login redirect issue`
- `docs: update README with new setup instructions`
- `refactor(api): simplify error handling logic`
- `chore: update dependencies to latest versions`
## Examples of Invalid PR Titles
- `Added new feature` (missing type)
- `fix - login bug` (improper format, missing scope)
- `feat(client): Added new component.` (description should be lowercase, use the imperative mood, and not end with a period)
## Automated Validation
The semantic-prs GitHub app will automatically check your PR title when you create or update a pull request. If your PR title doesn't follow the conventions, the check will fail, and you'll need to update your title.
## Cursor Assistance
Cursor will help enforce these rules by:
1. Suggesting conventional PR titles when creating branches
2. Validating PR titles against the conventional format
3. Providing feedback on non-compliant PR titles
4. Suggesting corrections for PR titles that don't meet the requirements
## Resources
- [Conventional Commits Specification](https://www.conventionalcommits.org/)
- [semantic-prs GitHub App](https://github.com/Ezard/semantic-prs)
- [Angular Commit Message Guidelines](https://github.com/angular/angular/blob/main/CONTRIBUTING.md#commit)

.cursor/rules/index.md (new file)
@@ -0,0 +1,45 @@
# Appsmith Cursor Rules
This index provides an overview of all the rules available for Cursor AI in the Appsmith project.
## Commit Rules
- [Semantic PR Validator](commit/semantic-pr-validator.mdc): Validates that PR titles follow the Conventional Commits specification
- [Semantic PR Guidelines](commit/semantic-pr.md): Guidelines for writing semantic PR titles and commit messages
## Quality Rules
- [Performance Optimizer](quality/performance-optimizer.mdc): Analyzes code for performance issues and suggests improvements
- [Pre-commit Quality Checks](quality/pre-commit-checks.mdc): Checks code quality before commits
## Testing Rules
- [Test Generator](testing/test-generator.mdc): Automatically generates appropriate tests for code changes
## Verification Rules
- [Bug Fix Verifier](verification/bug-fix-verifier.mdc): Guides developers through proper bug fixing steps and verifies fix quality
- [Feature Verifier](verification/feature-verifier.mdc): Verifies that new features are properly implemented and tested
- [Feature Implementation Validator](verification/feature-implementation-validator.mdc): Validates that new features are completely and correctly implemented
- [Workflow Validator](verification/workflow-validator.mdc): Validates development workflows
## Available Commands
| Command | Description | Rule |
|---------|-------------|------|
| `validate_pr_title` | Validates PR title format | Semantic PR Validator |
| `verify_bug_fix` | Verifies bug fix quality | Bug Fix Verifier |
| `validate_feature` | Validates feature implementation | Feature Implementation Validator |
| `verify_feature` | Verifies feature implementation quality | Feature Verifier |
| `generate_tests` | Generates tests for code changes | Test Generator |
| `optimize_performance` | Analyzes code for performance issues | Performance Optimizer |
| `update_docs` | Updates documentation based on code changes | [Auto Update Docs](../hooks/scripts/auto-update-docs.mdc) |
## Triggering Rules
Rules can be triggered:
1. Automatically based on events (PR creation, file modification, etc.)
2. Manually via commands in Cursor
3. From CI/CD pipelines
See each rule's documentation for specific trigger conditions and parameters.

.cursor/rules/index.mdc (new file)
@@ -0,0 +1,136 @@
---
description:
globs:
alwaysApply: true
---
# Appsmith Cursor Rules
```yaml
name: Appsmith Cursor Rules
description: A comprehensive set of rules for Cursor AI to enhance development for Appsmith
author: Cursor AI
version: 1.0.0
tags:
  - appsmith
  - development
  - quality
  - verification
activation:
  always: true
event:
  - pull_request
  - file_change
  - command
triggers:
  - pull_request.created
  - pull_request.updated
  - file.created
  - file.modified
  - command: "cursor_help"
```
## Overview
This is the main entry point for Cursor AI rules for the Appsmith codebase. These rules help enforce consistent coding standards, verify bug fixes and features, generate appropriate tests, and optimize performance.
## Available Rules
### 1. [Semantic PR Validator](mdc:semantic_pr_validator.mdc)
Ensures pull request titles follow the Conventional Commits specification.
```javascript
// Example usage
const semanticPR = require('./semantic_pr_validator');
const validation = semanticPR.onPRTitleChange('feat(widgets): add new table component');
console.log(validation.status); // 'success'
```
### 2. [Bug Fix Verifier](mdc:bug_fix_verifier.mdc)
Guides developers through the proper steps to fix bugs and verifies that fixes meet quality standards.
```javascript
// Example usage
const bugFixVerifier = require('./bug_fix_verifier');
const verification = bugFixVerifier.verifyBugFix(changedFiles, testFiles, issueDetails);
console.log(verification.score); // The verification score
```
### 3. [Test Generator](mdc:test_generator.mdc)
Analyzes code changes and generates appropriate test cases to ensure proper test coverage.
```javascript
// Example usage
const testGenerator = require('./test_generator');
const testPlan = testGenerator.generateTests(changedFiles, codebase);
console.log(testPlan.summary); // 'Generated X test plans'
```
### 4. [Performance Optimizer](mdc:performance_optimizer.mdc)
Identifies performance bottlenecks in code and suggests optimizations to improve efficiency.
```javascript
// Example usage
const performanceOptimizer = require('./performance_optimizer');
const analysis = performanceOptimizer.analyzePerformance(changedFiles, codebase);
console.log(analysis.score); // The performance score
```
### 5. [Feature Implementation Validator](mdc:feature_implementation_validator.mdc)
Ensures new feature implementations meet all requirements, follow best practices, and include appropriate tests.
```javascript
// Example usage
const featureValidator = require('./feature_implementation_validator');
const validation = featureValidator.validateFeature(
implementationFiles,
codebase,
pullRequest
);
console.log(validation.score); // The feature implementation score
```
## Integration
These rules are automatically integrated into Cursor AI when working with the Appsmith codebase. They will be triggered based on context and can also be manually invoked through commands.
## Command Examples
- `validate_pr_title` - Check if a PR title follows conventional commits format
- `verify_bug_fix --pullRequest=123` - Verify a bug fix implementation
- `generate_tests --file=src/utils/helpers.js` - Generate tests for a specific file
- `optimize_performance --file=src/components/Table.tsx` - Analyze and optimize performance
- `validate_feature --pullRequest=123` - Validate a feature implementation
## Configuration
The behavior of these rules can be customized in the `.cursor/settings.json` file. For example:
```json
{
"development": {
"gitWorkflow": {
"semanticPR": {
"enabled": true,
"titleFormat": "type(scope): description",
"validTypes": [
"feat", "fix", "docs", "style", "refactor",
"perf", "test", "build", "ci", "chore", "revert"
]
}
}
}
}
```
## Documentation
For more detailed information about each rule and how to use it effectively, please refer to the individual rule files linked above.
## Activation
To activate all rules, run `cursor_help` in the command palette. This will display available commands and provide guidance on using the rules for your specific task.

---
description:
globs:
alwaysApply: true
---
# Performance Optimizer
```yaml
name: Performance Optimizer
description: Analyzes code for performance issues and suggests improvements
author: Cursor AI
version: 1.0.0
tags:
- performance
- optimization
- analysis
activation:
always: true
event:
- file_change
- pull_request
- command
triggers:
- file.modified
- pull_request.created
- pull_request.updated
- command: "optimize_performance"
```
## Rule Definition
This rule analyzes code changes for potential performance issues and suggests optimizations.
## Performance Analysis Logic
```javascript
// Main function to analyze code for performance issues
function analyzePerformance(files, codebase) {
const issues = [];
const suggestions = [];
let score = 100; // Start with perfect score
// Process each file
for (const file of files) {
const fileIssues = findPerformanceIssues(file, codebase);
if (fileIssues.length > 0) {
issues.push(...fileIssues);
// Reduce score based on severity of issues
score -= fileIssues.reduce((total, issue) => total + issue.severity, 0);
// Generate optimization suggestions
const fileSuggestions = generateOptimizationSuggestions(file, fileIssues, codebase);
suggestions.push(...fileSuggestions);
}
}
// Ensure score doesn't go below 0
score = Math.max(0, score);
return {
score,
issues,
suggestions,
summary: generatePerformanceSummary(score, issues, suggestions)
};
}
// Find performance issues in a file
function findPerformanceIssues(file, codebase) {
const issues = [];
// Check file based on its extension (anchored to the end of the path;
// a bare includes('.js') check would also match unrelated files like .json)
if (/\.(ts|tsx|js|jsx)$/.test(file.path)) {
issues.push(...findJavaScriptPerformanceIssues(file));
} else if (file.path.endsWith('.java')) {
issues.push(...findJavaPerformanceIssues(file));
} else if (file.path.endsWith('.css')) {
issues.push(...findCssPerformanceIssues(file));
}
return issues;
}
// Find performance issues in JavaScript/TypeScript files
function findJavaScriptPerformanceIssues(file) {
const issues = [];
const content = file.content;
// Check for common JavaScript performance issues
// 1. Nested loops with high complexity (O(n²) or worse)
if (/for\s*\([^)]*\)\s*\{[^}]*for\s*\([^)]*\)/g.test(content)) {
issues.push({
type: 'nested_loops',
lineNumber: findLineNumber(content, /for\s*\([^)]*\)\s*\{[^}]*for\s*\([^)]*\)/g),
description: 'Nested loops detected, which may cause O(n²) time complexity',
severity: 8,
suggestion: 'Consider refactoring to reduce time complexity, possibly using maps/sets'
});
}
// 2. Large array operations without memoization
if (/\.map\(.*\.filter\(|\.filter\(.*\.map\(/g.test(content)) {
issues.push({
type: 'chained_array_operations',
lineNumber: findLineNumber(content, /\.map\(.*\.filter\(|\.filter\(.*\.map\(/g),
description: 'Chained array operations may cause performance issues with large datasets',
severity: 5,
suggestion: 'Consider combining operations or using a different data structure'
});
}
// 3. Frequent DOM manipulations
if (/document\.getElement(s?)By|querySelector(All)?/g.test(content) &&
content.match(/document\.getElement(s?)By|querySelector(All)?/g).length > 5) {
issues.push({
type: 'frequent_dom_manipulation',
lineNumber: findLineNumber(content, /document\.getElement(s?)By|querySelector(All)?/g),
description: 'Frequent DOM manipulations can cause layout thrashing',
severity: 7,
suggestion: 'Batch DOM manipulations or use DocumentFragment'
});
}
// 4. Memory leaks in event listeners
if (/addEventListener\(/g.test(content) &&
!/removeEventListener\(/g.test(content)) {
issues.push({
type: 'potential_memory_leak',
lineNumber: findLineNumber(content, /addEventListener\(/g),
description: 'Event listener without corresponding removal may cause memory leaks',
severity: 6,
suggestion: 'Add corresponding removeEventListener calls where appropriate'
});
}
// Add more JavaScript-specific performance checks here
return issues;
}
// Find performance issues in Java files
function findJavaPerformanceIssues(file) {
const issues = [];
const content = file.content;
// Check for common Java performance issues
// 1. Inefficient string concatenation
if (/String.*\+= |String.*= .*\+ /g.test(content)) {
issues.push({
type: 'inefficient_string_concat',
lineNumber: findLineNumber(content, /String.*\+= |String.*= .*\+ /g),
description: 'Inefficient string concatenation (costly when repeated in loops)',
severity: 4,
suggestion: 'Use StringBuilder instead of string concatenation'
});
}
// 2. Unclosed resources
if (/new FileInputStream|new Connection/g.test(content) &&
!/try\s*\([^)]*\)/g.test(content)) {
issues.push({
type: 'unclosed_resources',
lineNumber: findLineNumber(content, /new FileInputStream|new Connection/g),
description: 'Resources may not be properly closed',
severity: 7,
suggestion: 'Use try-with-resources to ensure proper resource closure'
});
}
// Add more Java-specific performance checks here
return issues;
}
// Find performance issues in CSS files
function findCssPerformanceIssues(file) {
const issues = [];
const content = file.content;
// Check for common CSS performance issues
// 1. Overqualified selectors
if (/div\.[a-zA-Z0-9_-]+|ul\.[a-zA-Z0-9_-]+/g.test(content)) {
issues.push({
type: 'overqualified_selectors',
lineNumber: findLineNumber(content, /div\.[a-zA-Z0-9_-]+|ul\.[a-zA-Z0-9_-]+/g),
description: 'Overqualified selectors may impact rendering performance',
severity: 3,
suggestion: 'Use more efficient selectors, avoiding element type with class'
});
}
// 2. Universal selectors
if (/\*\s*\{/g.test(content)) {
issues.push({
type: 'universal_selector',
lineNumber: findLineNumber(content, /\*\s*\{/g),
description: 'Universal selectors can be very slow, especially in large documents',
severity: 5,
suggestion: 'Replace universal selectors with more specific ones'
});
}
// Add more CSS-specific performance checks here
return issues;
}
// Find line number for a regex match
function findLineNumber(content, regex) {
const match = content.match(regex);
if (!match) return 0;
const index = content.indexOf(match[0]);
return content.substring(0, index).split('\n').length;
}
// Generate optimization suggestions based on issues found
function generateOptimizationSuggestions(file, issues, codebase) {
const suggestions = [];
for (const issue of issues) {
const suggestion = {
file: file.path,
issue: issue.type,
description: issue.suggestion,
lineNumber: issue.lineNumber,
code: issue.suggestion // This would be actual code in a real implementation
};
suggestions.push(suggestion);
}
return suggestions;
}
// Generate a summary of the performance analysis
function generatePerformanceSummary(score, issues, suggestions) {
const criticalIssues = issues.filter(issue => issue.severity >= 7).length;
const majorIssues = issues.filter(issue => issue.severity >= 4 && issue.severity < 7).length;
const minorIssues = issues.filter(issue => issue.severity < 4).length;
return {
score,
issues: {
total: issues.length,
critical: criticalIssues,
major: majorIssues,
minor: minorIssues
},
suggestions: suggestions.length,
recommendation: getPerformanceRecommendation(score)
};
}
// Get a recommendation based on the performance score
function getPerformanceRecommendation(score) {
if (score >= 90) {
return "Code looks good performance-wise. Only minor optimizations possible.";
} else if (score >= 70) {
return "Some performance issues found. Consider addressing them before deploying.";
} else if (score >= 50) {
return "Significant performance issues detected. Optimizations strongly recommended.";
} else {
return "Critical performance issues found. The code may perform poorly in production.";
}
}
// Run on activation
function activate(context) {
// Register event handlers
context.on('file.modified', (event) => {
const file = context.getFileContent(event.file.path);
const codebase = context.getCodebase();
return analyzePerformance([file], codebase);
});
context.on('pull_request.created', (event) => {
const files = context.getPullRequestFiles(event.pullRequest.id);
const codebase = context.getCodebase();
return analyzePerformance(files, codebase);
});
context.on('pull_request.updated', (event) => {
const files = context.getPullRequestFiles(event.pullRequest.id);
const codebase = context.getCodebase();
return analyzePerformance(files, codebase);
});
context.registerCommand('optimize_performance', (args) => {
const filePath = args.file || context.getCurrentFilePath();
if (!filePath) {
return {
status: "error",
message: "No file specified"
};
}
const file = context.getFileContent(filePath);
const codebase = context.getCodebase();
return analyzePerformance([file], codebase);
});
}
// Export functions
module.exports = {
activate,
analyzePerformance,
findPerformanceIssues,
generateOptimizationSuggestions,
generatePerformanceSummary
};
```
## When It Runs
This rule can be triggered:
- When code changes might impact performance
- When a pull request is created or updated
- When a developer runs the `optimize_performance` command in Cursor
- Before deploying to production environments
## Usage Example
1. Make code changes to a file
2. Run `optimize_performance` in Cursor
3. Review the performance analysis
4. Implement the suggested optimizations
5. Re-run the analysis to confirm improvements
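As a standalone illustration of the scoring approach, the sketch below reimplements just the nested-loop check and the severity-subtraction score from the rule above; it is a simplified stand-in for the full `analyzePerformance` function, not the rule itself:

```javascript
// Simplified sketch: detect one issue class and derive a score by
// subtracting issue severities from 100, mirroring the rule above.
function quickPerfScore(content) {
  const issues = [];
  // Nested for-loops suggest O(n^2) behavior on large inputs.
  if (/for\s*\([^)]*\)\s*\{[^}]*for\s*\(/.test(content)) {
    issues.push({ type: 'nested_loops', severity: 8 });
  }
  const score = Math.max(0, 100 - issues.reduce((t, i) => t + i.severity, 0));
  return { score, issues };
}

const snippet = `
for (let i = 0; i < rows.length; i++) {
  for (let j = 0; j < cols.length; j++) { total += grid[i][j]; }
}`;
console.log(quickPerfScore(snippet).score); // 92
```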
## Performance Optimization Best Practices
### JavaScript/TypeScript
- Avoid nested loops when possible
- Use appropriate data structures (Map, Set) for lookups
- Minimize DOM manipulations
- Use event delegation instead of multiple event listeners
- Memoize expensive function calls
- Use requestAnimationFrame for animations
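The "memoize expensive function calls" advice above can be sketched as follows; `slowSquare` is a hypothetical stand-in for any pure, expensive computation:

```javascript
// Cache results of a pure single-argument function, keyed by its argument.
function memoize(fn) {
  const cache = new Map();
  return (arg) => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg);
  };
}

let calls = 0;
const slowSquare = (n) => { calls++; return n * n; }; // stand-in for expensive work
const fastSquare = memoize(slowSquare);

fastSquare(7);
fastSquare(7); // served from cache; the underlying function runs only once
console.log(calls); // 1
```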
### Java
- Use StringBuilder for string concatenation
- Use try-with-resources for proper resource management
- Avoid excessive object creation
- Choose appropriate collections (ArrayList, HashMap) based on use-case
- Use primitive types where possible instead of wrapper classes
### CSS
- Avoid universal selectors and deeply nested selectors
- Minimize the use of expensive properties (box-shadow, border-radius, etc.)
- Prefer class selectors over tag selectors
- Use CSS containment where appropriate

---
description:
globs:
alwaysApply: true
---
# Pre-Commit Quality Checks
```yaml
name: Pre-Commit Quality Checks
description: Runs quality checks similar to GitHub Actions locally before commits
author: Cursor AI
version: 1.0.0
tags:
- quality
- pre-commit
- testing
- linting
activation:
always: true
events:
- pre_commit
- command
triggers:
- pre_commit
- command: "run_quality_checks"
```
## Rule Definition
This rule runs the same quality checks locally that would normally run in CI, preventing commits that would fail the GitHub `quality-checks.yml` workflow.
## Implementation
```javascript
const { execSync } = require('child_process');
const fs = require('fs');
const path = require('path');
/**
* Determines which checks to run based on changed files
* @param {string[]} changedFiles - List of changed files
* @returns {Object} Object indicating which checks to run
*/
function determineChecksToRun(changedFiles) {
const checks = {
serverChecks: false,
clientChecks: false,
};
// Check if server files have changed
checks.serverChecks = changedFiles.some(file =>
file.startsWith('app/server/')
);
// Check if client files have changed
checks.clientChecks = changedFiles.some(file =>
file.startsWith('app/client/')
);
return checks;
}
/**
* Gets a list of changed files in the current git staging area
* @returns {string[]} List of changed files
*/
function getChangedFiles() {
try {
const output = execSync('git diff --cached --name-only', { encoding: 'utf8' });
return output.split('\n').filter(Boolean);
} catch (error) {
console.error('Error getting changed files:', error.message);
return [];
}
}
/**
* Runs client-side quality checks
* @returns {Object} Results of the checks
*/
function runClientChecks() {
const results = {
success: true,
errors: [],
output: []
};
try {
// Run client lint
console.log('Running client lint checks...');
try {
const lintOutput = execSync('cd app/client && yarn lint', { encoding: 'utf8' });
results.output.push('✅ Client lint passed');
} catch (error) {
results.success = false;
results.errors.push('Client lint failed');
results.output.push(`❌ Client lint failed: ${error.message}`);
}
// Run client unit tests
console.log('Running client unit tests...');
try {
const testOutput = execSync('cd app/client && yarn test', { encoding: 'utf8' });
results.output.push('✅ Client unit tests passed');
} catch (error) {
results.success = false;
results.errors.push('Client unit tests failed');
results.output.push(`❌ Client unit tests failed: ${error.message}`);
}
// Check for cyclic dependencies
console.log('Checking for cyclic dependencies...');
try {
const cyclicCheckOutput = execSync('cd app/client && yarn check-circular-deps', { encoding: 'utf8' });
results.output.push('✅ No cyclic dependencies found');
} catch (error) {
results.success = false;
results.errors.push('Cyclic dependencies check failed');
results.output.push(`❌ Cyclic dependencies found: ${error.message}`);
}
// Run prettier check
console.log('Running prettier check...');
try {
const prettierOutput = execSync('cd app/client && yarn prettier', { encoding: 'utf8' });
results.output.push('✅ Prettier check passed');
} catch (error) {
results.success = false;
results.errors.push('Prettier check failed');
results.output.push(`❌ Prettier check failed: ${error.message}`);
}
} catch (error) {
results.success = false;
results.errors.push(`General error in client checks: ${error.message}`);
}
return results;
}
/**
* Runs server-side quality checks
* @returns {Object} Results of the checks
*/
function runServerChecks() {
const results = {
success: true,
errors: [],
output: []
};
try {
// Run server unit tests
console.log('Running server unit tests...');
try {
const testOutput = execSync('cd app/server && ./gradlew test', { encoding: 'utf8' });
results.output.push('✅ Server unit tests passed');
} catch (error) {
results.success = false;
results.errors.push('Server unit tests failed');
results.output.push(`❌ Server unit tests failed: ${error.message}`);
}
// Run server spotless check
console.log('Running server spotless check...');
try {
const spotlessOutput = execSync('cd app/server && ./gradlew spotlessCheck', { encoding: 'utf8' });
results.output.push('✅ Server spotless check passed');
} catch (error) {
results.success = false;
results.errors.push('Server spotless check failed');
results.output.push(`❌ Server spotless check failed: ${error.message}`);
}
} catch (error) {
results.success = false;
results.errors.push(`General error in server checks: ${error.message}`);
}
return results;
}
/**
* Runs all quality checks
* @param {Object} context - The execution context
* @returns {Object} Results of the checks
*/
function runQualityChecks(context) {
console.log('Running pre-commit quality checks...');
const changedFiles = getChangedFiles();
if (!changedFiles.length) {
return {
status: 'success',
message: 'No files to check'
};
}
const checksToRun = determineChecksToRun(changedFiles);
const results = {
success: true,
output: ['Starting quality checks for staged files...'],
clientChecks: null,
serverChecks: null
};
// Run client checks if client files have changed
if (checksToRun.clientChecks) {
results.output.push('\n=== Client Checks ===');
results.clientChecks = runClientChecks();
results.output = results.output.concat(results.clientChecks.output);
results.success = results.success && results.clientChecks.success;
}
// Run server checks if server files have changed
if (checksToRun.serverChecks) {
results.output.push('\n=== Server Checks ===');
results.serverChecks = runServerChecks();
results.output = results.output.concat(results.serverChecks.output);
results.success = results.success && results.serverChecks.success;
}
// If no checks were run, note that
if (!checksToRun.clientChecks && !checksToRun.serverChecks) {
results.output.push('No client or server files were changed, skipping checks');
}
if (results.success) {
return {
status: 'success',
message: 'All quality checks passed',
details: results.output.join('\n')
};
} else {
return {
status: 'failure',
message: 'Quality checks failed',
details: results.output.join('\n')
};
}
}
/**
* Register command and hooks
* @param {Object} context - The cursor context
*/
function activate(context) {
// Register pre-commit hook
context.on('pre_commit', (event) => {
return runQualityChecks(context);
});
// Register command for manual validation
context.registerCommand('run_quality_checks', () => {
return runQualityChecks(context);
});
}
module.exports = {
activate,
runQualityChecks
};
```
## Usage
This rule runs automatically on pre-commit events. You can also manually trigger it with the command `run_quality_checks`.
### What It Checks
1. **For Client-side Changes:**
- Runs linting checks
- Runs unit tests
- Checks for cyclic dependencies
- Runs prettier formatting validation
2. **For Server-side Changes:**
- Runs unit tests
- Runs spotless formatting checks
### Behavior
- Only runs checks relevant to the files being committed (client and/or server)
- Prevents commits if any checks fail
- Provides detailed output about which checks passed or failed
### Requirements
- Node.js and yarn for client-side checks
- Java and Gradle for server-side checks
- Git for determining changed files
### Customization
You can customize which checks are run by modifying the `runClientChecks` and `runServerChecks` functions.
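The path-based dispatch that decides which suites run can be exercised standalone; this sketch mirrors the `determineChecksToRun` logic shown above:

```javascript
// Decide which check suites apply based on the staged file paths.
function determineChecksToRun(changedFiles) {
  return {
    serverChecks: changedFiles.some((f) => f.startsWith('app/server/')),
    clientChecks: changedFiles.some((f) => f.startsWith('app/client/')),
  };
}

const staged = ['app/client/src/widgets/Table.tsx', 'README.md'];
console.log(determineChecksToRun(staged)); // { serverChecks: false, clientChecks: true }
```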
### Example Output
```
Running pre-commit quality checks...
Starting quality checks for staged files...
=== Client Checks ===
✅ Client lint passed
✅ Client unit tests passed
✅ No cyclic dependencies found
✅ Prettier check passed
=== Server Checks ===
✅ Server unit tests passed
✅ Server spotless check passed
```

---
description:
globs:
alwaysApply: true
---
# Test Generator
```yaml
name: Test Generator
description: Automatically generates appropriate tests for code changes
author: Cursor AI
version: 1.0.0
tags:
- testing
- automation
- quality
activation:
always: true
event:
- file_change
- command
triggers:
- file.created
- file.modified
- command: "generate_tests"
```
## Rule Definition
This rule analyzes code changes and generates appropriate test cases to ensure proper test coverage.
## Test Generation Logic
```javascript
// Main function to generate tests for code changes
function generateTests(files, codebase) {
const testPlans = [];
// Process each changed file
for (const file of files) {
if (shouldGenerateTestsFor(file)) {
const testPlan = createTestPlan(file, codebase);
testPlans.push(testPlan);
}
}
return {
testPlans,
summary: `Generated ${testPlans.length} test plans`
};
}
// Determine if we should generate tests for a file
function shouldGenerateTestsFor(file) {
// Skip test files, configuration files, etc.
if (file.path.includes('.test.') || file.path.includes('.spec.')) {
return false;
}
// Skip certain file types
const skipExtensions = ['.md', '.json', '.yml', '.yaml', '.svg', '.png', '.jpg'];
if (skipExtensions.some(ext => file.path.endsWith(ext))) {
return false;
}
return true;
}
// Create a test plan for the file
function createTestPlan(file, codebase) {
const testType = determineTestType(file);
const testCases = analyzeFileForTestCases(file, codebase);
return {
sourceFile: file.path,
testType,
testFile: generateTestFilePath(file, testType),
testCases,
testFramework: selectTestFramework(file)
};
}
// Determine the appropriate type of test
function determineTestType(file) {
if (file.path.includes('app/client')) {
if (file.path.includes('/components/')) {
return 'component';
} else if (file.path.includes('/utils/')) {
return 'unit';
} else if (file.path.includes('/api/')) {
return 'integration';
}
return 'unit';
} else if (file.path.includes('app/server')) {
if (file.path.includes('/controllers/')) {
return 'controller';
} else if (file.path.includes('/services/')) {
return 'service';
} else if (file.path.includes('/repositories/')) {
return 'repository';
}
return 'unit';
}
return 'unit'; // Default
}
// Analyze file to determine test cases needed
function analyzeFileForTestCases(file, codebase) {
// This would contain complex analysis of the file
// to determine appropriate test cases
const testCases = [];
// Example test cases for different file types
if (file.path.includes('.tsx') || file.path.includes('.jsx')) {
testCases.push(
{ type: 'render', description: 'should render correctly' },
{ type: 'props', description: 'should handle props correctly' },
{ type: 'interaction', description: 'should handle user interactions' }
);
} else if (file.path.includes('.java')) {
testCases.push(
{ type: 'normal', description: 'should execute successfully with valid input' },
{ type: 'exception', description: 'should handle exceptions with invalid input' }
);
}
return testCases;
}
// Generate path for the test file
function generateTestFilePath(file, testType) {
if (file.path.includes('app/client')) {
const basePath = file.path.replace(/\.(ts|tsx|js|jsx)$/, '');
return `${basePath}.test.${file.path.split('.').pop()}`;
} else if (file.path.includes('app/server')) {
return file.path.replace(/\.java$/, 'Test.java');
}
return file.path + '.test';
}
// Select appropriate test framework
function selectTestFramework(file) {
if (file.path.includes('app/client')) {
if (file.path.includes('/cypress/')) {
return 'cypress';
}
return 'jest';
} else if (file.path.includes('app/server')) {
return 'junit';
}
return 'jest'; // Default
}
// Generate actual test code based on the test plan
function generateTestCode(testPlan) {
// This would create the actual test code based on the framework and test cases
// This is a placeholder that would contain complex logic to generate tests
return "// Generated test code would go here";
}
// Run on activation
function activate(context) {
// Register event handlers
context.on('file.created', (event) => {
const file = context.getFileContent(event.file.path);
const codebase = context.getCodebase();
return generateTests([file], codebase);
});
context.on('file.modified', (event) => {
const file = context.getFileContent(event.file.path);
const codebase = context.getCodebase();
return generateTests([file], codebase);
});
context.registerCommand('generate_tests', (args) => {
const filePath = args.file || context.getCurrentFilePath();
if (!filePath) {
return {
status: "error",
message: "No file specified"
};
}
const file = context.getFileContent(filePath);
const codebase = context.getCodebase();
return generateTests([file], codebase);
});
}
// Export functions
module.exports = {
activate,
generateTests,
generateTestCode,
shouldGenerateTestsFor,
createTestPlan
};
```
## When It Runs
This rule can be triggered:
- When code changes are made and tests need to be created
- When a new file is created
- When an existing file is modified
- When a developer runs the `generate_tests` command in Cursor
- When submitting a PR with code changes that lack tests
## Usage Example
1. Make code changes to a file
2. Run `generate_tests` in Cursor
3. Review the generated test plan
4. Accept or modify the suggested tests
5. Run the tests to verify your changes
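The test-file naming convention can be exercised standalone; this sketch mirrors the `generateTestFilePath` logic above, taking a path string directly for simplicity:

```javascript
// Map a source file path to its conventional test file path,
// following the client/server conventions used by the rule above.
function generateTestFilePath(path) {
  if (path.includes('app/client')) {
    const ext = path.split('.').pop();
    return path.replace(/\.(ts|tsx|js|jsx)$/, '') + `.test.${ext}`;
  }
  if (path.includes('app/server')) {
    return path.replace(/\.java$/, 'Test.java');
  }
  return path + '.test';
}

console.log(generateTestFilePath('app/client/src/utils/helpers.ts'));
// app/client/src/utils/helpers.test.ts
console.log(generateTestFilePath('app/server/src/UserService.java'));
// app/server/src/UserServiceTest.java
```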
## Test Generation Best Practices
### Frontend Tests
For React components, tests should typically verify:
- Component renders without crashing
- Props are correctly handled
- User interactions work as expected
- Edge cases are handled properly
Example Jest test for a React component:
```jsx
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import Button from './Button';
describe('Button component', () => {
it('renders correctly', () => {
render(<Button label="Click me" />);
expect(screen.getByText('Click me')).toBeInTheDocument();
});
it('calls onClick handler when clicked', () => {
const handleClick = jest.fn();
render(<Button label="Click me" onClick={handleClick} />);
fireEvent.click(screen.getByText('Click me'));
expect(handleClick).toHaveBeenCalledTimes(1);
});
});
```
### Backend Tests
For Java services, tests should typically verify:
- Methods return expected results for valid inputs
- Proper exception handling for invalid inputs
- Business logic is correctly implemented
- Edge cases are handled properly
Example JUnit test for a Java service:
```java
@RunWith(SpringRunner.class)
@SpringBootTest
public class UserServiceTest {
@Autowired
private UserService userService;
@Test
public void testCreateUser_ValidInput_ReturnsCreatedUser() {
User user = new User("test@example.com", "password");
User result = userService.createUser(user).block();
assertNotNull(result);
assertNotNull(result.getId());
assertEquals("test@example.com", result.getEmail());
}
@Test
public void testCreateUser_DuplicateEmail_ThrowsException() {
User user = new User("existing@example.com", "password");
// First creation should succeed
userService.createUser(user).block();
// Second attempt with same email should fail
StepVerifier.create(userService.createUser(user))
.expectError(DuplicateUserException.class)
.verify();
}
}
```
---
description:
globs:
alwaysApply: true
---
# Bug Fix Verifier
```yaml
name: Bug Fix Verifier
description: Guides developers through proper bug fixing steps and verifies fix quality
author: Cursor AI
version: 1.0.0
tags:
- bug
- fixes
- verification
- testing
activation:
always: true
event:
- pull_request
- command
- file_change
triggers:
- pull_request.created
- pull_request.updated
- pull_request.labeled:bug
- pull_request.labeled:fix
- command: "verify_bug_fix"
```
## Rule Definition
This rule guides developers through the proper steps to fix bugs and verifies that the fix meets quality standards.
## Bug Fix Verification Logic
```javascript
// Main function to verify bug fixes
function verifyBugFix(files, tests, issue) {
const results = {
reproduction: checkReproduction(issue),
testCoverage: checkTestCoverage(files, tests),
implementation: checkImplementation(files, issue),
regressionTesting: checkRegressionTesting(tests),
performance: checkPerformanceImplications(files),
score: 0,
issues: [],
recommendations: []
};
// Calculate overall score
results.score = calculateScore(results);
// Generate issues and recommendations
results.issues = identifyIssues(results);
results.recommendations = generateRecommendations(results.issues);
return {
...results,
summary: generateSummary(results)
};
}
// Check if the bug is properly reproduced in tests
function checkReproduction(issue) {
const results = {
hasReproductionSteps: false,
hasReproductionTest: false,
clearStepsToReproduce: false,
missingElements: []
};
if (!issue) {
results.missingElements.push('issue reference');
return results;
}
// Check if there are clear steps to reproduce
results.hasReproductionSteps =
issue.description &&
(issue.description.includes('Steps to reproduce') ||
issue.description.includes('Reproduction steps'));
if (!results.hasReproductionSteps) {
results.missingElements.push('clear reproduction steps');
}
// Check if reproduction steps are clear
if (results.hasReproductionSteps) {
const stepsSection = extractReproductionSteps(issue.description);
results.clearStepsToReproduce = stepsSection && stepsSection.split('\n').length >= 3;
}
if (!results.clearStepsToReproduce) {
results.missingElements.push('detailed reproduction steps');
}
// Check if there's a test that reproduces the bug
results.hasReproductionTest = issue.tests && issue.tests.some(test =>
test.includes('test') && test.includes('reproduce')
);
if (!results.hasReproductionTest) {
results.missingElements.push('test that reproduces the bug');
}
return results;
}
// Check test coverage of the bug fix
function checkTestCoverage(files, tests) {
const results = {
hasTestsForFix: false,
testsVerifyFix: false,
hasRegressionTests: false,
hasUnitTests: false,
hasE2ETests: false,
testQuality: 0,
missingTests: []
};
if (!tests || tests.length === 0) {
results.missingTests.push('any tests for this fix');
return results;
}
// Check if there are tests for the fix
results.hasTestsForFix = true;
// Check if tests verify the fix
results.testsVerifyFix = tests.some(test =>
(test.includes('assert') || test.includes('expect')) &&
!test.includes('.skip') &&
!test.includes('.todo')
);
if (!results.testsVerifyFix) {
results.missingTests.push('tests that verify the fix works');
}
// Check for regression tests
results.hasRegressionTests = tests.some(test =>
test.includes('regression') ||
test.includes('should not') ||
test.includes('should still')
);
if (!results.hasRegressionTests) {
results.missingTests.push('regression tests');
}
// Check for unit tests
results.hasUnitTests = tests.some(test =>
test.includes('.test.') ||
test.includes('Test.java') ||
test.includes('__tests__')
);
if (!results.hasUnitTests) {
results.missingTests.push('unit tests to verify the specific fix');
}
// Check for end-to-end tests for user-facing changes
const isUserFacingChange = files.some(file =>
file.path.includes('/components/') ||
file.path.includes('/pages/') ||
file.path.includes('/ui/') ||
file.path.includes('/views/')
);
if (isUserFacingChange) {
results.hasE2ETests = tests.some(test =>
test.includes('/e2e/') ||
test.includes('/cypress/')
);
if (!results.hasE2ETests) {
results.missingTests.push('end-to-end tests for this user-facing change');
}
}
// Evaluate test quality (improved)
let qualityScore = 0;
if (results.hasTestsForFix) qualityScore += 20;
if (results.testsVerifyFix) qualityScore += 25;
if (results.hasRegressionTests) qualityScore += 20;
if (results.hasUnitTests) qualityScore += 20;
if (results.hasE2ETests || !isUserFacingChange) qualityScore += 15;
results.testQuality = qualityScore;
return results;
}
// Check the quality of the implementation
function checkImplementation(files, issue) {
const results = {
addressesRootCause: false,
isMinimalChange: false,
hasNoHardcodedValues: true,
followsGoodPractices: true,
concerns: []
};
if (!files || files.length === 0) {
results.concerns.push('no implementation files found');
return results;
}
// Check if the implementation addresses the root cause
if (issue && issue.title) {
const keywords = extractKeywords(issue.title);
const filesContent = files.map(file => file.content).join(' ');
results.addressesRootCause = keywords.some(keyword =>
filesContent.includes(keyword)
);
}
if (!results.addressesRootCause) {
results.concerns.push('may not address the root cause');
}
// Check if changes are minimal
const totalChangedLines = files.reduce((sum, file) => {
return sum + countChangedLines(file);
}, 0);
results.isMinimalChange = totalChangedLines < 50;
if (!results.isMinimalChange) {
results.concerns.push('changes are not minimal');
}
// Check for hardcoded values
const hardcodedPattern = /'[a-zA-Z0-9]{10,}'/;
results.hasNoHardcodedValues = !files.some(file =>
hardcodedPattern.test(file.content)
);
if (!results.hasNoHardcodedValues) {
results.concerns.push('contains hardcoded values');
}
// Check for unsafe property access in Redux/React applications
const unsafePropertyAccess = files.some(file => {
// Check if this is a Redux/React file
const isReduxReactFile = file.path.includes('.jsx') ||
file.path.includes('.tsx') ||
file.content.includes('import { useSelector }') ||
file.content.includes('import { connect }');
if (!isReduxReactFile) return false;
// Check for potentially unsafe deep property access
const hasOptionalChaining = file.content.includes('?.');
const hasObjectChaining = /\w+\.\w+\.\w+/.test(file.content);
const usesLodashGet = file.content.includes('import get from') ||
file.content.includes('lodash/get');
// Flag chained property access that uses neither optional chaining nor lodash get
return (hasObjectChaining && !hasOptionalChaining && !usesLodashGet);
});
if (unsafePropertyAccess) {
results.followsGoodPractices = false;
results.concerns.push('contains unsafe nested property access, consider using lodash/get or optional chaining');
}
// Check for good practices
const badPractices = [
{ pattern: /\/\/ TODO:/, message: 'contains TODO comments' },
{ pattern: /console\.log\(/, message: 'contains debug logging' },
{ pattern: /Thread\.sleep\(/, message: 'contains blocking calls' },
{ pattern: /alert\(/, message: 'contains alert() calls' }
];
badPractices.forEach(practice => {
if (files.some(file => practice.pattern.test(file.content))) {
results.followsGoodPractices = false;
results.concerns.push(practice.message);
}
});
return results;
}
// Check regression testing
function checkRegressionTesting(tests) {
const results = {
hasRegressionTests: false,
coversRelatedFunctionality: false,
hasEdgeCaseTests: false,
missingTestAreas: []
};
if (!tests || tests.length === 0) {
results.missingTestAreas.push('regression tests');
results.missingTestAreas.push('related functionality tests');
results.missingTestAreas.push('edge case tests');
return results;
}
// Check for regression tests
results.hasRegressionTests = tests.some(test =>
test.includes('regression') ||
test.includes('should not') ||
test.includes('should still')
);
if (!results.hasRegressionTests) {
results.missingTestAreas.push('regression tests');
}
// Check if tests cover related functionality
results.coversRelatedFunctionality = tests.some(test =>
test.includes('related') ||
test.includes('integration') ||
test.includes('with') ||
test.includes('when used')
);
if (!results.coversRelatedFunctionality) {
results.missingTestAreas.push('tests for related functionality');
}
// Check for edge case tests
results.hasEdgeCaseTests = tests.some(test =>
test.includes('edge case') ||
test.includes('boundary') ||
test.includes('limit') ||
test.includes('extreme')
);
if (!results.hasEdgeCaseTests) {
results.missingTestAreas.push('edge case tests');
}
return results;
}
// Check performance implications of the fix
function checkPerformanceImplications(files) {
const results = {
noRegressions: true,
analyzedPerformance: false,
potentialIssues: []
};
if (!files || files.length === 0) {
return results;
}
// Check for performance regressions
// Note: no /g flag here; a persistent global regex carries lastIndex state across test() calls
const regressionPatterns = [
{ pattern: /for\s*\([^)]*\)\s*\{[^}]*for\s*\([^)]*\)/, message: 'nested loops may cause performance issues' },
{ pattern: /Thread\.sleep\(|setTimeout\(/, message: 'sleep or timer-based delays may affect responsiveness' },
{ pattern: /new [A-Z][a-zA-Z0-9]*\(.*\)/, message: 'excessive object creation may affect memory usage' }
];
regressionPatterns.forEach(pattern => {
if (files.some(file => pattern.pattern.test(file.content))) {
results.noRegressions = false;
results.potentialIssues.push(pattern.message);
}
});
// Check if performance was analyzed
results.analyzedPerformance = files.some(file =>
file.content.includes('performance') ||
file.content.includes('benchmark') ||
file.content.includes('optimize')
);
return results;
}
// Calculate overall score for the bug fix
function calculateScore(results) {
let score = 100;
// Deduct for missing reproduction elements
score -= results.reproduction.missingElements.length * 10;
// Deduct for missing tests
score -= results.testCoverage.missingTests.length * 15;
// Deduct for implementation concerns
score -= results.implementation.concerns.length * 10;
// Deduct for missing regression test areas
score -= results.regressionTesting.missingTestAreas.length * 5;
// Deduct for performance issues
score -= results.performance.potentialIssues.length * 8;
return Math.max(0, Math.round(score));
}
// Identify issues from all verification checks
function identifyIssues(results) {
const issues = [];
// Add reproduction issues
results.reproduction.missingElements.forEach(element => {
issues.push({
type: 'reproduction',
severity: 'high',
message: `Missing ${element}`
});
});
// Add test coverage issues
results.testCoverage.missingTests.forEach(test => {
issues.push({
type: 'testing',
severity: 'high',
message: `Missing ${test}`
});
});
// Add implementation issues
results.implementation.concerns.forEach(concern => {
issues.push({
type: 'implementation',
severity: 'medium',
message: `Implementation ${concern}`
});
});
// Add regression testing issues
results.regressionTesting.missingTestAreas.forEach(area => {
issues.push({
type: 'regression',
severity: 'medium',
message: `Missing ${area}`
});
});
// Add performance issues
results.performance.potentialIssues.forEach(issue => {
issues.push({
type: 'performance',
severity: 'medium',
message: `Performance concern: ${issue}`
});
});
return issues;
}
// Generate recommendations based on identified issues
function generateRecommendations(issues) {
const recommendations = [];
// Group issues by type
const issuesByType = {};
issues.forEach(issue => {
if (!issuesByType[issue.type]) {
issuesByType[issue.type] = [];
}
issuesByType[issue.type].push(issue);
});
// Generate recommendations for reproduction issues
if (issuesByType.reproduction) {
recommendations.push({
type: 'reproduction',
title: 'Document and reproduce the bug',
steps: [
'Add clear, numbered reproduction steps to the issue description',
'Write a test that reproduces the bug before applying the fix',
'Verify the reproduction is reliable and consistent'
]
});
}
// Generate recommendations for testing issues
if (issuesByType.testing) {
recommendations.push({
type: 'testing',
title: 'Improve test coverage',
steps: [
'Add unit tests that verify the specific fix',
'Create integration tests where appropriate',
'Ensure all edge cases are covered in tests'
]
});
}
// Generate recommendations for implementation issues
if (issuesByType.implementation) {
recommendations.push({
type: 'implementation',
title: 'Address implementation concerns',
steps: [
'Remove debugging code (console.log, TODO comments)',
'Keep the change minimal and focused on the root cause',
'Use proper data access methods like lodash/get for deeply nested objects',
'Consider data nullability and use optional chaining or default values'
]
});
}
// Generate recommendations for regression issues
if (issuesByType.regression) {
recommendations.push({
type: 'regression',
title: 'Strengthen regression coverage',
steps: [
'Add regression tests that pin existing behavior',
'Test related functionality the fix might affect',
'Add edge case and boundary condition tests'
]
});
}
// Generate recommendations for performance issues
if (issuesByType.performance) {
recommendations.push({
type: 'performance',
title: 'Review performance implications',
steps: [
'Benchmark the changed code paths',
'Avoid nested loops and blocking calls in hot paths',
'Reduce unnecessary object creation'
]
});
}
return recommendations;
}
// Generate a summary of the verification results
function generateSummary(results) {
const score = results.score;
let status = '';
if (score >= 90) {
status = 'EXCELLENT';
} else if (score >= 70) {
status = 'GOOD';
} else if (score >= 50) {
status = 'NEEDS IMPROVEMENT';
} else {
status = 'POOR';
}
return {
score,
status,
issues: results.issues.length,
critical: results.issues.filter(issue => issue.severity === 'high').length,
recommendations: results.recommendations.length,
message: generateSummaryMessage(score, status, results)
};
}
// Generate a summary message based on results
function generateSummaryMessage(score, status, results) {
if (status === 'EXCELLENT') {
return 'The bug fix meets or exceeds all quality standards. Good job!';
} else if (status === 'GOOD') {
return `The bug fix is good overall but has ${results.issues.length} issues to address.`;
} else if (status === 'NEEDS IMPROVEMENT') {
const critical = results.issues.filter(issue => issue.severity === 'high').length;
return `The bug fix needs significant improvement with ${critical} critical issues.`;
} else {
return 'The bug fix is incomplete and does not meet minimum quality standards.';
}
}
// Helper function to extract reproduction steps from issue description
function extractReproductionSteps(description) {
if (!description) return null;
const stepSectionMarkers = [
'Steps to reproduce',
'Reproduction steps',
'To reproduce',
'How to reproduce'
];
for (const marker of stepSectionMarkers) {
const markerIndex = description.indexOf(marker);
if (markerIndex >= 0) {
const startIndex = markerIndex + marker.length;
let endIndex = description.indexOf('\n\n', startIndex);
if (endIndex < 0) endIndex = description.length;
return description.substring(startIndex, endIndex).trim();
}
}
return null;
}
// Helper function to extract keywords from issue title
function extractKeywords(title) {
if (!title) return [];
// Remove common words
const commonWords = ['a', 'an', 'the', 'in', 'on', 'at', 'to', 'for', 'with', 'by', 'is'];
let words = title.split(/\s+/)
.map(word => word.toLowerCase().replace(/[^\w]/g, ''))
.filter(word => word.length > 2 && !commonWords.includes(word));
return [...new Set(words)]; // Remove duplicates
}
// Helper function to count changed lines in a file
function countChangedLines(file) {
if (!file.diff) return file.content.split('\n').length;
let changedLines = 0;
const diffLines = file.diff.split('\n');
for (const line of diffLines) {
if (line.startsWith('+') || line.startsWith('-')) {
changedLines++;
}
}
return changedLines;
}
// Run on activation
function activate(context) {
// Register event handlers
context.on('pull_request.created', (event) => {
const files = context.getPullRequestFiles(event.pullRequest.id);
const tests = context.getPullRequestTests(event.pullRequest.id);
const issue = context.getLinkedIssue(event.pullRequest);
return verifyBugFix(files, tests, issue);
});
context.on('pull_request.updated', (event) => {
const files = context.getPullRequestFiles(event.pullRequest.id);
const tests = context.getPullRequestTests(event.pullRequest.id);
const issue = context.getLinkedIssue(event.pullRequest);
return verifyBugFix(files, tests, issue);
});
context.on('pull_request.labeled:bug', (event) => {
const files = context.getPullRequestFiles(event.pullRequest.id);
const tests = context.getPullRequestTests(event.pullRequest.id);
const issue = context.getLinkedIssue(event.pullRequest);
return verifyBugFix(files, tests, issue);
});
context.on('pull_request.labeled:fix', (event) => {
const files = context.getPullRequestFiles(event.pullRequest.id);
const tests = context.getPullRequestTests(event.pullRequest.id);
const issue = context.getLinkedIssue(event.pullRequest);
return verifyBugFix(files, tests, issue);
});
context.registerCommand('verify_bug_fix', (args) => {
const prId = args.pullRequest;
if (!prId) {
return {
status: "error",
message: "No pull request specified"
};
}
const files = context.getPullRequestFiles(prId);
const tests = context.getPullRequestTests(prId);
const issue = context.getLinkedIssue({id: prId});
return verifyBugFix(files, tests, issue);
});
}
// Export functions
module.exports = {
activate,
verifyBugFix,
checkReproduction,
checkTestCoverage,
checkImplementation,
checkRegressionTesting,
checkPerformanceImplications
};
```
## When It Runs
This rule can be triggered:
- When a bug fix pull request is created
- When a pull request is updated
- When a pull request is labeled with 'bug' or 'fix'
- When a developer runs the `verify_bug_fix` command in Cursor
- Before committing changes meant to fix a bug
## Usage Example
1. Create a pull request for a bug fix
2. Run `verify_bug_fix --pullRequest=123` in Cursor
3. Review the verification results
4. Address any identified issues
5. Re-run verification to confirm all issues are resolved
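To make the grading concrete, here is a small worked example of the deduction weights used by `calculateScore` above, applied to a hypothetical verification result (the counts are made up for illustration):

```javascript
// Worked example of the deduction weights in calculateScore above,
// applied to hypothetical issue counts.
const missing = {
  reproductionElements: 1,   // -10 each
  tests: 1,                  // -15 each
  implementationConcerns: 1, // -10 each
  regressionAreas: 0,        // -5 each
  performanceIssues: 0       // -8 each
};
const score = Math.max(0, Math.round(
  100 -
  missing.reproductionElements * 10 -
  missing.tests * 15 -
  missing.implementationConcerns * 10 -
  missing.regressionAreas * 5 -
  missing.performanceIssues * 8
));
console.log(score); // 65, which falls in the NEEDS IMPROVEMENT band (50-69)
```

Even one missing test plus one implementation concern is enough to drop a fix below the GOOD threshold, which is why the checklists below matter.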
## Bug Fix Best Practices
### Reproduction Checklist
- [ ] Document clear steps to reproduce the bug
- [ ] Create a test that reproduces the bug before fixing
- [ ] Ensure the reproduction is reliable and consistent
### Fix Implementation Checklist
- [ ] Address the root cause, not just symptoms
- [ ] Make changes as minimal and focused as possible
- [ ] Avoid introducing new bugs or regressions
- [ ] Follow project coding standards and patterns
### Testing Checklist
- [ ] Verify the fix resolves the issue
- [ ] Test related functionality that might be affected
- [ ] Consider edge cases and boundary conditions
- [ ] Ensure all tests pass after the fix

---
description:
globs:
alwaysApply: true
---
# Feature Implementation Validator
```yaml
name: Feature Implementation Validator
description: Validates that new features are completely and correctly implemented
author: Cursor AI
version: 1.0.0
tags:
- feature
- implementation
- quality
- validation
activation:
always: true
event:
- pull_request
- command
triggers:
- pull_request.created
- pull_request.updated
- pull_request.labeled:feature
- command: "validate_feature"
```
## Rule Definition
This rule ensures that new feature implementations meet quality standards, including proper testing, documentation, and adherence to best practices.
## Feature Validation Logic
```javascript
// Main function to validate feature implementation
function validateFeature(files, codebase, pullRequest) {
const results = {
completeness: checkCompleteness(files, codebase),
tests: checkTestCoverage(files, codebase),
documentation: checkDocumentation(files, codebase),
bestPractices: checkBestPractices(files, codebase),
accessibility: checkAccessibility(files, codebase),
score: 0,
issues: [],
recommendations: []
};
// Calculate overall score
results.score = calculateScore(results);
// Generate issues and recommendations
results.issues = identifyIssues(results);
results.recommendations = generateRecommendations(results.issues);
return {
...results,
summary: generateSummary(results)
};
}
// Check if the feature implementation is complete
function checkCompleteness(files, codebase) {
const results = {
hasImplementation: false,
hasTests: false,
hasDocumentation: false,
missingComponents: []
};
// Check for implementation files
const implementationFiles = files.filter(file => {
return !file.path.includes('.test.') &&
!file.path.includes('.spec.') &&
!file.path.includes('docs/') &&
!file.path.endsWith('.md');
});
results.hasImplementation = implementationFiles.length > 0;
// Check for test files
const testFiles = files.filter(file => {
return file.path.includes('.test.') || file.path.includes('.spec.');
});
results.hasTests = testFiles.length > 0;
// Check for documentation
const docFiles = files.filter(file => {
return file.path.includes('docs/') || file.path.endsWith('.md');
});
results.hasDocumentation = docFiles.length > 0;
// Identify missing components
if (!results.hasImplementation) {
results.missingComponents.push('implementation');
}
if (!results.hasTests) {
results.missingComponents.push('tests');
}
if (!results.hasDocumentation) {
results.missingComponents.push('documentation');
}
// Check for missing components based on feature type
const featureType = identifyFeatureType(files);
if (featureType === 'ui' && !hasUiComponents(files)) {
results.missingComponents.push('UI components');
}
if (featureType === 'api' && !hasApiEndpoints(files)) {
results.missingComponents.push('API endpoints');
}
return results;
}
// Check test coverage of the feature
function checkTestCoverage(files, codebase) {
const results = {
hasFunctionalTests: false,
hasUnitTests: false,
hasIntegrationTests: false,
coverage: 0,
untested: []
};
// Get all non-test implementation files
const implFiles = files.filter(file => {
return !file.path.includes('.test.') &&
!file.path.includes('.spec.') &&
!file.path.endsWith('.md');
});
// Check for different test types
const testFiles = files.filter(file => {
return file.path.includes('.test.') || file.path.includes('.spec.');
});
results.hasFunctionalTests = testFiles.some(file =>
file.content.includes('test(') &&
(file.content.includes('render(') || file.content.includes('fireEvent'))
);
results.hasUnitTests = testFiles.some(file =>
file.content.includes('test(') &&
!file.content.includes('render(')
);
results.hasIntegrationTests = testFiles.some(file =>
file.content.includes('describe(') &&
file.content.includes('integration')
);
// Calculate rough coverage
let testedFunctions = 0;
let totalFunctions = 0;
implFiles.forEach(file => {
const functions = extractFunctions(file.content);
totalFunctions += functions.length;
functions.forEach(func => {
// Check if function is tested in any test file
const isTested = testFiles.some(testFile =>
testFile.content.includes(func.name)
);
if (isTested) {
testedFunctions++;
} else {
results.untested.push(func.name);
}
});
});
results.coverage = totalFunctions ? (testedFunctions / totalFunctions) * 100 : 0;
return results;
}
// Check if documentation is complete
function checkDocumentation(files, codebase) {
const results = {
hasUserDocs: false,
hasDeveloperDocs: false,
hasApiDocs: false,
missingDocs: []
};
// Check for documentation files
const docFiles = files.filter(file => {
return file.path.includes('docs/') || file.path.endsWith('.md');
});
// Check for user documentation
results.hasUserDocs = docFiles.some(file =>
file.content.includes('user') ||
file.content.includes('guide') ||
file.path.includes('user')
);
// Check for developer documentation
results.hasDeveloperDocs = docFiles.some(file =>
file.content.includes('developer') ||
file.content.includes('implementation') ||
file.path.includes('dev')
);
// Check for API documentation
results.hasApiDocs = docFiles.some(file =>
file.content.includes('API') ||
file.content.includes('endpoint') ||
file.path.includes('api')
);
// Identify missing documentation
if (!results.hasUserDocs) {
results.missingDocs.push('user documentation');
}
if (!results.hasDeveloperDocs) {
results.missingDocs.push('developer documentation');
}
const hasApiCode = files.some(file =>
file.path.includes('api') ||
file.content.includes('axios') ||
file.content.includes('fetch')
);
if (hasApiCode && !results.hasApiDocs) {
results.missingDocs.push('API documentation');
}
return results;
}
// Check adherence to best practices
function checkBestPractices(files, codebase) {
const results = {
followsNamingConventions: true,
followsArchitecture: true,
hasCleanCode: true,
violations: []
};
// Check for naming convention violations
files.forEach(file => {
if (file.path.includes('.tsx') || file.path.includes('.jsx')) {
// React component should use PascalCase
const filename = file.path.split('/').pop().split('.')[0];
if (!/^[A-Z][a-zA-Z0-9]*$/.test(filename)) {
results.followsNamingConventions = false;
results.violations.push(`React component "${filename}" should use PascalCase`);
}
}
if (file.path.includes('.java')) {
// Java classes should use PascalCase
const filename = file.path.split('/').pop().split('.')[0];
if (!/^[A-Z][a-zA-Z0-9]*$/.test(filename)) {
results.followsNamingConventions = false;
results.violations.push(`Java class "${filename}" should use PascalCase`);
}
}
});
// Check for architectural violations
files.forEach(file => {
if (file.path.includes('app/client/components') && file.content.includes('fetch(')) {
results.followsArchitecture = false;
results.violations.push('Components should not make API calls directly, use services instead');
}
if (file.path.includes('app/server/controllers') && file.content.includes('Repository')) {
results.followsArchitecture = false;
results.violations.push('Controllers should not access repositories directly, use services instead');
}
});
// Check for clean code issues
files.forEach(file => {
// Check for long functions (more than 50 lines)
const functions = extractFunctions(file.content);
functions.forEach(func => {
if (func.lines > 50) {
results.hasCleanCode = false;
results.violations.push(`Function "${func.name}" is too long (${func.lines} lines)`);
}
});
// Check for high complexity (nested conditionals)
if (/if\s*\([^)]*\)\s*\{[^{}]*if\s*\([^)]*\)/g.test(file.content)) {
results.hasCleanCode = false;
results.violations.push('Nested conditionals detected, consider refactoring');
}
// Check for commented-out code
if (/\/\/\s*[a-zA-Z0-9]+.*\(.*\).*\{/g.test(file.content)) {
results.hasCleanCode = false;
results.violations.push('Commented-out code detected, remove or refactor');
}
});
return results;
}
// Check accessibility (for UI features)
function checkAccessibility(files, codebase) {
const results = {
hasA11yAttributes: false,
hasKeyboardNavigation: false,
hasSemanticsHtml: false,
issues: []
};
// Only check UI files
const uiFiles = files.filter(file => {
return (file.path.includes('.tsx') || file.path.includes('.jsx')) &&
file.path.includes('component');
});
if (uiFiles.length === 0) {
// Not a UI feature, mark as not applicable
return {
notApplicable: true
};
}
// Check for accessibility attributes
results.hasA11yAttributes = uiFiles.some(file =>
file.content.includes('aria-') ||
file.content.includes('role=')
);
if (!results.hasA11yAttributes) {
results.issues.push('No ARIA attributes found in UI components');
}
// Check for keyboard navigation
results.hasKeyboardNavigation = uiFiles.some(file =>
file.content.includes('onKeyDown') ||
file.content.includes('onKeyPress')
);
if (!results.hasKeyboardNavigation) {
results.issues.push('No keyboard navigation handlers found');
}
// Check for semantic HTML
results.hasSemanticsHtml = uiFiles.some(file =>
file.content.includes('<nav') ||
file.content.includes('<main') ||
file.content.includes('<section') ||
file.content.includes('<article') ||
file.content.includes('<aside') ||
file.content.includes('<header') ||
file.content.includes('<footer')
);
if (!results.hasSemanticsHtml) {
results.issues.push('No semantic HTML elements found');
}
return results;
}
// Calculate overall score based on all checks
function calculateScore(results) {
let score = 100;
// Deduct for missing components
score -= results.completeness.missingComponents.length * 15;
// Deduct for low test coverage
if (results.tests.coverage < 80) {
score -= (80 - results.tests.coverage) / 4;
}
// Deduct for missing documentation
score -= results.documentation.missingDocs.length * 10;
// Deduct for best practice violations
score -= results.bestPractices.violations.length * 5;
// Deduct for accessibility issues (if applicable)
if (!results.accessibility.notApplicable) {
score -= results.accessibility.issues.length * 10;
}
return Math.max(0, Math.round(score));
}
// Identify issues from all validation checks
function identifyIssues(results) {
const issues = [];
// Add missing components as issues
results.completeness.missingComponents.forEach(component => {
issues.push({
type: 'completeness',
severity: 'high',
message: `Missing ${component}`
});
});
// Add test coverage issues
if (results.tests.coverage < 80) {
issues.push({
type: 'testing',
severity: 'high',
message: `Low test coverage (${Math.round(results.tests.coverage)}%)`
});
}
results.tests.untested.forEach(func => {
issues.push({
type: 'testing',
severity: 'medium',
message: `Function "${func}" lacks tests`
});
});
// Add documentation issues
results.documentation.missingDocs.forEach(doc => {
issues.push({
type: 'documentation',
severity: 'medium',
message: `Missing ${doc}`
});
});
// Add best practice violations
results.bestPractices.violations.forEach(violation => {
issues.push({
type: 'best_practice',
severity: 'medium',
message: violation
});
});
// Add accessibility issues
if (!results.accessibility.notApplicable) {
results.accessibility.issues.forEach(issue => {
issues.push({
type: 'accessibility',
severity: 'high',
message: issue
});
});
}
return issues;
}
// Generate recommendations based on identified issues
function generateRecommendations(issues) {
const recommendations = [];
// Group issues by type
const issuesByType = {};
issues.forEach(issue => {
if (!issuesByType[issue.type]) {
issuesByType[issue.type] = [];
}
issuesByType[issue.type].push(issue);
});
// Generate recommendations for completeness issues
if (issuesByType.completeness) {
recommendations.push({
type: 'completeness',
title: 'Complete the feature implementation',
steps: issuesByType.completeness.map(issue => issue.message.replace('Missing ', 'Add '))
});
}
// Generate recommendations for testing issues
if (issuesByType.testing) {
recommendations.push({
type: 'testing',
title: 'Improve test coverage',
steps: [
'Write more unit tests for untested functions',
'Add integration tests for component interactions',
'Ensure all edge cases are covered'
]
});
}
// Generate recommendations for documentation issues
if (issuesByType.documentation) {
recommendations.push({
type: 'documentation',
title: 'Complete the documentation',
steps: issuesByType.documentation.map(issue => issue.message.replace('Missing ', 'Add '))
});
}
// Generate recommendations for best practice issues
if (issuesByType.best_practice) {
recommendations.push({
type: 'best_practice',
title: 'Follow best practices',
steps: issuesByType.best_practice.map(issue => issue.message)
});
}
// Generate recommendations for accessibility issues
if (issuesByType.accessibility) {
recommendations.push({
type: 'accessibility',
title: 'Improve accessibility',
steps: [
'Add appropriate ARIA attributes to UI components',
'Implement keyboard navigation for all interactive elements',
'Use semantic HTML elements to improve screen reader experience'
]
});
}
return recommendations;
}
// Generate a summary of the validation results
function generateSummary(results) {
const score = results.score;
let status = '';
if (score >= 90) {
status = 'EXCELLENT';
} else if (score >= 70) {
status = 'GOOD';
} else if (score >= 50) {
status = 'NEEDS IMPROVEMENT';
} else {
status = 'INCOMPLETE';
}
return {
score,
status,
issues: results.issues.length,
critical: results.issues.filter(issue => issue.severity === 'high').length,
recommendations: results.recommendations.length,
message: generateSummaryMessage(score, status, results)
};
}
// Generate a summary message based on results
function generateSummaryMessage(score, status, results) {
if (status === 'EXCELLENT') {
return 'Feature implementation meets or exceeds all quality standards. Good job!';
} else if (status === 'GOOD') {
return `Feature implementation is good overall but has ${results.issues.length} issues to address.`;
} else if (status === 'NEEDS IMPROVEMENT') {
const critical = results.issues.filter(issue => issue.severity === 'high').length;
return `Feature implementation needs significant improvement with ${critical} critical issues.`;
} else {
return 'Feature implementation is incomplete and does not meet minimum quality standards.';
}
}
// Helper function to extract functions from code
function extractFunctions(content) {
const functions = [];
// JavaScript/TypeScript functions
const jsMatches = content.match(/function\s+([a-zA-Z0-9_]+)\s*\([^)]*\)\s*\{/g) || [];
jsMatches.forEach(match => {
const name = match.match(/function\s+([a-zA-Z0-9_]+)/)[1];
const startIndex = content.indexOf(match);
const endIndex = findClosingBrace(content, startIndex + match.indexOf('{'));
const functionBody = content.substring(startIndex, endIndex + 1);
const lines = functionBody.split('\n').length;
functions.push({ name, lines });
});
// Java methods
const javaMatches = content.match(/(?:public|private|protected)\s+[a-zA-Z0-9_<>]+\s+([a-zA-Z0-9_]+)\s*\([^)]*\)\s*\{/g) || [];
javaMatches.forEach(match => {
const nameParts = match.match(/\s+([a-zA-Z0-9_]+)\s*\(/);
if (nameParts && nameParts[1]) {
const name = nameParts[1];
const startIndex = content.indexOf(match);
const endIndex = findClosingBrace(content, startIndex + match.indexOf('{'));
const functionBody = content.substring(startIndex, endIndex + 1);
const lines = functionBody.split('\n').length;
functions.push({ name, lines });
}
});
return functions;
}
// Helper function to find closing brace
function findClosingBrace(content, openBraceIndex) {
let braceCount = 1;
for (let i = openBraceIndex + 1; i < content.length; i++) {
if (content[i] === '{') {
braceCount++;
} else if (content[i] === '}') {
braceCount--;
if (braceCount === 0) {
return i;
}
}
}
return content.length - 1;
}
// Helper function to identify feature type
function identifyFeatureType(files) {
const uiFiles = files.filter(file => {
return (file.path.includes('.tsx') || file.path.includes('.jsx')) &&
(file.path.includes('component') || file.path.includes('page'));
});
const apiFiles = files.filter(file => {
return (file.path.includes('controller') || file.path.includes('service')) &&
(file.path.includes('.java') || file.path.includes('.ts'));
});
if (uiFiles.length > apiFiles.length) {
return 'ui';
} else if (apiFiles.length > 0) {
return 'api';
} else {
return 'other';
}
}
// Helper function to check for UI components
function hasUiComponents(files) {
return files.some(file => {
return (file.path.includes('.tsx') || file.path.includes('.jsx')) &&
file.path.includes('component');
});
}
// Helper function to check for API endpoints
function hasApiEndpoints(files) {
return files.some(file => {
return (file.path.includes('controller') || file.path.includes('route')) &&
(file.path.endsWith('.java') || file.path.endsWith('.ts'));
});
}
// Run on activation
function activate(context) {
// Register event handlers
context.on('pull_request.created', (event) => {
const files = context.getPullRequestFiles(event.pullRequest.id);
const codebase = context.getCodebase();
return validateFeature(files, codebase, event.pullRequest);
});
context.on('pull_request.updated', (event) => {
const files = context.getPullRequestFiles(event.pullRequest.id);
const codebase = context.getCodebase();
return validateFeature(files, codebase, event.pullRequest);
});
context.on('pull_request.labeled:feature', (event) => {
const files = context.getPullRequestFiles(event.pullRequest.id);
const codebase = context.getCodebase();
return validateFeature(files, codebase, event.pullRequest);
});
context.registerCommand('validate_feature', (args) => {
const prId = args.pullRequest;
if (!prId) {
return {
status: "error",
message: "No pull request specified"
};
}
const files = context.getPullRequestFiles(prId);
const codebase = context.getCodebase();
const pullRequest = context.getPullRequest(prId);
return validateFeature(files, codebase, pullRequest);
});
}
// Export functions
module.exports = {
activate,
validateFeature,
checkCompleteness,
checkTestCoverage,
checkDocumentation,
checkBestPractices,
checkAccessibility
};
```
## When It Runs
This rule can be triggered:
- When a new feature pull request is created
- When a pull request is updated
- When a pull request is labeled with 'feature'
- When a developer runs the `validate_feature` command in Cursor
- Before merging a feature implementation
## Usage Example
1. Create a pull request for a new feature
2. Run `validate_feature --pullRequest=123` in Cursor
3. Review the validation results
4. Address any identified issues
5. Re-run validation to confirm all issues are resolved
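The result object returned by the command can be triaged before re-running. As a minimal sketch — the `{ issues: [{ severity, message }] }` shape is an assumption based on the issue objects built by the checks above, not a documented API:

```javascript
// Sketch only: group validation issues by severity so the most serious
// problems surface first. The result shape is assumed from the issue
// objects produced by the checks above.
function groupBySeverity(result) {
  const groups = { high: [], medium: [], low: [] };
  for (const issue of result.issues || []) {
    (groups[issue.severity] || groups.low).push(issue.message);
  }
  return groups;
}

// Hypothetical sample result
const grouped = groupBySeverity({
  issues: [
    { severity: 'high', message: 'Missing tests for upload service' },
    { severity: 'medium', message: 'No documentation for FileUpload' }
  ]
});
console.log(grouped.high); // → [ 'Missing tests for upload service' ]
```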
## Feature Implementation Best Practices
### Completeness Checklist
- [ ] Implementation code
- [ ] Comprehensive tests
- [ ] User documentation
- [ ] Developer documentation
- [ ] API documentation (if applicable)
### Testing Requirements
- Unit tests for all functions/methods
- Integration tests for component interactions
- Functional tests for UI components
- Edge case coverage
- Minimum 80% test coverage
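The 80% target reduces to a simple gate; a minimal sketch, counting only files that actually require tests, as the rule's coverage calculation does:

```javascript
// Minimal sketch of the coverage gate: tested files over files that
// require tests, compared against the 80% minimum.
function meetsCoverageTarget(testedCount, totalCount, threshold = 80) {
  if (totalCount === 0) return true; // nothing requires tests
  return (testedCount / totalCount) * 100 >= threshold;
}

console.log(meetsCoverageTarget(8, 10)); // → true
console.log(meetsCoverageTarget(7, 10)); // → false
```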
### Documentation Guidelines
- **User Documentation**: Explain how to use the feature
- **Developer Documentation**: Explain how the feature is implemented
- **API Documentation**: Document endpoints, parameters, and responses
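The documentation checks look for JSDoc-style comment blocks. A minimal sketch of a documented function, with the kind of presence check involved (a simplification of the real analysis):

```javascript
// A documented function of the kind the rule expects, held in a string
// so the presence check below can run against it.
const source = `
/**
 * Uploads a file to the server.
 * @param {File} file - the file selected by the user
 * @returns {Promise<string>} URL of the stored file
 */
async function uploadFile(file) { /* ... */ }
`;

// Simplified presence check: a JSDoc block opens with /** and closes with */
function hasJsDoc(content) {
  return content.includes('/**') && content.includes('*/');
}

console.log(hasJsDoc(source)); // → true
```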

---
description:
globs:
alwaysApply: true
---
# Feature Implementation Verifier
```yaml
name: Feature Implementation Verifier
description: Verifies that new features are properly implemented and tested
author: Cursor AI
version: 1.0.0
tags:
  - feature
  - implementation
  - verification
  - acceptance-criteria
activation:
  always: true
  event:
    - pull_request
    - command
triggers:
  - pull_request.created
  - pull_request.updated
  - pull_request.labeled:feature
  - command: "verify_feature"
```
## Rule Definition
This rule ensures that new feature implementations meet all requirements, follow best practices, and include appropriate tests.
## Verification Logic
```javascript
// Main function to verify feature implementation
function verifyFeatureImplementation(files, tests, requirements) {
const results = {
requirementsCoverage: checkRequirementsCoverage(files, requirements),
testCoverage: checkTestCoverage(files, tests),
codeQuality: checkCodeQuality(files),
documentation: checkDocumentation(files, requirements),
performance: checkPerformance(files),
score: 0,
issues: [],
recommendations: []
};
// Calculate overall score
results.score = calculateScore(results);
// Generate issues and recommendations
results.issues = identifyIssues(results);
results.recommendations = generateRecommendations(results.issues);
return {
...results,
summary: generateSummary(results)
};
}
// Check if all requirements are implemented
function checkRequirementsCoverage(files, requirements) {
const results = {
implementedRequirements: [],
missingRequirements: [],
implementationRate: 0
};
if (!requirements || requirements.length === 0) {
results.missingRequirements.push('requirements definition');
return results;
}
// For each requirement, check if it's implemented
for (const req of requirements) {
const isImplemented = files.some(file => fileImplementsRequirement(file, req));
if (isImplemented) {
results.implementedRequirements.push(req);
} else {
results.missingRequirements.push(req);
}
}
results.implementationRate = requirements.length > 0
? (results.implementedRequirements.length / requirements.length) * 100
: 0;
return results;
}
// Helper to check if a file implements a specific requirement
function fileImplementsRequirement(file, requirement) {
// This would contain complex analysis logic to match code to requirements
// For now, we'll use a simple text matching approach
return file.content.includes(requirement.id) ||
file.content.toLowerCase().includes(requirement.description.toLowerCase());
}
// Check if tests cover all the new functionality
function checkTestCoverage(files, tests) {
const results = {
testedFiles: [],
untestedFiles: [],
coverage: 0
};
if (!tests || tests.length === 0) {
files.forEach(file => {
if (shouldHaveTests(file)) {
results.untestedFiles.push(file.path);
}
});
return results;
}
// Check each file to see if it has test coverage
for (const file of files) {
// Skip files that don't require tests so they can't inflate the coverage ratio above 100%
if (!shouldHaveTests(file)) continue;
const hasTests = tests.some(test => testCoversFile(test, file));
if (hasTests) {
results.testedFiles.push(file.path);
} else {
results.untestedFiles.push(file.path);
}
}
const filesToTest = files.filter(file => shouldHaveTests(file)).length;
results.coverage = filesToTest > 0
? (results.testedFiles.length / filesToTest) * 100
: 100;
return results;
}
// Helper to determine if a test covers a specific file
function testCoversFile(test, file) {
// This would contain complex analysis to determine test coverage
// For now, we'll use a simple path matching approach
const filePath = file.path.replace(/\.(js|ts|jsx|tsx|java)$/, '');
const testPath = test.path;
return testPath.includes(filePath) ||
test.content.includes(file.path) ||
test.content.includes(filePath);
}
// Helper to determine if a file should have tests
function shouldHaveTests(file) {
// Skip certain files that don't need tests
const skipPaths = [
'app/client/public/',
'app/client/src/assets/',
'app/client/src/styles/',
'app/client/src/constants/',
'app/client/src/types/'
];
if (skipPaths.some(path => file.path.includes(path))) {
return false;
}
// Skip certain file types
const skipExtensions = ['.md', '.json', '.yml', '.yaml', '.svg', '.png', '.jpg'];
if (skipExtensions.some(ext => file.path.endsWith(ext))) {
return false;
}
return true;
}
// Check the code quality of the implementation
function checkCodeQuality(files) {
const results = {
qualityIssues: [],
issueCount: 0,
qualityScore: 100
};
// Check each file for quality issues
for (const file of files) {
const fileIssues = analyzeCodeQuality(file);
if (fileIssues.length > 0) {
results.qualityIssues.push({
file: file.path,
issues: fileIssues
});
results.issueCount += fileIssues.length;
results.qualityScore -= Math.min(fileIssues.length * 5, 20); // Max 20 points deduction per file
}
}
results.qualityScore = Math.max(0, results.qualityScore);
return results;
}
// Helper to analyze code quality in a file
function analyzeCodeQuality(file) {
const issues = [];
const content = file.content;
// Check for common code quality issues
if (/\.(js|jsx|ts|tsx)$/.test(file.path)) { // match extensions exactly; includes('.js') would also match ".json"
// Check for console.log statements
if (content.includes('console.log')) {
issues.push({
type: 'debugging',
line: findLineForPattern(content, 'console.log'),
description: 'Remove console.log statements before committing'
});
}
// Check for TODO comments
if (content.includes('TODO')) {
issues.push({
type: 'incomplete',
line: findLineForPattern(content, 'TODO'),
description: 'Resolve TODO comments before committing'
});
}
// Check for commented out code
if (content.match(/^\s*\/\/\s*(?:const|let|var|function|return|if)\b/m)) { // match code-like comments, not every comment
issues.push({
type: 'cleanliness',
line: findLineForPattern(content, '//'),
description: 'Remove commented out code before committing'
});
}
}
// Check for proper indentation and formatting
const lines = content.split('\n');
for (let i = 0; i < lines.length; i++) {
const line = lines[i];
if (line.length > 120) {
issues.push({
type: 'formatting',
line: i + 1,
description: 'Line exceeds 120 characters'
});
}
// Check for inconsistent indentation
if (i > 0 && line.match(/^\s+/) && lines[i-1].match(/^\s+/)) {
const currentIndent = line.match(/^\s+/)[0].length;
const prevIndent = lines[i-1].match(/^\s+/)[0].length;
if ((currentIndent - prevIndent) % 2 !== 0) {
issues.push({
type: 'formatting',
line: i + 1,
description: 'Inconsistent indentation'
});
}
}
}
return issues;
}
// Helper to find the line number for a pattern
function findLineForPattern(content, pattern) {
const lines = content.split('\n');
for (let i = 0; i < lines.length; i++) {
if (lines[i].includes(pattern)) {
return i + 1;
}
}
return 1;
}
// Check for appropriate documentation
function checkDocumentation(files, requirements) {
const results = {
documentedFiles: [],
undocumentedFiles: [],
documentationScore: 100
};
// Check each file for documentation
for (const file of files) {
if (shouldHaveDocumentation(file)) {
if (hasAdequateDocumentation(file)) {
results.documentedFiles.push(file.path);
} else {
results.undocumentedFiles.push(file.path);
results.documentationScore -= 10; // 10 points deduction per undocumented file
}
}
}
results.documentationScore = Math.max(0, results.documentationScore);
return results;
}
// Helper to determine if a file should have documentation
function shouldHaveDocumentation(file) {
// Public APIs, complex components, and services should have documentation
return file.path.includes('/api/') ||
file.path.includes('/services/') ||
file.path.includes('/components/') ||
file.path.endsWith('.java');
}
// Helper to check if a file has adequate documentation
function hasAdequateDocumentation(file) {
const content = file.content;
// Check for JSDoc, JavaDoc, or other documentation patterns
if (/\.(js|jsx|ts|tsx)$/.test(file.path)) {
return content.includes('/**') && content.includes('*/');
}
if (file.path.endsWith('.java')) {
return content.includes('/**') && content.includes('*/') && content.includes('@param');
}
// For other files, check for comment blocks
return content.includes('/*') && content.includes('*/');
}
// Check for performance implications
function checkPerformance(files) {
// This would have comprehensive performance analysis
// For now, return an empty array for performance issues
return {
performanceIssues: [],
issueCount: 0
};
}
// Calculate overall score based on all checks
function calculateScore(results) {
let score = 100;
// Deduct for missing requirements
if (results.requirementsCoverage.implementationRate < 100) {
score -= (100 - results.requirementsCoverage.implementationRate) * 0.3;
}
// Deduct for missing tests
if (results.testCoverage.coverage < 80) {
score -= (80 - results.testCoverage.coverage) * 0.3;
}
// Deduct for code quality issues
score -= (100 - results.codeQuality.qualityScore) * 0.2;
// Deduct for documentation issues
score -= (100 - results.documentation.documentationScore) * 0.2;
return Math.max(0, Math.round(score));
}
// Identify issues from all verification checks
function identifyIssues(results) {
const issues = [];
// Add missing requirements as issues
results.requirementsCoverage.missingRequirements.forEach(req => {
issues.push({
type: 'requirements',
severity: 'high',
message: `Missing implementation for requirement: ${req.id || req}`
});
});
// Add test coverage issues
if (results.testCoverage.untestedFiles.length > 0) {
issues.push({
type: 'testing',
severity: 'high',
message: `Missing tests for ${results.testCoverage.untestedFiles.length} files`
});
results.testCoverage.untestedFiles.forEach(file => {
issues.push({
type: 'testing',
severity: 'medium',
message: `No tests for file: ${file}`
});
});
}
// Add code quality issues
results.codeQuality.qualityIssues.forEach(fileIssue => {
fileIssue.issues.forEach(issue => {
issues.push({
type: 'code_quality',
severity: 'medium',
message: `${issue.description} in ${fileIssue.file} at line ${issue.line}`
});
});
});
// Add documentation issues
results.documentation.undocumentedFiles.forEach(file => {
issues.push({
type: 'documentation',
severity: 'medium',
message: `Missing or inadequate documentation in ${file}`
});
});
return issues;
}
// Generate recommendations based on identified issues
function generateRecommendations(issues) {
const recommendations = [];
// Group issues by type
const issuesByType = {};
issues.forEach(issue => {
if (!issuesByType[issue.type]) {
issuesByType[issue.type] = [];
}
issuesByType[issue.type].push(issue);
});
// Generate recommendations for requirements issues
if (issuesByType.requirements) {
recommendations.push({
type: 'requirements',
title: 'Complete the implementation of requirements',
steps: [
'Review the missing requirements and ensure they are implemented',
'Verify that the implementation matches the acceptance criteria',
'Update the code to address all missing requirements'
]
});
}
// Generate recommendations for testing issues
if (issuesByType.testing) {
recommendations.push({
type: 'testing',
title: 'Improve test coverage',
steps: [
'Add unit tests for untested files',
'Create integration tests where appropriate',
'Ensure all edge cases are covered in tests'
]
});
}
// Generate recommendations for code quality issues
if (issuesByType.code_quality) {
recommendations.push({
type: 'code_quality',
title: 'Address code quality issues',
steps: [
'Remove debugging code (console.log, TODO comments)',
'Fix formatting and indentation issues',
'Follow project coding standards and best practices'
]
});
}
// Generate recommendations for documentation issues
if (issuesByType.documentation) {
recommendations.push({
type: 'documentation',
title: 'Improve documentation',
steps: [
'Add JSDoc or JavaDoc comments to public APIs and classes',
'Document complex components and their usage',
'Ensure all services have proper documentation'
]
});
}
return recommendations;
}
// Generate a summary of the verification results
function generateSummary(results) {
const score = results.score;
let status = '';
if (score >= 90) {
status = 'EXCELLENT';
} else if (score >= 70) {
status = 'GOOD';
} else if (score >= 50) {
status = 'NEEDS IMPROVEMENT';
} else {
status = 'INCOMPLETE';
}
return {
score,
status,
issues: results.issues.length,
critical: results.issues.filter(issue => issue.severity === 'high').length,
recommendations: results.recommendations.length,
message: generateSummaryMessage(score, status, results)
};
}
// Generate a summary message based on results
function generateSummaryMessage(score, status, results) {
if (status === 'EXCELLENT') {
return 'Feature implementation meets or exceeds all requirements and standards. Good job!';
} else if (status === 'GOOD') {
return `Feature implementation is good overall but has ${results.issues.length} issues to address.`;
} else if (status === 'NEEDS IMPROVEMENT') {
const critical = results.issues.filter(issue => issue.severity === 'high').length;
return `Feature implementation needs significant improvement with ${critical} critical issues.`;
} else {
return 'Feature implementation is incomplete and does not meet minimum requirements.';
}
}
// Run on activation
function activate(context) {
// Register event handlers
context.on('pull_request.created', (event) => {
const files = context.getPullRequestFiles(event.pullRequest.id);
const tests = context.getPullRequestTests(event.pullRequest.id);
const requirements = context.getFeatureRequirements(event.pullRequest);
return verifyFeatureImplementation(files, tests, requirements);
});
context.on('pull_request.updated', (event) => {
const files = context.getPullRequestFiles(event.pullRequest.id);
const tests = context.getPullRequestTests(event.pullRequest.id);
const requirements = context.getFeatureRequirements(event.pullRequest);
return verifyFeatureImplementation(files, tests, requirements);
});
context.on('pull_request.labeled:feature', (event) => {
const files = context.getPullRequestFiles(event.pullRequest.id);
const tests = context.getPullRequestTests(event.pullRequest.id);
const requirements = context.getFeatureRequirements(event.pullRequest);
return verifyFeatureImplementation(files, tests, requirements);
});
context.registerCommand('verify_feature', (args) => {
const prId = args.pullRequest;
if (!prId) {
return {
status: "error",
message: "No pull request specified"
};
}
const files = context.getPullRequestFiles(prId);
const tests = context.getPullRequestTests(prId);
const requirements = context.getFeatureRequirements({ id: prId });
return verifyFeatureImplementation(files, tests, requirements);
});
}
// Export functions
module.exports = {
activate,
verifyFeatureImplementation,
checkRequirementsCoverage,
checkTestCoverage,
checkCodeQuality,
checkDocumentation,
checkPerformance
};
```
## When It Runs
This rule can be triggered:
- When a new feature pull request is created
- When a pull request is updated
- When a pull request is labeled with 'feature'
- When a developer runs the `verify_feature` command in Cursor
- Before merging a feature implementation
## Usage Example
1. Create a pull request for a new feature
2. Run `verify_feature --pullRequest=123` in Cursor
3. Review the verification results
4. Address any identified issues
5. Re-run verification to confirm all issues are resolved
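Interpreting the score is easiest with the thresholds from `generateSummary` in hand, restated here as a standalone helper:

```javascript
// The status thresholds used by generateSummary, restated for quick
// interpretation of verification scores.
function scoreToStatus(score) {
  if (score >= 90) return 'EXCELLENT';
  if (score >= 70) return 'GOOD';
  if (score >= 50) return 'NEEDS IMPROVEMENT';
  return 'INCOMPLETE';
}

console.log(scoreToStatus(85)); // → GOOD
```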
## Feature Implementation Checklist
### Requirements
- [ ] Understand the feature requirements and acceptance criteria
- [ ] Design a solution that meets all requirements
- [ ] Create a plan for implementing the feature
- [ ] Consider edge cases and potential issues
### Implementation
- [ ] Follow the project's coding standards and architecture
- [ ] Write clean, efficient, and maintainable code
- [ ] Handle errors and edge cases gracefully
- [ ] Ensure the feature integrates well with existing functionality
### Testing
- [ ] Write unit tests for all components
- [ ] Create integration tests for complex interactions
- [ ] Test across different environments if applicable
- [ ] Verify that the feature meets all acceptance criteria
### Review
- [ ] Self-review the code before submission
- [ ] Address feedback from automated checks
- [ ] Ensure documentation is complete and accurate
- [ ] Verify test coverage is adequate
## Example: Verifying Acceptance Criteria
For a file upload feature, the verifier would check for:
- UI components for selecting files
- Upload progress indicators
- Success and error states
- Backend API for handling file uploads
- File validation and error handling
- Tests for valid and invalid uploads
- Performance considerations for large files
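Criteria like these can be modelled as `{ id, description }` objects and checked with the same text-matching approach `fileImplementsRequirement` uses. A sketch with hypothetical requirement IDs and file contents:

```javascript
// Hypothetical acceptance criteria for the file upload feature.
const requirements = [
  { id: 'UPLOAD-1', description: 'show upload progress' },
  { id: 'UPLOAD-2', description: 'reject files over the size limit' }
];

// Hypothetical changed file from the pull request.
const file = {
  path: 'app/client/src/components/FileUpload.tsx',
  content: '// UPLOAD-1: progress bar wired to upload events'
};

// The same simple matching rule the verifier falls back to.
function fileImplementsRequirement(file, requirement) {
  return file.content.includes(requirement.id) ||
    file.content.toLowerCase().includes(requirement.description.toLowerCase());
}

const missing = requirements.filter(r => !fileImplementsRequirement(file, r));
console.log(missing.map(r => r.id)); // → [ 'UPLOAD-2' ]
```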

---
description:
globs:
alwaysApply: true
---
# Workflow Configuration Validator
```yaml
name: Workflow Configuration Validator
description: Validates GitHub workflow files before commits and pushes
author: Cursor AI
version: 1.0.0
tags:
  - ci
  - workflows
  - quality-checks
  - validation
activation:
  always: true
  events:
    - pre_commit
    - pre_push
    - command
triggers:
  - pre_commit
  - pre_push
  - command: "validate_workflows"
```
## Rule Definition
This rule ensures that GitHub workflow configuration files (especially `.github/workflows/quality-checks.yml`) are valid before allowing commits or pushes.
## Validation Logic
```javascript
const yaml = require('js-yaml');
const fs = require('fs');
const { execSync } = require('child_process');
/**
* Main function to validate GitHub workflow files
* @param {Object} context - The execution context
* @returns {Object} Validation results
*/
function validateWorkflows(context) {
const results = {
isValid: true,
errors: [],
warnings: []
};
// Primary focus: quality-checks.yml
const qualityChecksPath = '.github/workflows/quality-checks.yml';
try {
// Check if file exists
if (!fs.existsSync(qualityChecksPath)) {
results.errors.push(`${qualityChecksPath} file does not exist`);
results.isValid = false;
return results;
}
// Check if file is valid YAML
try {
const fileContents = fs.readFileSync(qualityChecksPath, 'utf8');
const parsedYaml = yaml.load(fileContents);
// Check for required fields in workflow
if (!parsedYaml.name) {
results.warnings.push(`${qualityChecksPath} is missing a name field`);
}
if (!parsedYaml.jobs || Object.keys(parsedYaml.jobs).length === 0) {
results.errors.push(`${qualityChecksPath} doesn't contain any jobs`);
results.isValid = false;
}
// Check for common GitHub Actions workflow validation
if (context.hasCommand('gh')) {
try {
// Use GitHub CLI to validate workflow if available
execSync(`gh workflow view ${qualityChecksPath} --json`, { stdio: 'pipe' });
} catch (error) {
results.errors.push(`GitHub CLI validation failed: ${error.message}`);
results.isValid = false;
}
} else {
// Basic structural validation if GitHub CLI is not available
const requiredKeys = ['on', 'jobs'];
for (const key of requiredKeys) {
if (!parsedYaml[key]) {
results.errors.push(`${qualityChecksPath} is missing required key: ${key}`);
results.isValid = false;
}
}
}
// Check for other workflows
const workflowsDir = '.github/workflows';
if (fs.existsSync(workflowsDir)) {
const workflowFiles = fs.readdirSync(workflowsDir)
.filter(file => file.endsWith('.yml') || file.endsWith('.yaml'));
// Validate all workflow files
for (const file of workflowFiles) {
if (file === 'quality-checks.yml') continue; // Already checked
const filePath = `${workflowsDir}/${file}`;
try {
const contents = fs.readFileSync(filePath, 'utf8');
yaml.load(contents); // Just check if it's valid YAML
} catch (e) {
results.errors.push(`${filePath} contains invalid YAML: ${e.message}`);
results.isValid = false;
}
}
}
} catch (e) {
results.errors.push(`Failed to parse ${qualityChecksPath}: ${e.message}`);
results.isValid = false;
}
} catch (error) {
results.errors.push(`General error validating workflows: ${error.message}`);
results.isValid = false;
}
return results;
}
/**
* Check if workflow files have been modified in the current changes
* @param {Object} context - The execution context
* @returns {boolean} Whether workflow files have been modified
*/
function haveWorkflowsChanged(context) {
try {
const gitStatus = execSync('git diff --name-only --cached', { encoding: 'utf8' });
const changedFiles = gitStatus.split('\n').filter(Boolean);
return changedFiles.some(file =>
file.startsWith('.github/workflows/') &&
(file.endsWith('.yml') || file.endsWith('.yaml'))
);
} catch (error) {
// If we can't determine if workflows changed, assume they did to be safe
return true;
}
}
/**
* Run the validation when triggered
* @param {Object} context - The execution context
* @returns {Object} The action result
*/
function onTrigger(context, event) {
// For pre-commit and pre-push, only validate if workflow files have changed
if ((event === 'pre_commit' || event === 'pre_push') && !haveWorkflowsChanged(context)) {
return {
status: 'success',
message: 'No workflow files changed, skipping validation'
};
}
const results = validateWorkflows(context);
if (!results.isValid) {
return {
status: 'failure',
message: 'Workflow validation failed',
details: results.errors.join('\n'),
warnings: results.warnings.join('\n')
};
}
return {
status: 'success',
message: 'All workflow files are valid',
warnings: results.warnings.length ? results.warnings.join('\n') : undefined
};
}
/**
* Register command and hooks
* @param {Object} context - The cursor context
*/
function activate(context) {
// Register pre-commit hook
context.on('pre_commit', (event) => {
return onTrigger(context, 'pre_commit');
});
// Register pre-push hook
context.on('pre_push', (event) => {
return onTrigger(context, 'pre_push');
});
// Register command for manual validation
context.registerCommand('validate_workflows', () => {
return onTrigger(context, 'command');
});
}
module.exports = {
activate,
validateWorkflows
};
```
## Usage
This rule runs automatically on pre-commit and pre-push events. You can also manually trigger it with the command `validate_workflows`.
### Pre-Commit Hook
When committing changes, this rule will:
1. Check if any workflow files were modified
2. If so, validate that `.github/workflows/quality-checks.yml` is properly formatted
3. Block the commit if validation fails
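When the GitHub CLI is unavailable, the validator falls back to a structural check; the core of that fallback can be sketched against a plain object standing in for the parsed YAML:

```javascript
// Sketch of the structural fallback: a workflow must define both its
// 'on' triggers and at least one job.
function missingWorkflowKeys(workflow) {
  return ['on', 'jobs'].filter(key => !workflow[key]);
}

// Hypothetical parsed workflow with no jobs section.
const parsed = { name: 'Quality Checks', on: { push: {} } };
console.log(missingWorkflowKeys(parsed)); // → [ 'jobs' ]
```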
### Examples
**Valid Workflow:**
```yaml
name: Quality Checks
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run linters
        run: |
          npm ci
          npm run lint
```
**Invalid Workflow (Will Fail Validation):**
```yaml
on:
  push:
jobs:
  lint:
    # Missing "runs-on" field
    steps:
      - uses: actions/checkout@v3
      - name: Run linters
        run: npm run lint
```

## .cursor/settings.json
{
"codebase": {
"structure": {
"frontend": "app/client",
"backend": "app/server",
"infrastructure": "deploy",
"workflows": ".github/workflows",
"scripts": "scripts"
},
"standards": {
"frontend": {
"testPatterns": ["*.test.ts", "*.test.tsx", "cypress/integration/**/*.spec.ts"],
"codeStyle": "airbnb",
"linters": ["eslint", "prettier"]
},
"backend": {
"testPatterns": ["**/*Test.java", "**/*Tests.java"],
"codeStyle": "google",
"linters": ["spotless"]
}
}
},
"development": {
"workflow": {
"bugFix": [
"Understand the bug report and reproduce locally",
"Identify root cause through code exploration",
"Write failing test(s) that demonstrate the bug",
"Implement fix that makes tests pass",
"Ensure all existing tests pass",
"Verify fix addresses original issue",
"Run pre-commit checks",
"Confirm GitHub workflows would pass"
],
"feature": [
"Understand requirements and acceptance criteria",
"Design implementation approach",
"Create test cases (unit, integration, e2e as appropriate)",
"Implement feature",
"Verify feature meets acceptance criteria",
"Ensure performance and efficiency",
"Ensure code follows project standards",
"Run pre-commit checks",
"Confirm GitHub workflows would pass"
]
},
"qualityChecks": {
"frontend": [
"Run unit tests: yarn run test:unit",
"Run type checking: yarn run check-types",
"Run linting: yarn run lint",
"Run cypress tests: npx cypress run",
"Check for cyclic dependencies: CI workflow"
],
"backend": [
"Run unit tests",
"Run integration tests",
"Run spotless check",
"Verify no resource leaks"
],
"general": [
"Verify no sensitive data is included",
"Ensure proper error handling",
"Check performance impact",
"Ensure backwards compatibility"
]
},
"gitWorkflow": {
"branchNaming": {
"bugFix": "fix/fix-name",
"feature": "feature/feature-name"
},
"commitConventions": "Descriptive commit messages with issue reference",
"semanticPR": {
"enabled": true,
"titleFormat": "type(scope): description",
"validTypes": [
"feat", "fix", "docs", "style", "refactor",
"perf", "test", "build", "ci", "chore", "revert"
],
"scopeRequired": false,
"titleValidation": true,
"commitsValidation": false
}
}
},
"incrementalLearning": {
"enabled": true,
"patterns": [
"**/*.java",
"**/*.ts",
"**/*.tsx",
"**/*.yml",
"**/*.yaml",
"**/*.md",
"**/*.json"
],
"storage": {
"codePatterns": true,
"testPatterns": true,
"buildPatterns": true,
"workflowPatterns": true
}
},
"testing": {
"frontend": {
"unit": {
"framework": "jest",
"command": "yarn run test:unit"
},
"integration": {
"framework": "cypress",
"command": "npx cypress run --spec <spec path> --browser chrome"
}
},
"backend": {
"unit": {
"framework": "junit",
"patterns": ["**/*Test.java"]
},
"integration": {
"framework": "junit",
"patterns": ["**/*IntegrationTest.java"]
}
}
},
"preCommit": {
"hooks": [
"Type checking",
"Linting",
"Unit tests",
"No sensitive data"
]
},
"cicd": {
"workflows": [
"client-build.yml",
"server-build.yml",
"ci-test-limited.yml",
"client-unit-tests.yml",
"server-integration-tests.yml"
]
}
}
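The `branchNaming` templates above suggest a simple convention check. A sketch — the exact matching rule is an assumption, since the config only gives the `fix/fix-name` and `feature/feature-name` templates:

```javascript
// Assumed interpretation of the branchNaming convention: a known type
// prefix followed by a kebab-case name.
function branchMatchesConvention(branch) {
  return /^(fix|feature)\/[a-z0-9][a-z0-9-]*$/.test(branch);
}

console.log(branchMatchesConvention('fix/login-crash')); // → true
console.log(branchMatchesConvention('bugfix/login'));    // → false
```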